This dataset is used for retrieval-augmented generation (RAG): text is extracted and cleaned from Wikipedia dumps with the WikiExtractor tool to construct a knowledge base.
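As a rough sketch of how such a knowledge base can be produced and read, the snippet below assumes WikiExtractor's `--json` output format (one JSON object per line with `id`, `url`, `title`, and `text` fields); the dump filename, output directory, and helper function are illustrative placeholders, not part of this dataset.

```python
import json
from pathlib import Path

# Illustrative only: first extract a Wikipedia dump with WikiExtractor, e.g.
#   python -m wikiextractor.WikiExtractor enwiki-latest-pages-articles.xml.bz2 \
#       --json -o extracted/
# (the dump name and output directory are placeholders).

def load_knowledge_base(extracted_dir: str):
    """Yield cleaned Wikipedia articles from WikiExtractor's JSON output."""
    # WikiExtractor writes files named wiki_00, wiki_01, ... under
    # subdirectories of the output folder.
    for wiki_file in sorted(Path(extracted_dir).rglob("wiki_*")):
        with open(wiki_file, encoding="utf-8") as f:
            for line in f:
                article = json.loads(line)
                # With --json, each record carries "id", "url", "title", "text".
                if article["text"].strip():  # skip empty redirect stubs
                    yield article

if __name__ == "__main__":
    for article in load_knowledge_base("extracted"):
        print(article["title"])
        break
```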
Citations
@article{chen2025geofactory,
  title={GeoFactory: An LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks},
  author={Chen, Zhou and Wang, Xiao and Zhang, Xinan and Lin, Ming and Liao, Yuanhong and Li, Juanzi and Bai, Yuqi},
  journal={Big Earth Data},
  year={2025},
  month={May},
  pages={1--33},
  doi={10.1080/20964471.2025.2506291}
}