TinyAya LID — Models, Eval Data & Training Artifacts
Artifacts for the Contrastive UniLID project: language identification using LLM tokenizer vocabularies (TinyAya 261k BPE→Unigram), trained on GlotLID-C, evaluated on CommonLID.
Source code: github.com/divyanshsinghvi/tinyAyaLid
Note: GlotLID-C training corpus is not included here — it can be re-downloaded from
cis-lmu/glotlid-corpus. This repo only ships the eval data, models, training weights, and LLM cache.
Structure
.
├── models/ # Trained .unilid model files + eval JSONs
│ ├── tinyaya_v3_200k/ # Best TinyAya model — 200k samples/lang
│ ├── tinyaya_v3_100k/ # TinyAya, 100k samples/lang
│ ├── tinyaya_soft_full/ # TinyAya, full GlotLID-C corpus
│ ├── mistral_v3_200k/ # Mistral-Nemo 131k tokenizer comparison
│ ├── scratch_v3_200k/ # Scratch 100k vocab comparison
│ ├── commonlid_20pct/ # Trained on 20% CommonLID split (TinyAya)
│ ├── commonlid_50pct/ # Trained on 50% CommonLID split (TinyAya)
│ ├── commonlid_20pct_mistral/ # 20% CommonLID split (Mistral)
│ ├── commonlid_50pct_mistral/ # 50% CommonLID split (Mistral)
│ ├── commonlid_20pct_scratch/ # 20% CommonLID split (Scratch)
│ └── commonlid_50pct_scratch/ # 50% CommonLID split (Scratch)
│
├── data/
│ ├── commonlid/ # CommonLID evaluation corpus (fastText format)
│ │ ├── commonlid_full.txt # Full test set (373k samples, 109 tags)
│ │ ├── commonlid_train.txt # Train split
│ │ ├── commonlid_test.txt # Test split
│ │ ├── commonlid_50pct_test.txt # 50% split
│ │ ├── commonlid_80pct_test.txt # 80% split
│ │ ├── commonlid_50perlang.txt # 50 samples/lang subsample
│ │ ├── commonlid_150perlang.txt # 150 samples/lang subsample
│ │ ├── commonlid_200perlang.txt # 200 samples/lang subsample
│ │ ├── commonlid_20pct_by_lang/ # Per-language files (20pct split)
│ │ └── commonlid_50pct_by_lang/ # Per-language files (50pct split)
│ │
│ └── misc/ # Small training experiment files
│ ├── train_quick.txt
│ ├── train_quick_test.txt
│ ├── train_1k.txt
│ ├── train_1k_test.txt
│ └── train_test.txt
│
├── training_weights/ # Per-language unigram log-prob dists from soft EM (compressed)
│ └── *.tar.gz # One tarball per experiment config
│
└── cache/ # Cached LLM API responses (two-stage eval)
└── cache.tar.gz
Data Formats
- fastText format (__label__<lang_Script> <text>): all CommonLID files
- Plain text (one sentence per line): misc training files
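The fastText label format can be split into (label, text) pairs with a few lines. A minimal sketch; `parse_fasttext_line` is an illustrative helper, not part of the repo:

```python
def parse_fasttext_line(line: str) -> tuple[str, str]:
    """Split a '__label__<lang_Script> <text>' line into (tag, text)."""
    label, _, text = line.strip().partition(" ")
    if not label.startswith("__label__"):
        raise ValueError(f"not a fastText-format line: {line!r}")
    # Drop the standard fastText label prefix to get the bare language tag.
    return label[len("__label__"):], text

tag, text = parse_fasttext_line("__label__eng_Latn Hello world")
# tag == "eng_Latn", text == "Hello world"
```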
Languages
- CommonLID eval: 109 language tags (373,230 samples in commonlid_full.txt)
- Alias mapping (CommonLID → model individual code):
  ara→arb, aze→azj, bik→bcl, est→ekk, lav→lvs, mlg→plt, msa→zsm, orm→gaz, swa→swh, tgl→fil, uzb→uzn, zho→cmn
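The alias table above can be applied as a plain dict remap before comparing CommonLID tags against model predictions. A sketch; `ALIASES` and `normalize_tag` are illustrative names, not identifiers from the repo:

```python
# Alias pairs copied from the table above: CommonLID code -> model code.
ALIASES = {
    "ara": "arb", "aze": "azj", "bik": "bcl", "est": "ekk",
    "lav": "lvs", "mlg": "plt", "msa": "zsm", "orm": "gaz",
    "swa": "swh", "tgl": "fil", "uzb": "uzn", "zho": "cmn",
}

def normalize_tag(tag: str) -> str:
    """Remap the language part of a 'lang_Script' tag, keeping the script."""
    lang, sep, script = tag.partition("_")
    return ALIASES.get(lang, lang) + sep + script

# normalize_tag("ara_Arab") -> "arb_Arab"; unaliased tags pass through unchanged.
```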
Reproducing Training
To retrain a model, download GlotLID-C separately:

```python
from datasets import load_dataset

ds = load_dataset("cis-lmu/glotlid-corpus")
```

Then run train.py from the source repo using the desired tokenizer.
Contributors
Divyansh Singhvi, Megha Agarwal. Mentored by Julia Kreutzer.