Dataset Viewer
Auto-converted to Parquet
| Column | Type | Length / values |
|---|---|---|
| url | string | length 61 |
| repository_url | string | 1 class |
| labels_url | string | length 75 |
| comments_url | string | length 70 |
| events_url | string | length 68 |
| html_url | string | length 51 |
| id | int64 | 1.14B-2.92B |
| node_id | string | length 18 |
| number | int64 | 3.75k-7.46k |
| title | string | length 1-290 |
| user | dict | |
| labels | list | length 0-4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0-3 |
| milestone | dict | |
| comments | sequence | length 0-30 |
| created_at | timestamp[ms] | |
| updated_at | timestamp[ms] | |
| closed_at | timestamp[ms] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| body | string | length 1-47.9k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | length 70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | null | |
| pull_request | null | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/7456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7456/comments
https://api.github.com/repos/huggingface/datasets/issues/7456/events
https://github.com/huggingface/datasets/issues/7456
2,922,676,278
I_kwDODunzps6uNIA2
7,456
.add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/109490785?v=4", "events_url": "https://api.github.com/users/MapleBloom/events{/privacy}", "followers_url": "https://api.github.com/users/MapleBloom/followers", "following_url": "https://api.github.com/users/MapleBloom/following{/other_user}", "gists_url": "https://api.github.com/users/MapleBloom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MapleBloom", "id": 109490785, "login": "MapleBloom", "node_id": "U_kgDOBoayYQ", "organizations_url": "https://api.github.com/users/MapleBloom/orgs", "received_events_url": "https://api.github.com/users/MapleBloom/received_events", "repos_url": "https://api.github.com/users/MapleBloom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MapleBloom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MapleBloom/subscriptions", "type": "User", "url": "https://api.github.com/users/MapleBloom", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I can fix this.\nIt's mainly because faiss-gpu requires python<=3.10 but the default python version in colab is 3.11. We just have to downgrade the CPython version down to 3.10 and it should work fine.\n", "I think I just had no chance to meet with faiss-cpu.\nIt could be import problem? \n_has_faiss gets its value at the beginning of datasets/search.\nI tried to call object before import faiss, so _has_faiss took False. And never updated later. ", "Yes you can't meet the requirements because faiss-cpu runs only on\r\npython3.10 and lower but the default version for colab is python3.11 which\r\nresults in pip not being able to find wheels for faiss-cpu with python3.11.\r\n\r\nOn Mon, 17 Mar, 2025, 3:56 pm MapleBloom, ***@***.***> wrote:\r\n\r\n> I think I just had no chance to meet with faiss-cpu.\r\n> It could be import problem?\r\n> _has_faiss gets its value at the beginning of datasets/search.\r\n> I tried to call object before import faiss, so _has_faiss took False. And\r\n> never updated later.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMBVD7LEDDUGALOTVN32U2PMBAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRYHE3TKNRXGI>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n> [image: MapleBloom]*MapleBloom* left a comment (huggingface/datasets#7456)\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>\r\n>\r\n> I think I just had no chance to meet with faiss-cpu.\r\n> It could be import problem?\r\n> _has_faiss gets its value at the beginning of datasets/search.\r\n> I tried to call object before import faiss, so _has_faiss took False. And\r\n> never updated later.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMBVD7LEDDUGALOTVN32U2PMBAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRYHE3TKNRXGI>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "> you can't meet the requirements\n\nIt is not the case (or I didn't reach this point) because the same code in notebook\n```importlib.util.find_spec(\"faiss\")```\nfinds faiss. I've mention it.\nI think the problem is in the very moment when _has_faiss takes its value and never try again. \n(or it couldn't find the path that was easily found when started from my code)", "When you run the first cell containing pip install faiss-cpu does it\r\ninstall it?\r\n\r\nOn Mon, 17 Mar, 2025, 8:01 pm MapleBloom, ***@***.***> wrote:\r\n\r\n> you can't meet the requirements\r\n>\r\n> It is not the case (or I didn't reach this point) because the same code in\r\n> notebook\r\n> importlib.util.find_spec(\"faiss\")\r\n> finds faiss. 
I've mention it.\r\n> I think the problem is in the very moment when _has_faiss takes its value\r\n> and never try again.\r\n> (or it couldn't find the path that was easily found when started from my\r\n> code)\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMCCE6BPZCOVAWXKIY32U3MFVAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRZG4ZTONBRGQ>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n> [image: MapleBloom]*MapleBloom* left a comment (huggingface/datasets#7456)\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>\r\n>\r\n> you can't meet the requirements\r\n>\r\n> It is not the case (or I didn't reach this point) because the same code in\r\n> notebook\r\n> importlib.util.find_spec(\"faiss\")\r\n> finds faiss. I've mention it.\r\n> I think the problem is in the very moment when _has_faiss takes its value\r\n> and never try again.\r\n> (or it couldn't find the path that was easily found when started from my\r\n> code)\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMCCE6BPZCOVAWXKIY32U3MFVAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRZG4ZTONBRGQ>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "> When you run the first cell containing pip install faiss-cpu does it\n> install it?\n> […](#)\n\nYes. It was installed succesfully. \nMethods of datasets library that depends on _has_faiss constant didn't start to work." ]
2025-03-16T00:51:49
2025-03-16T08:34:40
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug At Google Colab ```!pip install faiss-cpu``` works ```import faiss``` no error but ```embeddings_dataset.add_faiss_index(column='embeddings')``` returns ``` [/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in init(self, device, string_factory, metric_type, custom_index) 247 self.faiss_index = custom_index 248 if not _has_faiss: --> 249 raise ImportError( 250 "You must install Faiss to use FaissIndex. To do so you can run conda install -c pytorch faiss-cpu or conda install -c pytorch faiss-gpu. " 251 "A community supported package is also available on pypi: pip install faiss-cpu or pip install faiss-gpu. " ``` because ```_has_faiss = importlib.util.find_spec("faiss") is not None``` at the beginning of ```datasets/search.py``` returns ```False``` when the same code at colab notebook returns ```ModuleSpec(name='faiss', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7b7851449f50>, origin='/usr/local/lib/python3.11/dist-packages/faiss/init.py', submodule_search_locations=['/usr/local/lib/python3.11/dist-packages/faiss'])``` But ``` import datasets datasets.search._has_faiss ``` at ```colab notebook``` also returns ```False``` The same story with ```_has_elasticsearch``` ### Steps to reproduce the bug 1. Follow https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt at Google Colab 2. till ```embeddings_dataset.add_faiss_index(column='embeddings')``` 3. ```embeddings_dataset.add_elasticsearch_index(column='embeddings')``` 4. https://colab.research.google.com/drive/1h2cjuiClblqzbNQgrcoLYOC8zBqTLLcv#scrollTo=3ddzRp72auOF ### Expected behavior I've only started Tutorial and don't know exactly. But something tells me that ```embeddings_dataset.add_faiss_index(column='embeddings')``` should work without ```Import Error``` ### Environment info Google Colab notebook with default config
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7456/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7455/comments
https://api.github.com/repos/huggingface/datasets/issues/7455/events
https://github.com/huggingface/datasets/issues/7455
2,921,933,250
I_kwDODunzps6uKSnC
7,455
Problems with local dataset after upgrade from 3.3.2 to 3.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4", "events_url": "https://api.github.com/users/andjoer/events{/privacy}", "followers_url": "https://api.github.com/users/andjoer/followers", "following_url": "https://api.github.com/users/andjoer/following{/other_user}", "gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andjoer", "id": 60151338, "login": "andjoer", "node_id": "MDQ6VXNlcjYwMTUxMzM4", "organizations_url": "https://api.github.com/users/andjoer/orgs", "received_events_url": "https://api.github.com/users/andjoer/received_events", "repos_url": "https://api.github.com/users/andjoer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andjoer/subscriptions", "type": "User", "url": "https://api.github.com/users/andjoer", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! I just released 3.4.1 with a fix, let me know if it's working now !" ]
2025-03-15T09:22:50
2025-03-15T09:23:55
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I was not able to open a local saved dataset anymore that was created using an older datasets version after the upgrade yesterday from datasets 3.3.2 to 3.4.0 The traceback is ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/arrow.py", line 67, in _generate_tables batches = pa.ipc.open_stream(f) File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 190, in open_stream return RecordBatchStreamReader(source, options=options, File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 52, in __init__ self._open(source, options=options, memory_pool=memory_pool) File "pyarrow/ipc.pxi", line 1006, in pyarrow.lib._RecordBatchStreamReader._open File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Expected to read 538970747 metadata bytes, but only read 2126 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1855, in _prepare_split_single for _, table in generator: File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/arrow.py", line 69, in _generate_tables reader = pa.ipc.open_file(f) File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 234, in open_file return RecordBatchFileReader( File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 110, in __init__ self._open(source, footer_offset=footer_offset, File "pyarrow/ipc.pxi", line 1090, in pyarrow.lib._RecordBatchFileReader._open File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Not an Arrow file ``` ### Steps to reproduce the bug Load a dataset from a local folder with ``` dataset = load_dataset( args.train_data_dir, cache_dir=args.cache_dir, ) ``` as it is done for example in the training script for SD3 controlnet. This is the minimal script to test it: ``` from datasets import load_dataset def main(): dataset = load_dataset( "local_dataset", ) print(dataset) print("Sample data:", dataset["train"][0]) if __name__ == "__main__": main() ```` ### Expected behavior Work in 3.4.0 like in 3.3.2 ### Environment info - `datasets` version: 3.4.0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.29.3 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7455/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7455/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7449/comments
https://api.github.com/repos/huggingface/datasets/issues/7449/events
https://github.com/huggingface/datasets/issues/7449
2,916,235,092
I_kwDODunzps6t0jdU
7,449
Cannot load data with different schemas from different parquet files
{ "avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4", "events_url": "https://api.github.com/users/li-plus/events{/privacy}", "followers_url": "https://api.github.com/users/li-plus/followers", "following_url": "https://api.github.com/users/li-plus/following{/other_user}", "gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/li-plus", "id": 39846316, "login": "li-plus", "node_id": "MDQ6VXNlcjM5ODQ2MzE2", "organizations_url": "https://api.github.com/users/li-plus/orgs", "received_events_url": "https://api.github.com/users/li-plus/received_events", "repos_url": "https://api.github.com/users/li-plus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li-plus/subscriptions", "type": "User", "url": "https://api.github.com/users/li-plus", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! `load_dataset` expects all the data_files to have the same schema.\n\nMaybe you can try enforcing certain `features` using:\n\n```python\nfeatures = Features({\"conversations\": {'content': Value('string'), 'role': Value('string',)}})\nds = load_dataset(..., features=features)\n```", "Thanks! It works if I explicitly specify all nested fields of the data." ]
2025-03-13T08:14:49
2025-03-13T11:19:06
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Cannot load samples with optional fields from different files. The schema cannot be correctly derived. ### Steps to reproduce the bug When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`. ```python import pandas as pd from datasets import load_dataset data = [ {'conversations': {'role': 'user', 'content': 'hello'}}, {'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}} ] df = pd.DataFrame(data) df.to_parquet('data.parquet') dataset = load_dataset('parquet', data_files='data.parquet', split='train') print(dataset.features) ``` The schema can be derived. `some_extra_field` is set to None for the first row where it is absent. ``` {'conversations': {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None), 'some_extra_field': Value(dtype='string', id=None)}} ``` However, when I separate the samples into different files, it cannot be loaded. ```python import pandas as pd from datasets import load_dataset data1 = [{'conversations': {'role': 'user', 'content': 'hello'}}] pd.DataFrame(data1).to_parquet('data1.parquet') data2 = [{'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}] pd.DataFrame(data2).to_parquet('data2.parquet') dataset = load_dataset('parquet', data_files=['data1.parquet', 'data2.parquet'], split='train') print(dataset.features) ``` Traceback: ``` Traceback (most recent call last): File "/home/tiger/.local/lib/python3.9/site-packages/datasets/builder.py", line 1854, in _prepare_split_single for _, table in generator: File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table pa_table = table_cast(pa_table, self.info.features.arrow_schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast return cast_table_to_schema(table, schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema arrays = [ File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp> cast_array_to_feature( File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2108, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}") TypeError: Couldn't cast array of type struct<content: string, role: string, some_extra_field: string> to {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)} ``` ### Expected behavior Correctly load data with optional fields from different parquet files. ### Environment info - `datasets` version: 3.3.2 - Platform: Linux-5.10.135.bsk.4-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.28.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7449/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7448/comments
https://api.github.com/repos/huggingface/datasets/issues/7448/events
https://github.com/huggingface/datasets/issues/7448
2,916,025,762
I_kwDODunzps6tzwWi
7,448
`datasets.disable_caching` doesn't work
{ "avatar_url": "https://avatars.githubusercontent.com/u/35629974?v=4", "events_url": "https://api.github.com/users/UCC-team/events{/privacy}", "followers_url": "https://api.github.com/users/UCC-team/followers", "following_url": "https://api.github.com/users/UCC-team/following{/other_user}", "gists_url": "https://api.github.com/users/UCC-team/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/UCC-team", "id": 35629974, "login": "UCC-team", "node_id": "MDQ6VXNlcjM1NjI5OTc0", "organizations_url": "https://api.github.com/users/UCC-team/orgs", "received_events_url": "https://api.github.com/users/UCC-team/received_events", "repos_url": "https://api.github.com/users/UCC-team/repos", "site_admin": false, "starred_url": "https://api.github.com/users/UCC-team/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UCC-team/subscriptions", "type": "User", "url": "https://api.github.com/users/UCC-team", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "cc" ]
2025-03-13T06:40:12
2025-03-13T06:40:12
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function. I tried `datasets.disable_caching`, but it doesn't work!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7448/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7447/comments
https://api.github.com/repos/huggingface/datasets/issues/7447/events
https://github.com/huggingface/datasets/issues/7447
2,915,233,248
I_kwDODunzps6twu3g
7,447
Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4356534?v=4", "events_url": "https://api.github.com/users/dhruvdcoder/events{/privacy}", "followers_url": "https://api.github.com/users/dhruvdcoder/followers", "following_url": "https://api.github.com/users/dhruvdcoder/following{/other_user}", "gists_url": "https://api.github.com/users/dhruvdcoder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dhruvdcoder", "id": 4356534, "login": "dhruvdcoder", "node_id": "MDQ6VXNlcjQzNTY1MzQ=", "organizations_url": "https://api.github.com/users/dhruvdcoder/orgs", "received_events_url": "https://api.github.com/users/dhruvdcoder/received_events", "repos_url": "https://api.github.com/users/dhruvdcoder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dhruvdcoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhruvdcoder/subscriptions", "type": "User", "url": "https://api.github.com/users/dhruvdcoder", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting ! Maybe we should store the epoch in the state_dict, and then when the dataset is iterated on again after setting a new epoch it should restart from scratch instead of resuming ? wdyt ?", "But why does this only happen when `persistent_workers=True`? I would expect it to work correctly even without storing the epoch number in the state_dict of the iterable dataset. ", "I think persistent_workers=False simply ignores the dataset state_dict when it starts a new epoch, that's why the issue doesn't appear in that case", "I opened https://github.com/huggingface/datasets/pull/7451 to fix the issue, let me know if it works for you", "I just released `datasets` 3.4 that includes the fix :)\n\nPS: in your script you probably want to set the epoch like this, otherwise it's still set to 0 after the first epoch:\n\n```diff\n if state_dict is None:\n- ds.set_epoch(epoch)\n epoch += 1\n+ ds.set_epoch(epoch)\n```" ]
2025-03-12T21:41:05
2025-03-14T17:26:59
2025-03-14T10:50:10
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)` the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches are left. Then after resuming, the rest of epochs (2 and 3) only iterate through these 3 batches. ### Steps to reproduce the bug Run the following script with and with PERSISTENT_WORKERS=true. ```python # !/usr/bin/env python3 # torch==2.5.1 # datasets==3.3.2 # torchdata>=0.9.0 import datasets import pprint from torchdata.stateful_dataloader import StatefulDataLoader import os PERSISTENT_WORKERS = ( os.environ.get("PERSISTENT_WORKERS", "False").lower() == "true" ) # PERSISTENT_WORKERS = True # Incorrect resume # ds = datasets.load_from_disk("dataset").to_iterable_dataset(num_shards=4) def generator(): for i in range(128): yield {"x": i} ds = datasets.Dataset.from_generator( generator, features=datasets.Features({"x": datasets.Value("int32")}) ).to_iterable_dataset(num_shards=4) dl = StatefulDataLoader( ds, batch_size=2, num_workers=2, persistent_workers=PERSISTENT_WORKERS ) global_step = 0 epoch = 0 ds_state_dict = None state_dict = None resumed = False while True: if epoch >= 3: break if state_dict is not None: dl.load_state_dict(state_dict) state_dict = None ds_state_dict = None resumed = True print("resumed") for i, batch in enumerate(dl): print(f"epoch: {epoch}, global_step: {global_step}, batch: {batch}") global_step += 1 # consume datapoint # simulate error if global_step == 124 and not resumed: ds_state_dict = ds.state_dict() state_dict = dl.state_dict() print("checkpoint") print("ds_state_dict") pprint.pprint(ds_state_dict) print("dl_state_dict") pprint.pprint(state_dict) break if state_dict is None: ds.set_epoch(epoch) epoch += 1 ``` The script checkpoints when there are three batches left in the second epoch. After resuming, only the last three batches are repeated in the rest of the epochs. If it helps, following are the two state_dicts for the dataloader save at the same step with the two settings. The left one is for `PERSISTENT_WORKERS=False` ![Image](https://github.com/user-attachments/assets/c97d6502-d7bd-4ef4-ae2d-66fe1a9732b1) ### Expected behavior All the elements in the dataset should be iterated through in the epochs following the one where we resumed. The expected behavior can be seen by setting `PERSISTENT_WORKERS=False`. ### Environment info torch==2.5.1 datasets==3.3.2 torchdata>=0.9.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7447/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7446/comments
https://api.github.com/repos/huggingface/datasets/issues/7446/events
https://github.com/huggingface/datasets/issues/7446
2,913,050,552
I_kwDODunzps6toZ-4
7,446
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
{ "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rangehow", "id": 88258534, "login": "rangehow", "node_id": "MDQ6VXNlcjg4MjU4NTM0", "organizations_url": "https://api.github.com/users/rangehow/orgs", "received_events_url": "https://api.github.com/users/rangehow/received_events", "repos_url": "https://api.github.com/users/rangehow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "type": "User", "url": "https://api.github.com/users/rangehow", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-03-12T07:48:37
2025-03-12T07:48:37
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug A dict with its keys are all str but get following error ```python test_data=[{'input_ids':[1,2,3],'labels':[[Counter({2:1})]]}] dataset = datasets.Dataset.from_list(test_data) ``` ```bash pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' ``` ### Steps to reproduce the bug . ### Expected behavior . ### Environment info datasets 3.3.2
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7446/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7446/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7444/comments
https://api.github.com/repos/huggingface/datasets/issues/7444/events
https://github.com/huggingface/datasets/issues/7444
2,911,202,445
I_kwDODunzps6thWyN
7,444
Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP.
{ "avatar_url": "https://avatars.githubusercontent.com/u/4356534?v=4", "events_url": "https://api.github.com/users/dhruvdcoder/events{/privacy}", "followers_url": "https://api.github.com/users/dhruvdcoder/followers", "following_url": "https://api.github.com/users/dhruvdcoder/following{/other_user}", "gists_url": "https://api.github.com/users/dhruvdcoder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dhruvdcoder", "id": 4356534, "login": "dhruvdcoder", "node_id": "MDQ6VXNlcjQzNTY1MzQ=", "organizations_url": "https://api.github.com/users/dhruvdcoder/orgs", "received_events_url": "https://api.github.com/users/dhruvdcoder/received_events", "repos_url": "https://api.github.com/users/dhruvdcoder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dhruvdcoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhruvdcoder/subscriptions", "type": "User", "url": "https://api.github.com/users/dhruvdcoder", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-03-11T16:34:39
2025-03-11T16:36:01
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I have a large dataset that I shared into 1024 shards and save on the disk during pre-processing. During training, I load the dataset using load_from_disk() and convert it into an iterable dataset, shuffle it and split the shards to different DDP nodes using the recommended method. However, when the training is resumed mid-epoch, I get thousands of identical warning messages: ``` Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. ``` ### Steps to reproduce the bug 1. Run a multi-node training job using the following python script and interrupt the training after a few seconds to save a mid-epoch checkpoint. ```python #!/usr/bin/env python import os import time from typing import Dict, List import torch import lightning as pl from torch.utils.data import DataLoader from datasets import Dataset from datasets.distributed import split_dataset_by_node import datasets from transformers import AutoTokenizer from more_itertools import flatten, chunked from torchdata.stateful_dataloader import StatefulDataLoader from lightning.pytorch.callbacks.on_exception_checkpoint import ( OnExceptionCheckpoint, ) datasets.logging.set_verbosity_debug() def dummy_generator(): # Generate 60 examples: integers from $0$ to $59$ # 64 sequences of different lengths dataset = [ list(range(3, 10)), list(range(10, 15)), list(range(15, 21)), list(range(21, 27)), list(range(27, 31)), list(range(31, 36)), list(range(36, 45)), list(range(45, 50)), ] for i in range(8): for j, ids in enumerate(dataset): yield {"token_ids": [idx + i * 50 for idx in ids]} def group_texts( examples: Dict[str, List[List[int]]], block_size: int, eos_token_id: int, bos_token_id: int, pad_token_id: int, ) -> Dict[str, List[List[int]]]: real_block_size = block_size - 2 # make space for bos and eos # colapse the sequences into a single list of tokens and then create blocks of real_block_size input_ids = [] attention_mask = [] for block in chunked(flatten(examples["token_ids"]), real_block_size): s = [bos_token_id] + list(block) + [eos_token_id] ls = len(s) attn = [True] * ls s += [pad_token_id] * (block_size - ls) attn += [False] * (block_size - ls) input_ids.append(s) attention_mask.append(attn) return {"input_ids": input_ids, "attention_mask": attention_mask} def collate_fn(batch): return { "input_ids": torch.tensor( [item["input_ids"] for item in batch], dtype=torch.long ), "attention_mask": torch.tensor( [item["attention_mask"] for item in batch], dtype=torch.long ), } class DummyModule(pl.LightningModule): def __init__(self): super().__init__() # A dummy linear layer (not used for actual computation) self.layer = torch.nn.Linear(1, 1) self.ds = None self.prepare_data_per_node = False def on_train_start(self): # This hook is called once training begins on each process. print(f"[Rank {self.global_rank}] Training started.", flush=True) self.data_file = open(f"data_{self.global_rank}.txt", "w") def on_train_end(self): self.data_file.close() def training_step(self, batch, batch_idx): # Print batch information to verify data loading. 
time.sleep(5) # print("batch", batch, flush=True) print( f"\n[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}", flush=True, ) self.data_file.write( f"[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}\n" ) # Compute a dummy loss (here, simply a constant tensor) loss = torch.tensor(0.0, requires_grad=True) return loss def on_train_epoch_start(self): epoch = self.trainer.current_epoch print( f"[Rank {self.global_rank}] Training epoch {epoch} started.", flush=True, ) self.data_file.write( f"[Rank {self.global_rank}] Training epoch {epoch} started.\n" ) def configure_optimizers(self): # Return a dummy optimizer. return torch.optim.SGD(self.parameters(), lr=0.001) class DM(pl.LightningDataModule): def __init__(self): super().__init__() self.ds = None self.prepare_data_per_node = False def set_epoch(self, epoch: int): self.ds.set_epoch(epoch) def prepare_data(self): # download the dataset dataset = Dataset.from_generator(dummy_generator) # save the dataset dataset.save_to_disk("dataset", num_shards=4) def setup(self, stage: str): # load the dataset ds = datasets.load_from_disk("dataset").to_iterable_dataset( num_shards=4 ) ds = ds.map( group_texts, batched=True, batch_size=5, fn_kwargs={ "block_size": 5, "eos_token_id": 1, "bos_token_id": 0, "pad_token_id": 2, }, remove_columns=["token_ids"], ).shuffle(seed=42, buffer_size=8) ds = split_dataset_by_node( ds, rank=self.trainer.global_rank, world_size=self.trainer.world_size, ) self.ds = ds def train_dataloader(self): print( f"[Rank {self.trainer.global_rank}] Preparing train_dataloader...", flush=True, ) rank = self.trainer.global_rank print( f"[Rank {rank}] Global rank: {self.trainer.global_rank}", flush=True, ) world_size = self.trainer.world_size print(f"[Rank {rank}] World size: {world_size}", flush=True) return StatefulDataLoader( self.ds, batch_size=2, num_workers=2, collate_fn=collate_fn, drop_last=True, persistent_workers=True, ) if __name__ == "__main__": print("Starting Lightning training", flush=True) # Optionally, print some SLURM environment info for debugging. print(f"SLURM_NNODES: {os.environ.get('SLURM_NNODES', '1')}", flush=True) # Determine the number of nodes from SLURM (defaulting to 1 if not set) num_nodes = int(os.environ.get("SLURM_NNODES", "1")) model = DummyModule() dm = DM() on_exception = OnExceptionCheckpoint( dirpath="checkpoints", filename="on_exception", ) # Configure the Trainer to use distributed data parallel (DDP). trainer = pl.Trainer( accelerator="gpu" if torch.cuda.is_available() else "cpu", devices=1, strategy=( "ddp" if num_nodes > 1 else "auto" ), # Use DDP strategy for multi-node training. num_nodes=num_nodes, max_epochs=2, logger=False, enable_checkpointing=True, num_sanity_val_steps=0, enable_progress_bar=False, callbacks=[on_exception], ) # resume (uncomment to resume) # trainer.fit(model, datamodule=dm, ckpt_path="checkpoints/on_exception.ckpt") # train trainer.fit(model, datamodule=dm) ``` ```bash #!/bin/bash #SBATCH --job-name=pl_ddp_test #SBATCH --nodes=2 # Adjust number of nodes as needed #SBATCH --ntasks-per-node=1 # One GPU (process) per node #SBATCH --cpus-per-task=3 # At least as many dataloader workers as required #SBATCH --gres=gpu:1 # Request one GPU per node #SBATCH --time=00:10:00 # Job runtime (adjust as needed) #SBATCH --partition=gpu-preempt # Partition or queue name #SBATCH -o script.out # Disable Python output buffering. 
export PYTHONUNBUFFERED=1 echo "SLURM job starting on $(date)" echo "Running on nodes: $SLURM_NODELIST" echo "Current directory: $(pwd)" ls -l # Launch the script using srun so that each process starts the Lightning module. srun script.py ``` 2. Uncomment the "resume" line (second to last) and comment the original `trainer.fit` call (last line). It will produce the following log. ``` [Rank 0] Preparing train_dataloader... [Rank 0] Global rank: 0 [Rank 0] World size: 2 [Rank 1] Preparing train_dataloader... [Rank 1] Global rank: 1 [Rank 1] World size: 2 Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Assigning 2 shards (or data sources) of the dataset to each node. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#0 dataloader worker#1, ': Finished iterating over 1/1 shards. node#0 dataloader worker#0, ': Finished iterating over 1/1 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. [Rank 0] Training started. [Rank 0] Training epoch 0 started. [Rank 0] Training epoch 1 started. Assigning 2 shards (or data sources) of the dataset to each node. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards. node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. 
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#0 dataloader worker#1, ': Finished iterating over 1/1 shards. node#0 dataloader worker#0, ': Finished iterating over 1/1 shards. `Trainer.fit` stopped: `max_epochs=2` reached. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. node#1 dataloader worker#0, ': Finished iterating over 1/1 shards. [Rank 1] Training started. [Rank 1] Training epoch 0 started. [Rank 1] Training epoch 1 started. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#1 dataloader worker#0, ': Finished iterating over 1/1 shards. node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. ``` I'm also attaching the relevant state_dict to make sure that the state is being checkpointed as expected. 
``` {'_iterator_finished': True, '_snapshot': {'_last_yielded_worker_id': 1, '_main_snapshot': {'_IterableDataset_len_called': None, '_base_seed': 3992758080362545099, '_index_sampler_state': {'samples_yielded': 64}, '_num_workers': 2, '_sampler_iter_state': None, '_sampler_iter_yielded': 32, '_shared_seed': None}, '_snapshot_step': 32, '_worker_snapshots': {'worker_0': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0, 'shard_idx': 1}, 'num_examples_since_previous_state': 0, 'previous_state': {'shard_example_idx': 0, 'shard_idx': 1}, 'previous_state_example_idx': 33}, 'fetcher_state': {'dataset_iter_state': None, 'fetcher_ended': False}, 'worker_id': 0}, 'worker_1': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0, 'shard_idx': 1}, 'num_examples_since_previous_state': 0, 'previous_state': {'shard_example_idx': 0, 'shard_idx': 1}, 'previous_state_example_idx': 33}, 'fetcher_state': {'dataset_iter_state': None, 'fetcher_ended': False}, 'worker_id': 1}}}, '_steps_since_snapshot': 0} ``` ### Expected behavior Since I'm following all the recommended steps, I don't expect to see any warning when resuming. Am I doing something wrong? Also, can someone explain why I'm seeing 20 identical messages in the log in this reproduction setting? I'm trying to understand why I see thousands of these messages with the actual dataset. One more surprising thing I noticed in the logs is the change in a number of shards per worker. In the following messages, the denominator changes from 2 to 1. ``` node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. ... node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. ``` ### Environment info python: 3.11.10 datasets: 3.3.2 lightning: 2.3.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7444/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7444/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7443/comments
https://api.github.com/repos/huggingface/datasets/issues/7443/events
https://github.com/huggingface/datasets/issues/7443
2,908,585,656
I_kwDODunzps6tXX64
7,443
index error when num_shards > len(dataset)
{ "avatar_url": "https://avatars.githubusercontent.com/u/17934496?v=4", "events_url": "https://api.github.com/users/eminorhan/events{/privacy}", "followers_url": "https://api.github.com/users/eminorhan/followers", "following_url": "https://api.github.com/users/eminorhan/following{/other_user}", "gists_url": "https://api.github.com/users/eminorhan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eminorhan", "id": 17934496, "login": "eminorhan", "node_id": "MDQ6VXNlcjE3OTM0NDk2", "organizations_url": "https://api.github.com/users/eminorhan/orgs", "received_events_url": "https://api.github.com/users/eminorhan/received_events", "repos_url": "https://api.github.com/users/eminorhan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eminorhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eminorhan/subscriptions", "type": "User", "url": "https://api.github.com/users/eminorhan", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Actually, looking at the code a bit more carefully, maybe an even better solution is to explicitly set `num_shards=len(self)` somewhere inside both `push_to_hub()` and `save_to_disk()` when these functions are invoked with `num_shards > len(dataset)`." ]
2025-03-10T22:40:59
2025-03-10T23:43:08
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`. I frequently work with datasets with a small number of rows where each row is pretty large, so I often encounter this issue, where the function runs until the shard index in `ds.shard(num_shards, indx)` goes out of bounds. Ideally, a `ValueError` should be raised before reaching this point (i.e. as soon as `ds.push_to_hub()` or `ds.save_to_disk()` is invoked with `num_shards > len(dataset)`). It seems that adding something like: ```python if len(self) < num_shards: raise ValueError(f"num_shards ({num_shards}) must be smaller than or equal to the number of rows in the dataset ({len(self)}). Please either reduce num_shards or increase max_shard_size to make sure num_shards <= len(dataset).") ``` to the beginning of the definition of the `ds.shard()` function [here](https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/arrow_dataset.py#L4728) would deal with this issue for both `ds.push_to_hub()` and `ds.save_to_disk()`, but I'm not exactly sure if this is the best place to raise the `ValueError` (it seems that a more correct way to do it would be to write separate checks for `ds.push_to_hub()` and `ds.save_to_disk()`). I'd be happy to submit a PR if you think something along these lines would be acceptable.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7443/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7442/comments
https://api.github.com/repos/huggingface/datasets/issues/7442/events
https://github.com/huggingface/datasets/issues/7442
2,905,543,017
I_kwDODunzps6tLxFp
7,442
Flexible Loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/13894030?v=4", "events_url": "https://api.github.com/users/dipta007/events{/privacy}", "followers_url": "https://api.github.com/users/dipta007/followers", "following_url": "https://api.github.com/users/dipta007/following{/other_user}", "gists_url": "https://api.github.com/users/dipta007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dipta007", "id": 13894030, "login": "dipta007", "node_id": "MDQ6VXNlcjEzODk0MDMw", "organizations_url": "https://api.github.com/users/dipta007/orgs", "received_events_url": "https://api.github.com/users/dipta007/received_events", "repos_url": "https://api.github.com/users/dipta007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dipta007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dipta007/subscriptions", "type": "User", "url": "https://api.github.com/users/dipta007", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?", "> Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?\n\nThat would be perfect if not at least a flexible loader." ]
2025-03-09T16:55:03
2025-03-13T11:15:02
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset? It can be something as simple as this one: ``` def load_hf_dataset(path_or_name): if os.path.exists(path_or_name): return load_from_disk(path_or_name) else: return load_dataset(path_or_name) ``` ### Motivation This can be done inside the user codebase, too, but in my experience, it becomes repetitive code. ### Your contribution I can open a pull request.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7442/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7441/comments
https://api.github.com/repos/huggingface/datasets/issues/7441/events
https://github.com/huggingface/datasets/issues/7441
2,904,702,329
I_kwDODunzps6tIj15
7,441
`drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi @memray, I’d like to help fix the issue with `drop_last_batch` not working when `num_workers > 1`. I’ll investigate and propose a solution. Thanks!\n", "Thank you very much for offering to help! I also noticed a problem related to a previous issue and left a comment [here](https://github.com/huggingface/datasets/issues/6565#issuecomment-2708169303) (the code checks the validity before certain columns removed). Can you take a look as well?" ]
2025-03-08T10:28:44
2025-03-09T21:27:33
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

See the script below: `drop_last_batch=True` is defined using map() for each dataset. The last batch for each dataset is expected to be dropped, id 21-25. The code behaves as expected when num_workers=0 or 1. When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 are sampled.

### Steps to reproduce the bug

```
from datasets import Dataset
from datasets import interleave_datasets
from torch.utils.data import DataLoader


def convert_to_str(batch, dataset_name):
    batch['a'] = [f"{dataset_name}-{e}" for e in batch['a']]
    return batch


def gen1():
    for ii in range(1, 25):
        yield {"a": ii}


def gen2():
    for ii in range(1, 25):
        yield {"a": ii}


# https://github.com/huggingface/datasets/issues/6565
if __name__ == '__main__':
    dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=2)
    dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=2)
    dataset1 = dataset1.map(lambda x: convert_to_str(x, dataset_name="a"), batched=True, batch_size=10, drop_last_batch=True)
    dataset2 = dataset2.map(lambda x: convert_to_str(x, dataset_name="b"), batched=True, batch_size=10, drop_last_batch=True)
    interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")

    print(f"num_workers=0")
    loader = DataLoader(interleaved, batch_size=5, num_workers=0)
    i = 0
    for b in loader:
        print(i, b['a'])
        i += 1
    print('=-' * 20)

    print(f"num_workers=1")
    loader = DataLoader(interleaved, batch_size=5, num_workers=1)
    i = 0
    for b in loader:
        print(i, b['a'])
        i += 1
    print('=-' * 20)

    print(f"num_workers=2")
    loader = DataLoader(interleaved, batch_size=5, num_workers=2)
    i = 0
    for b in loader:
        print(i, b['a'])
        i += 1
    print('=-' * 20)

    print(f"num_workers=3")
    loader = DataLoader(interleaved, batch_size=5, num_workers=3)
    i = 0
    for b in loader:
        print(i, b['a'])
        i += 1
```

output is:

```
num_workers=0
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13']
5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15']
6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18']
7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=1
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13']
5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15']
6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18']
7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=2
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15']
2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17']
4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20']
6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22']
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
num_workers=3
Too many dataloader workers: 3 (max is dataset.num_shards=2). Stopping 1 dataloader workers.
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3']
1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15']
2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5']
3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17']
4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8']
5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20']
6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10']
7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22']
```

### Expected behavior

`'a-21', 'b-21', 'a-22', 'b-22'` should be dropped

### Environment info

- `datasets` version: 3.3.2
- Platform: Linux-5.15.0-1056-aws-x86_64-with-glibc2.31
- Python version: 3.10.16
- `huggingface_hub` version: 0.28.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7441/timeline
null
null
null
null
false
End of preview.

Dataset Card for GitHub Issues

Dataset Description

Dataset Summary

GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
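As a concrete illustration of the semantic-search use case, the sketch below embeds each issue and indexes the embeddings with FAISS. It is a minimal sketch, not an official workflow: the repository id `your-username/github-issues`, the `sentence-transformers/all-MiniLM-L6-v2` embedding model, and the assumption that each row exposes `title` and `body` columns are all placeholders to adapt to the actual dataset.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer  # assumed embedding library

# Placeholder repo id -- substitute the actual Hub id of this dataset.
issues = load_dataset("your-username/github-issues", split="train")

# Assumed embedding model; any sentence-embedding model works similarly.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def embed(batch):
    # Concatenate title and body (body can be None for some issues).
    texts = [f"{t} {b or ''}" for t, b in zip(batch["title"], batch["body"])]
    batch["embeddings"] = model.encode(texts).tolist()
    return batch

issues = issues.map(embed, batched=True, batch_size=32)

# Requires `pip install faiss-cpu` (or faiss-gpu).
issues.add_faiss_index(column="embeddings")

query = model.encode("drop_last_batch ignored when num_workers > 1")
scores, hits = issues.get_nearest_examples("embeddings", query, k=5)
```

A similar pipeline, with the issues' label names binarized into multi-hot vectors, would be a natural starting point for the multilabel text classification use case.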

Supported Tasks and Leaderboards

For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their Hugging Face implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the task-category-tag with an appropriate other:other-task-name).

  • task-category-tag: The dataset can be used to train a model for [TASK NAME], which consists of [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.

Languages

Provide a brief overview of the languages represented in the dataset. Describe relevant details about the specifics of the language, such as whether it is social media text, African American English, ...

When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.

Dataset Structure

Data Instances

Provide a JSON-formatted example and a brief description of a typical instance in the dataset. If available, provide a link to further examples.

{
  'example_field': ...,
  ...
}
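
For this dataset specifically, an instance might look roughly like the following. This is an illustrative sketch assembled from the preview above, not a verbatim record: the field names follow the GitHub REST API issue schema, the values are abridged, and several columns are omitted.

```python
{
  'url': 'https://api.github.com/repos/huggingface/datasets/issues/7441',
  'title': 'Example issue title',
  'body': '### Describe the bug\n...',
  'labels': [{'name': 'bug'}],
  'state': 'open',
  'comments': ['Thanks for reporting, taking a look!'],
  'created_at': '2025-03-08T10:28:44',
  'author_association': 'NONE',
  'is_pull_request': False
}
```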

Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.

Data Fields

List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.

  • example_field: description of example_field

Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app; you will then only need to refine the generated descriptions.
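
Until the field list above is filled in, the schema can also be inspected directly from the loaded dataset. A minimal sketch (the repository id is a placeholder, as in the earlier example):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset.
issues = load_dataset("your-username/github-issues", split="train")

# Print each column name together with its Arrow-backed feature type.
for name, feature in issues.features.items():
    print(f"{name}: {feature}")
```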

Data Splits

Describe and name the splits in the dataset if there are more than one.

Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.

Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:

                          Train    Valid    Test
Input Sentences
Average Sentence Length
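
If the dataset is published as a single split, held-out splits for the table above could be derived as in the sketch below; the 80/10/10 proportions and the seed are arbitrary assumptions, not properties of this dataset.

```python
from datasets import DatasetDict, load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset.
issues = load_dataset("your-username/github-issues", split="train")

# Carve out 20% for evaluation, then split that half-and-half
# into validation and test (roughly 80/10/10 overall).
train_rest = issues.train_test_split(test_size=0.2, seed=42)
valid_test = train_rest["test"].train_test_split(test_size=0.5, seed=42)

splits = DatasetDict({
    "train": train_rest["train"],
    "validation": valid_test["train"],
    "test": valid_test["test"],
})
print({name: len(ds) for name, ds in splits.items()})
```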

Dataset Creation

Curation Rationale

What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?

Source Data

This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)

Initial Data Collection and Normalization

Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.

If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.

If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
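
The collection pipeline for this dataset is not documented here. As a rough illustration only, issues and pull requests could be pulled from the public GitHub REST API along the following lines; the pagination limit and the absence of rate-limit handling are simplifications.

```python
import requests

def fetch_issues(repo="huggingface/datasets", max_pages=2, token=None):
    """Fetch issues (and pull requests) for a repository via the GitHub REST API."""
    headers = {"Authorization": f"token {token}"} if token else {}
    issues = []
    for page in range(1, max_pages + 1):
        response = requests.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": "all", "per_page": 100, "page": page},
            headers=headers,
        )
        response.raise_for_status()
        batch = response.json()
        if not batch:  # no more pages
            break
        issues.extend(batch)
    return issues
```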

Who are the source language producers?

State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.

If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 on the use of identity categories as variables, particularly gender.

Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.

Describe other people represented or mentioned in the data. Where possible, link to references for the information.

Annotations

If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.

Annotation process

If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.

Who are the annotators?

If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.

Describe the people or systems who originally created the annotations and their selection criteria if applicable.

If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 on the use of identity categories as variables, particularly gender.

Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.

Personal and Sensitive Information

State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 on the use of identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).

State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).

If efforts were made to anonymize the data, describe the anonymization process.
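
No anonymization process is documented for this dataset. If one were applied, a minimal pass might mask e-mail addresses and @-mentions in the issue text, as in the sketch below; the regular expressions and replacement tokens are illustrative choices, not what was actually done.

```python
import re
from datasets import load_dataset

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
MENTION_RE = re.compile(r"@[A-Za-z0-9-]+")

def anonymize(batch):
    # Mask e-mail addresses first, then @-mentions, with neutral tokens.
    batch["body"] = [
        MENTION_RE.sub("<USER>", EMAIL_RE.sub("<EMAIL>", text or ""))
        for text in batch["body"]
    ]
    return batch

# Placeholder repo id -- substitute the actual Hub id of this dataset.
issues = load_dataset("your-username/github-issues", split="train")
issues = issues.map(anonymize, batched=True)
```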

Considerations for Using the Data

Social Impact of Dataset

Please discuss some of the ways you believe the use of this dataset will impact society.

The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.

Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.

Discussion of Biases

Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.

For Wikipedia text, see for example Dinan et al. 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al. 2020 for a more general discussion of the topic.

If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.

Other Known Limitations

If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.

Additional Information

Dataset Curators

List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.

Licensing Information

Provide the license and link to the license webpage if available.
