| Column | Type | Lengths / values |
|---|---|---|
| url | string | length 61-61 |
| repository_url | string | 1 value |
| labels_url | string | length 75-75 |
| comments_url | string | length 70-70 |
| events_url | string | length 68-68 |
| html_url | string | length 51-51 |
| id | int64 | 1.14B-2.92B |
| node_id | string | length 18-18 |
| number | int64 | 3.75k-7.46k |
| title | string | length 1-290 |
| user | dict | |
| labels | list | length 0-4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0-3 |
| milestone | dict | |
| comments | list | length 0-30 |
| created_at | timestamp[ms] | |
| updated_at | timestamp[ms] | |
| closed_at | timestamp[ms] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| body | string | length 1-47.9k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | length 70-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| draft | null | |
| pull_request | null | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/7167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7167/comments
https://api.github.com/repos/huggingface/datasets/issues/7167/events
https://github.com/huggingface/datasets/issues/7167
2,546,708,014
I_kwDODunzps6Xy64u
7,167
Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, batch_size=16, new_fingerprint=new_fingerprint)\r\n```" ]
2024-09-25T01:39:51
2024-09-30T05:28:15
2024-09-30T05:28:04
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

```
Map:   6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
  File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module>
    main(args)
  File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main
    train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single
    writer.write_batch(batch)
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch
    self.write_table(pa_table, writer_batch_size)
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table
    pa_table = pa_table.combine_chunks()
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks
  File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher
```

### Steps to reproduce the bug

The dataset has no problem training on the sd1.5 controlnet train script.

### Expected behavior

Script not randomly erroring with the error above.

### Environment info

- `datasets` version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1

Training on an A100.
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7167/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7167/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7164/comments
https://api.github.com/repos/huggingface/datasets/issues/7164/events
https://github.com/huggingface/datasets/issues/7164
2,544,757,297
I_kwDODunzps6Xreox
7,164
fsspec.exceptions.FSTimeoutError when downloading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38216460?v=4", "events_url": "https://api.github.com/users/timonmerk/events{/privacy}", "followers_url": "https://api.github.com/users/timonmerk/followers", "following_url": "https://api.github.com/users/timonmerk/following{/other_user}", "gists_url": "https://api.github.com/users/timonmerk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timonmerk", "id": 38216460, "login": "timonmerk", "node_id": "MDQ6VXNlcjM4MjE2NDYw", "organizations_url": "https://api.github.com/users/timonmerk/orgs", "received_events_url": "https://api.github.com/users/timonmerk/received_events", "repos_url": "https://api.github.com/users/timonmerk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timonmerk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timonmerk/subscriptions", "type": "User", "url": "https://api.github.com/users/timonmerk", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\n\r\nIn the meantime I can only recommend to try again later :/", "Ok, still many thanks!", "I'm also getting this same error but for `CSTR-Edinburgh/vctk`, so I don't think it's the remote host that's timing out, since I also time out at exactly 5 minutes. It seems there is a universal fsspec timeout that's getting hit starting in v3.", "in v3 we cleaned the download parts of the library to make it more robust for HF downloads and to simplify support of script-based datasets. As a side effect it's not the same code that is used for other hosts, maybe time out handling changed. Anyway it should be possible to tweak fsspec to use retries\r\n\r\nFor example using [aiohttp_retry](https://github.com/inyutin/aiohttp_retry) maybe (haven't tried) ?\r\n\r\n```python\r\nimport fsspec\r\nfrom aiohttp_retry import RetryClient\r\n\r\nfsspec.filesystem(\"http\")._session = RetryClient()\r\n```\r\n\r\nrelated topic : https://github.com/huggingface/datasets/issues/7175", "Adding a timeout argument to the `fs.get_file` call in `fsspec_get` in `datasets/utils/file_utils.py` might fix this ([source code](https://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L330)):\r\n\r\n```python\r\nfs.get_file(path, temp_file.name, callback=callback, timeout=3600)\r\n```\r\n\r\nSetting `timeout=1` fails after about one second, so setting it to 3600 should give us 1h. Havn't really tested this though. I'm also not sure what implications this has and if it causes errors for other `fs` implementations/configurations.\r\n\r\nThis is using `datasets==3.0.1` and Python 3.11.6.\r\n\r\n---\r\n\r\nEdit: This doesn't seem to change the timeout time, but add a second timeout counter (probably in `fsspec/asyn.py/sync`). So one can reduce the time for downloading like this, but not expand.\r\n\r\n---\r\n\r\nEdit 2: `fs` is of type `fsspec.implementations.http.HTTPFileSystem` which initializes a `aiohttp.ClientSession` using `client_kwargs`. We can pass these when calling `load_dataset`.\r\n\r\n**TLDR; This fixes it:**\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```", "I've handled the issue like this to ensure smoother downloads when using the `datasets` library. \nIf modifying the library is not too inconvenient, this approach could be a good (but tentative) solution.\n\n### Changes Made\n\nModified `datasets.utils.file_utils.fsspec_get` to handle storage options and set a timeout:\n\n```python\ndef fsspec_get(url, temp_file, storage_options=None, desc=None, disable_tqdm=False):\n\n # ---> [ADD]\n if storage_options is None:\n storage_options = {}\n if \"client_kwargs\" not in storage_options:\n storage_options[\"client_kwargs\"] = {}\n storage_options[\"client_kwargs\"][\"timeout\"] = aiohttp.ClientTimeout(total=3600)\n # <---\n\n # The rest of the original code remains unchanged" ]
2024-09-24T08:45:05
2025-01-14T09:48:23
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data.

### Steps to reproduce the bug

```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```

The output is as follows:

> Downloading data:  61%|██████████████▋  | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
>     result[0] = await coro
>                 ^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
>     chunk = await r.content.read(chunk_size)
>             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
>     await self._wait("read")
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
>     with self._timer:
>          ^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
>     raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
>     datasets.load_dataset("librispeech_asr", "clean")
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
>     builder_instance.download_and_prepare(
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
>     self._download_and_prepare(
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
>     super()._download_and_prepare(
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
>     split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
>                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
>     archive_path = dl_manager.download(_DL_URLS[self.config.name])
>                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
>     downloaded_path_or_paths = map_nested(
>                                ^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
>     _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
>     return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
>            ^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
>     self._download_single(url_or_filename, download_config=download_config)
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
>     out = cached_path(url_or_filename, download_config=download_config)
>           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
>     output_path = get_from_cache(
>                   ^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
>     fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
>     fs.get_file(path, temp_file.name, callback=callback)
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
>     return sync(self.loop, func, *args, **kwargs)
>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
>     raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data:  61%|██████████████▋  | 3.92G/6.39G [05:00<03:09, 13.0MB/s]

### Expected behavior

Complete the download

### Environment info

Python version 3.12.6

Dependencies:

> dependencies = [
>     "accelerate>=0.34.2",
>     "datasets[audio]>=3.0.0",
>     "ipython>=8.18.1",
>     "librosa>=0.10.2.post1",
>     "torch>=2.4.1",
>     "torchaudio>=2.4.1",
>     "transformers>=4.44.2",
> ]

MacOS 14.6.1 (23G93)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7164/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7164/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7163/comments
https://api.github.com/repos/huggingface/datasets/issues/7163/events
https://github.com/huggingface/datasets/issues/7163
2,542,361,234
I_kwDODunzps6XiVqS
7,163
Set explicit seed in iterable dataset ddp shuffling example
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "thanks for reporting !" ]
2024-09-23T11:34:06
2024-09-24T14:40:15
2024-09-24T14:40:15
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset the ddp example shuffles without seeding

```python
from datasets.distributed import split_dataset_by_node

ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(buffer_size=10_000)  # will shuffle the shards order and use a shuffle buffer when you start iterating
ids = split_dataset_by_node(ds, world_size=8, rank=0)  # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
dataloader = torch.utils.data.DataLoader(ids, num_workers=4)  # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating

for example in ids:
    pass
```

This code would - I think - raise an error due to the lack of an explicit seed:
https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711

### Steps to reproduce the bug

Run example code

### Expected behavior

Add explicit seeding to example code

### Environment info

latest datasets
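For reference, a minimal sketch of the same example with an explicit seed passed to `shuffle`, which is what the issue asks the docs to show. The toy `Dataset` is an assumption standing in for the real `ds`, and `split_dataset_by_node` is given the iterable `ids` rather than `ds`, which appears to be the docs' intent:

```python
import datasets
import torch.utils.data
from datasets.distributed import split_dataset_by_node

# hypothetical toy dataset standing in for the `ds` used in the docs example
ds = datasets.Dataset.from_dict({"x": list(range(1024))})

ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(seed=42, buffer_size=10_000)   # explicit seed: every rank shuffles the shard order the same way
ids = split_dataset_by_node(ids, world_size=8, rank=0)   # this node keeps 512 / 8 = 64 shards
dataloader = torch.utils.data.DataLoader(ids, num_workers=4)

for example in dataloader:
    pass
```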
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7163/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7163/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7161/comments
https://api.github.com/repos/huggingface/datasets/issues/7161/events
https://github.com/huggingface/datasets/issues/7161
2,541,971,931
I_kwDODunzps6Xg2nb
7,161
JSON lines with empty struct raise ArrowTypeError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-23T08:48:56
2024-09-25T04:43:44
2024-09-23T11:30:07
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order

See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5

> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64>

Related to:
- #7159
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7161/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7161/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7159/comments
https://api.github.com/repos/huggingface/datasets/issues/7159/events
https://github.com/huggingface/datasets/issues/7159
2,541,865,613
I_kwDODunzps6XgcqN
7,159
JSON lines with missing struct fields raise TypeError: Couldn't cast array
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wikipedia dataset locally but when loading I have the same issue.\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr\")\r\n```\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<content_url: string, width: int64, height: int64, alternative_text: string>\r\nto\r\n{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n```\r\nMy version of datasets is 3.0.1" ]
2024-09-23T07:57:58
2024-10-21T08:07:07
2024-09-23T11:09:18
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
JSON lines with missing struct fields raise TypeError: Couldn't cast array of type. See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 One would expect that the struct missing fields are added with null values.
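A minimal sketch of the kind of input that seems to trigger this, reusing the image-struct fields quoted in the comment above; the file names, column name `image`, and values are illustrative, and whether the error reproduces exactly depends on how the files are chunked by the `json` builder. The related empty-struct issue above is the extreme case of the same mismatch:

```python
import json
from datasets import load_dataset

# one file where the struct lacks `alternative_text`, one where it is present
partial = {"image": {"content_url": "http://example.com/a.png", "width": 1, "height": 1}}
full = {"image": {"content_url": "http://example.com/b.png", "width": 2, "height": 2, "alternative_text": "b"}}

with open("partial.jsonl", "w") as f:
    f.write(json.dumps(partial) + "\n")
with open("full.jsonl", "w") as f:
    f.write(json.dumps(full) + "\n")

# expected: missing struct fields filled with nulls;
# reported on affected versions: TypeError "Couldn't cast array of type ..."
ds = load_dataset("json", data_files=["partial.jsonl", "full.jsonl"])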
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7159/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7159/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7156/comments
https://api.github.com/repos/huggingface/datasets/issues/7156/events
https://github.com/huggingface/datasets/issues/7156
2,539,360,617
I_kwDODunzps6XW5Fp
7,156
interleave_datasets resets shuffle state
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "It also does not preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`" ]
2024-09-20T17:57:54
2024-09-20T17:57:54
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

```
import datasets
import torch.utils.data


def gen(shards):
    yield {"shards": shards}


def main():
    dataset = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={'shards': list(range(25))}
    )
    dataset = dataset.shuffle(buffer_size=1)
    dataset = datasets.interleave_datasets(
        [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
    )
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=8,
    )
    for i, batch in enumerate(dataloader):
        print(batch)
        if i >= 10:
            break


if __name__ == "__main__":
    main()
```

### Steps to reproduce the bug

Run the script, it will output

```
{'shards': [tensor([ 0,  8, 16, 24,  0,  8, 16, 24])]}
{'shards': [tensor([ 1,  9, 17,  1,  9, 17,  1,  9])]}
{'shards': [tensor([ 2, 10, 18,  2, 10, 18,  2, 10])]}
{'shards': [tensor([ 3, 11, 19,  3, 11, 19,  3, 11])]}
{'shards': [tensor([ 4, 12, 20,  4, 12, 20,  4, 12])]}
{'shards': [tensor([ 5, 13, 21,  5, 13, 21,  5, 13])]}
{'shards': [tensor([ 6, 14, 22,  6, 14, 22,  6, 14])]}
{'shards': [tensor([ 7, 15, 23,  7, 15, 23,  7, 15])]}
{'shards': [tensor([ 0,  8, 16, 24,  0,  8, 16, 24])]}
{'shards': [tensor([17,  1,  9, 17,  1,  9, 17,  1])]}
{'shards': [tensor([18,  2, 10, 18,  2, 10, 18,  2])]}
```

### Expected behavior

The shards should be shuffled.

### Environment info

- `datasets` version: 3.0.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.25.0
- PyArrow version: 17.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
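A sketch of the workaround suggested in the maintainer comment above: apply `shuffle` after `interleave_datasets` instead of before, so the shuffle state is not dropped. The generator mirrors the reproduction script; the seed and the tiny buffer size are illustrative choices, not part of the original report:

```python
import datasets


def gen(shards):
    yield {"shards": shards}


dataset = datasets.IterableDataset.from_generator(
    gen, gen_kwargs={"shards": list(range(25))}
)
mixed = datasets.interleave_datasets(
    [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
# shuffling after interleaving, per the comment above, keeps the shuffle in effect
mixed = mixed.shuffle(seed=42, buffer_size=1)
```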
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7156/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7156/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7155/comments
https://api.github.com/repos/huggingface/datasets/issues/7155/events
https://github.com/huggingface/datasets/issues/7155
2,533,641,870
I_kwDODunzps6XBE6O
7,155
Dataset viewer not working! Failure due to more than 32 splits.
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I have fixed it! But I would appreciate a new feature where I could iterate over and see what each file looks like." ]
2024-09-18T12:43:21
2024-09-18T13:20:03
2024-09-18T13:20:03
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Hello guys, I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoid this issue in the future. But, at the moment I need a hard fix for two of my datasets. And I don't want to mess or change anything and allow everyone in public to see the dataset and interact with it. Can you please help me? https://huggingface.co/datasets/laion/Wikipedia-X https://huggingface.co/datasets/laion/Wikipedia-X-Full
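Not a fix for the viewer itself, but a rough sketch of how one can already iterate over the splits of a hosted dataset and peek at one row from each, which is roughly what the follow-up comment above asks for. The repo id is taken from the issue, and streaming avoids downloading the full dataset:

```python
import datasets

repo = "laion/Wikipedia-X"
for split in datasets.get_dataset_split_names(repo):
    stream = datasets.load_dataset(repo, split=split, streaming=True)
    print(split, next(iter(stream)))  # first example of each split
```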
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7155/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7155/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7153/comments
https://api.github.com/repos/huggingface/datasets/issues/7153/events
https://github.com/huggingface/datasets/issues/7153
2,532,788,555
I_kwDODunzps6W90lL
7,153
Support data files with .ndjson extension
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-18T05:54:45
2024-09-19T11:25:15
2024-09-19T11:25:15
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request

Support data files with `.ndjson` extension.

### Motivation

We already support data files with `.jsonl` extension.

### Your contribution

I am opening a PR.
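Until the extension is recognized automatically, a small sketch of the existing workaround: point the generic `json` builder at the `.ndjson` files explicitly via `data_files` (the file name here is hypothetical), since the builder does not care about the extension once it is told which files to read:

```python
from datasets import load_dataset

# explicit builder + explicit files: .ndjson (newline-delimited JSON) already loads this way
ds = load_dataset("json", data_files="data.ndjson")
```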
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7153/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7153/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7150/comments
https://api.github.com/repos/huggingface/datasets/issues/7150/events
https://github.com/huggingface/datasets/issues/7150
2,527,571,175
I_kwDODunzps6Wp6zn
7,150
WebDataset loader splits keys differently than WebDataset library
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-16T06:02:47
2024-09-16T15:26:35
2024-09-16T15:26:35
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.

For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
- datasets library: `/some/path/22` and `0/1.1.png`
- webdataset library: `/some/path/22.0/1`, `1.png`

```python
import webdataset as wds

wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png")
# ('/some/path/22.0/1', '1.png')
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7150/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7150/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7149/comments
https://api.github.com/repos/huggingface/datasets/issues/7149/events
https://github.com/huggingface/datasets/issues/7149
2,524,497,448
I_kwDODunzps6WeMYo
7,149
Datasets Unknown Keyword Argument Error - task_templates
{ "avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4", "events_url": "https://api.github.com/users/varungupta31/events{/privacy}", "followers_url": "https://api.github.com/users/varungupta31/followers", "following_url": "https://api.github.com/users/varungupta31/following{/other_user}", "gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varungupta31", "id": 51288316, "login": "varungupta31", "node_id": "MDQ6VXNlcjUxMjg4MzE2", "organizations_url": "https://api.github.com/users/varungupta31/orgs", "received_events_url": "https://api.github.com/users/varungupta31/received_events", "repos_url": "https://api.github.com/users/varungupta31/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions", "type": "User", "url": "https://api.github.com/users/varungupta31", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n", "Hello @albertvillanova \r\n\r\nI got the same error while loading this dataset: https://huggingface.co/datasets/alaleye/aloresb...\r\n\r\nHow can I fix it ? \r\nThanks", "I am getting the same error on the below code, any fix to this ?\n\n```\nfrom datasets import load_dataset\n\nminds = load_dataset(\"PolyAI/minds14\", name=\"en-AU\", split=\"train\")\nminds\n```" ]
2024-09-13T10:30:57
2025-03-06T07:11:55
2024-09-13T14:10:48
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Issue ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` Gives error ``` TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates' ``` A simple downgrade to lower `datasets v 2.21.0` solves it. ### Steps to reproduce the bug 1. `pip install datsets` 2. ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` ### Expected behavior Should load the dataset correctly. ### Environment info - Datasets version `3.0.0` - `transformers` version: 4.45.0.dev0 - Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 - Python version: 3.12.4 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.35.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7149/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7149/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7148/comments
https://api.github.com/repos/huggingface/datasets/issues/7148/events
https://github.com/huggingface/datasets/issues/7148
2,523,833,413
I_kwDODunzps6WbqRF
7,148
Bug: Error when downloading mteb/mtop_domain
{ "avatar_url": "https://avatars.githubusercontent.com/u/77958037?v=4", "events_url": "https://api.github.com/users/ZiyiXia/events{/privacy}", "followers_url": "https://api.github.com/users/ZiyiXia/followers", "following_url": "https://api.github.com/users/ZiyiXia/following{/other_user}", "gists_url": "https://api.github.com/users/ZiyiXia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZiyiXia", "id": 77958037, "login": "ZiyiXia", "node_id": "MDQ6VXNlcjc3OTU4MDM3", "organizations_url": "https://api.github.com/users/ZiyiXia/orgs", "received_events_url": "https://api.github.com/users/ZiyiXia/received_events", "repos_url": "https://api.github.com/users/ZiyiXia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZiyiXia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZiyiXia/subscriptions", "type": "User", "url": "https://api.github.com/users/ZiyiXia", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```", "Seems the error is still there", "I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```", "Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!" ]
2024-09-13T04:09:39
2024-09-14T15:11:35
2024-09-14T15:11:35
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When downloading the dataset "mteb/mtop_domain", ran into the following error: ``` Traceback (most recent call last): File "/share/project/xzy/test/test_download.py", line 3, in <module> data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module local_path = self.download_loading_script() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script return cached_path(file_path, download_config=download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache fsspec_get( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get fs.get_file(path, temp_file.name, callback=callback) File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file http_get( File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get raise EnvironmentError( OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py). We are sorry for the inconvenience. Please retry with `force_download=True`. If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub. ``` Try to download through HF datasets directly but got the same error as above. ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en") ``` ### Steps to reproduce the bug ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en", force_download=True) ``` With and without `force_download=True` both ran into the same error. ### Expected behavior Should download the dataset successfully. ### Environment info - datasets version: 2.21.0 - huggingface-hub version: 0.24.6
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7148/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7147/comments
https://api.github.com/repos/huggingface/datasets/issues/7147/events
https://github.com/huggingface/datasets/issues/7147
2,523,129,465
I_kwDODunzps6WY-Z5
7,147
IterableDataset strange deadlock
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard to 8 workers, so some workers may not have an example that satisfies your condition `if shard < 25`). It creates an infinite loop, trying to get samples from empty datasets with probability 1.", "Opened https://github.com/huggingface/datasets/issues/7156\r\n\r\nCan the deadlock be fixed somehow? The point of IterableDataset is so we don't need to preload the entire dataset, which loses some meaning if we need to see how many examples are in the dataset in order to set shards correctly.", "~~And it is kinda strange that `Commenting out the final shuffle avoids the issue` since if the infinite loop is inside interleave_datasets you'd expect that to happen regardless of the additional shuffle call?~~\r\n\r\nEdit: oh I guess without the shuffle it's guaranteed every worker gets something, but the shuffle makes it so some workers could have nothing\r\n\r\n~~Edit2: maybe the shuffle can be changed so initially it gives one example to each worker, and only starts the random shuffle after that~~ wait it's not about the workers not getting any shards, it's about a worker getting shards but all of the shards it gets are empty shards\r\n\r\nEdit3: If it's trying to get samples from empty datasets, it should be getting back a StopIteration -- and \"all_exhausted\" should mean it eventually discovers all its datasets are empty, and then it should just raise a StopIteration itself. So it seems like there is a reasonable behavior result for this?", "well the second dataset passed to interleave_datasets is never exhausted, since it's never sampled. But we could also state that the stream of examples from the second dataset is empty if it has probability 0, so I opened https://github.com/huggingface/datasets/pull/7157 to fix the infinite loop issue by ignoring datasets with probability 0, let me know what you think !", "Thanks for taking a look!\r\n\r\nI think you're right that this is ultimately an issue that the user opts into by specifying a dataset with probability 0, because the user is basically saying \"I want to force this `interleave_datasets` call to run forever\" and yet one of the workers can end up having only empty shards to mix...\r\n\r\nThat said it's probably not a good idea to randomly change the behavior of `interleave_datasets` with probability 0, I can't be the only one that uses it to repeat many different datasets (since there is no `datasets.repeat()` function). https://xkcd.com/1172/\r\n\r\nI think just the knowledge that filtering out probability 0 datasets fixes the deadlock is good enough for me. I can filter it out on my side and add a restart loop around the dataloader instead.\r\n\r\nThanks again for investigating.", "Ok I see ! We can also add .repeat() as well" ]
2024-09-12T18:59:33
2024-09-23T09:32:27
2024-09-21T17:37:34
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug ``` import datasets import torch.utils.data num_shards = 1024 def gen(shards): for shard in shards: if shard < 25: yield {"shard": shard} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": list(range(num_shards))}, ) dataset = dataset.shuffle(buffer_size=1) dataset = datasets.interleave_datasets( [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted" ) dataset = dataset.shuffle(buffer_size=1) dataloader = torch.utils.data.DataLoader( dataset, batch_size=8, num_workers=8, ) for i, batch in enumerate(dataloader): print(batch) if i >= 10: break print() if __name__ == "__main__": for _ in range(100): main() ``` ### Steps to reproduce the bug Running the script above, at some point it will freeze. - Changing `num_shards` from 1024 to 25 avoids the issue - Commenting out the final shuffle avoids the issue - Commenting out the interleave_datasets call avoids the issue As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets. ### Expected behavior The script should not freeze. ### Environment info - `datasets` version: 3.0.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.5 - `huggingface_hub` version: 0.24.7 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1 I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7147/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7147/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7142/comments
https://api.github.com/repos/huggingface/datasets/issues/7142/events
https://github.com/huggingface/datasets/issues/7142
2,512,244,938
I_kwDODunzps6VvdDK
7,142
Specifying datatype when adding a column to a dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "#self-assign" ]
2024-09-08T07:34:24
2024-09-17T03:46:32
2024-09-17T03:46:32
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request There should be a way to specify the datatype of a column in `datasets.add_column()`. ### Motivation To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function. IMO this functionality should be natively supported. /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fadd-column-with-a-particular-type-in-datasets%2F95674 ### Your contribution I can submit a PR for this.
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7142/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7141/comments
https://api.github.com/repos/huggingface/datasets/issues/7141/events
https://github.com/huggingface/datasets/issues/7141
2,510,797,653
I_kwDODunzps6Vp7tV
7,141
Older datasets throwing safety errors with 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4", "events_url": "https://api.github.com/users/alvations/events{/privacy}", "followers_url": "https://api.github.com/users/alvations/followers", "following_url": "https://api.github.com/users/alvations/following{/other_user}", "gists_url": "https://api.github.com/users/alvations/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvations", "id": 1050316, "login": "alvations", "node_id": "MDQ6VXNlcjEwNTAzMTY=", "organizations_url": "https://api.github.com/users/alvations/orgs", "received_events_url": "https://api.github.com/users/alvations/received_events", "repos_url": "https://api.github.com/users/alvations/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvations/subscriptions", "type": "User", "url": "https://api.github.com/users/alvations", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval", "Me too, didn't have this issue few hours ago.", "same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n", "Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n", "Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.", "Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```", "i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. 
anyone found a fix?", "hi, me and my team have the same problem", "Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n", "Also discussed here:\r\n/static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fi-keep-getting-keyerror-safe-when-loading-my-datasets%2F105669%2F1", "i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!", "As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)", "> Here is a temporary fix for the problem: /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fi-keep-getting-keyerror-safe-when-loading-my-datasets%2F105669%2F12%3Fu%3Dmlscientist%5Cr%5Cn%5Cr%5Cnthis doesn't seem to work!", "In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file", "No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n", "We've reverted the deployment, please let us know if the issue still persists!", "thanks @muellerzr!" ]
2024-09-06T16:26:30
2024-09-06T21:14:14
2024-09-06T19:09:29
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug The dataset loading was throwing some safety errors for this popular dataset `wmt14`. [in]: ``` import datasets # train_data = datasets.load_dataset("wmt14", "de-en", split="train") train_data = datasets.load_dataset("wmt14", "de-en", split="train") val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") ``` [out]: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>() 2 3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train") ----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train") 5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") 12 frames [/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs) 636 if security is not None: 637 security = BlobSecurityInfo( --> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"] 639 ) 640 self.security = security KeyError: 'safe' ``` ### Steps to reproduce the bug See above. ### Expected behavior Dataset properly loaded. ### Environment info version: 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/muellerzr", "id": 7831895, "login": "muellerzr", "node_id": "MDQ6VXNlcjc4MzE4OTU=", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "repos_url": "https://api.github.com/users/muellerzr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "type": "User", "url": "https://api.github.com/users/muellerzr", "user_view_type": "public" }
{ "+1": 26, "-1": 0, "confused": 3, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 29, "url": "https://api.github.com/repos/huggingface/datasets/issues/7141/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7141/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7139/comments
https://api.github.com/repos/huggingface/datasets/issues/7139/events
https://github.com/huggingface/datasets/issues/7139
2,508,078,858
I_kwDODunzps6Vfj8K
7,139
Use load_dataset to load imagenet-1K But find a empty dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/105094708?v=4", "events_url": "https://api.github.com/users/fscdc/events{/privacy}", "followers_url": "https://api.github.com/users/fscdc/followers", "following_url": "https://api.github.com/users/fscdc/following{/other_user}", "gists_url": "https://api.github.com/users/fscdc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fscdc", "id": 105094708, "login": "fscdc", "node_id": "U_kgDOBkOeNA", "organizations_url": "https://api.github.com/users/fscdc/orgs", "received_events_url": "https://api.github.com/users/fscdc/received_events", "repos_url": "https://api.github.com/users/fscdc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fscdc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fscdc/subscriptions", "type": "User", "url": "https://api.github.com/users/fscdc", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set = load_dataset('imagenet-1k', split='train', use_auth_token=True)\r\n``` ", "Thanks a lot! It helps me" ]
2024-09-05T15:12:22
2024-10-09T04:02:41
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug ```python def get_dataset(data_path, train_folder="train", val_folder="val"): traindir = os.path.join(data_path, train_folder) valdir = os.path.join(data_path, val_folder) def transform_val_examples(examples): transform = Compose([ Resize(256), CenterCrop(224), ToTensor(), ]) examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]] return examples def transform_train_examples(examples): transform = Compose([ RandomResizedCrop(224), RandomHorizontalFlip(), ToTensor(), ]) examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]] return examples # @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset) # train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4) # test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4) train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True) test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True) print(train_set["label"]) train_set.set_transform(transform_train_examples) test_set.set_transform(transform_val_examples) return train_set, test_set ``` above the code, but output of the print is a list of None: <img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb"> ### Steps to reproduce the bug 1. just ran the code 2. see the print ### Expected behavior I do not know how to fix this, can anyone provide help or something? It is hurry for me ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.6 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7139/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7139/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7138/comments
https://api.github.com/repos/huggingface/datasets/issues/7138/events
https://github.com/huggingface/datasets/issues/7138
2,507,738,308
I_kwDODunzps6VeQzE
7,138
Cache only changed columns?
{ "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Modexus", "id": 37351874, "login": "Modexus", "node_id": "MDQ6VXNlcjM3MzUxODc0", "organizations_url": "https://api.github.com/users/Modexus/orgs", "received_events_url": "https://api.github.com/users/Modexus/received_events", "repos_url": "https://api.github.com/users/Modexus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "type": "User", "url": "https://api.github.com/users/Modexus", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.", "yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files" ]
2024-09-05T12:56:47
2024-09-20T13:27:20
null
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Cache only the actual changes to the dataset i.e. changed columns. ### Motivation I realized that caching actually saves the complete dataset again. This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again. ### Your contribution Is this even viable in the current architecture of the package? I quickly looked into it and it seems it would require significant changes. I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7138/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7138/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7137/comments
https://api.github.com/repos/huggingface/datasets/issues/7137/events
https://github.com/huggingface/datasets/issues/7137
2,506,851,048
I_kwDODunzps6Va4Lo
7,137
[BUG] dataset_info sequence unexpected behavior in README.md YAML
{ "avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4", "events_url": "https://api.github.com/users/ain-soph/events{/privacy}", "followers_url": "https://api.github.com/users/ain-soph/followers", "following_url": "https://api.github.com/users/ain-soph/following{/other_user}", "gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ain-soph", "id": 13214530, "login": "ain-soph", "node_id": "MDQ6VXNlcjEzMjE0NTMw", "organizations_url": "https://api.github.com/users/ain-soph/orgs", "received_events_url": "https://api.github.com/users/ain-soph/received_events", "repos_url": "https://api.github.com/users/ain-soph/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions", "type": "User", "url": "https://api.github.com/users/ain-soph", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: string\r\n\r\n\r\n# data\r\n{\"answers\": {\"text\": \"ADDRESS\", \"label\": \"abc\"}}\r\n```" ]
2024-09-05T06:06:06
2024-09-09T15:55:50
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When working on `dataset_info` yaml, I find my data column with format `list[dict[str, str]]` cannot be coded correctly. My data looks like ``` {"answers":[{"text": "ADDRESS", "label": "abc"}]} ``` My `dataset_info` in README.md is: ``` dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` **Error log**: ``` pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct ``` ## Potential Reason After some analysis, it turns out that my yaml config is requiring `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I change my data to ``` {"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}} ``` These following 2 different `dataset_info` are actually equivalent. ``` dataset_info: - config_name: default features: - name: answers dtype: - name: text sequence: string - name: label sequence: string dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` ### Steps to reproduce the bug ``` # README.md --- dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string configs: - config_name: default default: true data_files: - split: train path: - "test.jsonl" --- # test.jsonl # expected but not working {"answers":[{"text": "ADDRESS", "label": "abc"}]} # unexpected but working {"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}} ``` ### Expected behavior ``` dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` Should work on following data format: ``` {"answers":[{"text":"ADDRESS", "label": "abc"}]} ``` ### Environment info - `datasets` version: 2.21.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.4 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7137/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7137/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7135/comments
https://api.github.com/repos/huggingface/datasets/issues/7135/events
https://github.com/huggingface/datasets/issues/7135
2,503,318,328
I_kwDODunzps6VNZs4
7,135
Bug: Type Mismatch in Dataset Mapping
{ "avatar_url": "https://avatars.githubusercontent.com/u/45327989?v=4", "events_url": "https://api.github.com/users/marko1616/events{/privacy}", "followers_url": "https://api.github.com/users/marko1616/followers", "following_url": "https://api.github.com/users/marko1616/following{/other_user}", "gists_url": "https://api.github.com/users/marko1616/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marko1616", "id": 45327989, "login": "marko1616", "node_id": "MDQ6VXNlcjQ1MzI3OTg5", "organizations_url": "https://api.github.com/users/marko1616/orgs", "received_events_url": "https://api.github.com/users/marko1616/received_events", "repos_url": "https://api.github.com/users/marko1616/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marko1616/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marko1616/subscriptions", "type": "User", "url": "https://api.github.com/users/marko1616", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dict(data)\r\n\r\n# Mapping function to convert label to string\r\ndef add_one(example):\r\n example['label'] += 1\r\n return example\r\n\r\n# Applying the mapping function\r\ndataset = dataset.map(add_one)\r\n\r\n# Iterating over the dataset to show results\r\nfor item in dataset:\r\n print(item)\r\n print(type(item['label']))\r\n```", "Hello, thanks for submitting an issue.\r\n\r\nFWIU, the issue is that `datasets` tries to limit casting [ref](https://github.com/huggingface/datasets/blob/ca58154bba185c1916ca5eea4e33b27258642044/src/datasets/arrow_writer.py#L526) and as such will try to convert your strings back to int to preserve the `Features`. \r\n\r\nA quick solution would be to use `dataset.cast` or to supply `features` when calling `dataset.map`.\r\n\r\n\r\n```python\r\n# using Dataset.cast\r\ndataset = dataset.cast_column('label', Value('string'))\r\n\r\n# Alternative, supply features\r\ndataset = dataset.map(add_one, features=Features({**dataset.features, 'label': Value('string')}))\r\n```", "LGTM! Thanks for the review.\r\n\r\nJust to clarify, is this intended behavior, or is it something that might be addressed in a future update?\r\nI'll leave this issue open until it's fixed if this is not the intended behavior." ]
2024-09-03T16:37:01
2024-09-05T14:09:05
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
# Issue: Type Mismatch in Dataset Mapping ## Description There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string. ## Reproduction Code Below is a Python script that demonstrates the problem: ```python from datasets import Dataset # Original data data = { 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'], 'label': [0, 1, 0, 1, 1, 0] } # Creating a Dataset object dataset = Dataset.from_dict(data) # Mapping function to convert label to string def add_one(example): example['label'] = str(example['label']) return example # Applying the mapping function dataset = dataset.map(add_one) # Iterating over the dataset to show results for item in dataset: print(item) print(type(item['label'])) ``` ## Expected Output After applying the mapping function, the expected output should have the `label` field as strings: ```plaintext {'text': 'Hello', 'label': '0'} <class 'str'> {'text': 'world', 'label': '1'} <class 'str'> {'text': 'this', 'label': '0'} <class 'str'> {'text': 'is', 'label': '1'} <class 'str'> {'text': 'a', 'label': '1'} <class 'str'> {'text': 'test', 'label': '0'} <class 'str'> ``` ## Actual Output The actual output still shows the `label` field values as integers: ```plaintext {'text': 'Hello', 'label': 0} <class 'int'> {'text': 'world', 'label': 1} <class 'int'> {'text': 'this', 'label': 0} <class 'int'> {'text': 'is', 'label': 1} <class 'int'> {'text': 'a', 'label': 1} <class 'int'> {'text': 'test', 'label': 0} <class 'int'> ``` ## Why necessary In the case of Image process we often need to convert PIL to tensor with same column name. Thank for every dev who review this issue. 🤗
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7135/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7134/comments
https://api.github.com/repos/huggingface/datasets/issues/7134/events
https://github.com/huggingface/datasets/issues/7134
2,499,484,041
I_kwDODunzps6U-xmJ
7,134
Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown
{ "avatar_url": "https://avatars.githubusercontent.com/u/46371349?v=4", "events_url": "https://api.github.com/users/navidmafi/events{/privacy}", "followers_url": "https://api.github.com/users/navidmafi/followers", "following_url": "https://api.github.com/users/navidmafi/following{/other_user}", "gists_url": "https://api.github.com/users/navidmafi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/navidmafi", "id": 46371349, "login": "navidmafi", "node_id": "MDQ6VXNlcjQ2MzcxMzQ5", "organizations_url": "https://api.github.com/users/navidmafi/orgs", "received_events_url": "https://api.github.com/users/navidmafi/received_events", "repos_url": "https://api.github.com/users/navidmafi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/navidmafi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/navidmafi/subscriptions", "type": "User", "url": "https://api.github.com/users/navidmafi", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-09-01T13:55:41
2024-09-02T10:34:53
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method. I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS. I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM. ### Steps to reproduce the bug Below is a minimal example using two methods to get the desired output. Both of which don't work ```py import tensorflow as tf import datasets import numpy as np ds = datasets.load_dataset("project-sloth/captcha-images") to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)} ds_gray = ds.map(to_gray_pillow) # Alternatively ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow") to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)} ds_gray = ds.map(to_gray_tf) ``` ### Expected behavior I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape. ### Environment info datasets 2.21.0 python tested with both 3.11 and 3.12 host os : linux
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7134/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7129/comments
https://api.github.com/repos/huggingface/datasets/issues/7129/events
https://github.com/huggingface/datasets/issues/7129
2,491,942,650
I_kwDODunzps6UiAb6
7,129
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2024-08-28T12:27:48
2024-12-06T11:32:02
2024-12-06T11:32:02
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code: ```` from datasets import Features features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])}) features ```` which expects to output (as stated in the documentation): ```` {'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)} ```` but it generates the following ```` {'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)} ```` If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored: https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975 I would like to work on this issue if this is something needed 😄
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7129/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7128/comments
https://api.github.com/repos/huggingface/datasets/issues/7128/events
https://github.com/huggingface/datasets/issues/7128
2,490,274,775
I_kwDODunzps6UbpPX
7,128
Filter Large Dataset Entry by Entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4", "events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}", "followers_url": "https://api.github.com/users/QiyaoWei/followers", "following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}", "gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/QiyaoWei", "id": 36057290, "login": "QiyaoWei", "node_id": "MDQ6VXNlcjM2MDU3Mjkw", "organizations_url": "https://api.github.com/users/QiyaoWei/orgs", "received_events_url": "https://api.github.com/users/QiyaoWei/received_events", "repos_url": "https://api.github.com/users/QiyaoWei/repos", "site_admin": false, "starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions", "type": "User", "url": "https://api.github.com/users/QiyaoWei", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n", "Jumping on this as it seems relevant - when I use the `filter` method, it often results in an OOM (or at least unacceptably high memory usage).\r\n\r\nFor example in the [this notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_), we load an object detection dataset from HF and imagine I want to filter such that I only have images which contain a single annotation class. Each row has a JSON field that contains MS-COCO annotations for the image, so we could load that field and filter on it.\r\n\r\nThe test dataset is only about 440 images, probably less than 1GB, but running the following filter crashes the VM (over 12 GB RAM):\r\n\r\n```python\r\nimport json\r\ndef filter_single_class(example, target_class_id):\r\n \"\"\"Filters examples based on whether they contain annotations from a single class.\r\n\r\n Args:\r\n example: A dictionary representing a single example from the dataset.\r\n target_class_id: The target class ID to filter for.\r\n\r\n Returns:\r\n True if the example contains only annotations from the target class, False otherwise.\r\n \"\"\"\r\n if not example['coco_annotations']:\r\n return False\r\n\r\n annotation_category_ids = set([annotation['category_id'] for annotation in json.loads(example['coco_annotations'])])\r\n\r\n return len(annotation_category_ids) == 1 and target_class_id in annotation_category_ids\r\n\r\ntarget_class_id = 1 \r\nfiltered_dataset = dataset['test'].filter(lambda example: filter_single_class(example, target_class_id))\r\n```\r\n\r\n<img width=\"255\" alt=\"image\" src=\"https://github.com/user-attachments/assets/be475f15-5b6b-4df2-b5b5-a1f60ae2b05c\">\r\n\r\nIterating over the dataset works fine:\r\n\r\n```python\r\nfiltered_dataset = []\r\nfor example in dataset['test']:\r\n if filter_single_class(example, target_class_id):\r\n filtered_dataset.append(example)\r\n```\r\n\r\n<img width=\"129\" alt=\"image\" src=\"https://github.com/user-attachments/assets/34fa5612-0394-4c46-9f34-e94650f05d65\">\r\n\r\nIt would be great if there was guidance in the documentation on how to use filters efficiently, or if this is some performance bug that could be addressed. At the very least I would expect a filter operation to use at most 2x the footprint of the database plus some overhead for the lambda (i.e. worst case would be a duplicate copy with all entries retained). Even if the operation is parallelised, each thread/worker should only take a subset of the dataset - so I'm not sure where this ballooning in memory usage comes from.\r\n\r\nFrom some other comments there seems to be a workaround with `writer_batch_size` or caching to file, but in the [docs](https://huggingface.co/docs/datasets/v3.0.0/en/package_reference/main_classes#datasets.Dataset.filter) at least, `keep_in_memory` defaults to `False`.", "You can try passing input_columns=[\"coco_annotations\"] to only load this column instead of all the columns. In that case your function should take coco_annotations as input instead of example", "If your filter_function is large and computationally intensive, consider using multi-processing or multi-threading with concurrent.futures to filter the dataset. This approach allows you to process multiple tables concurrently, reducing overall processing time, especially for CPU-bound tasks. 
Use ThreadPoolExecutor for I/O-bound operations and ProcessPoolExecutor for CPU-bound operations.\r\n" ]
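A minimal sketch of the `input_columns` suggestion from the thread above, assuming the `coco_annotations` column and class id used in the discussion; the dataset id is a placeholder. Passing only the annotation column means the image column is never decoded while filtering:

```python
import json
from datasets import load_dataset

dataset = load_dataset("some-org/detection-dataset", split="test")  # placeholder id
target_class_id = 1

def keeps_only_target_class(coco_annotations):
    # With input_columns, filter() passes just this column's value per example,
    # so the (much larger) image column is never loaded here.
    if not coco_annotations:
        return False
    category_ids = {ann["category_id"] for ann in json.loads(coco_annotations)}
    return category_ids == {target_class_id}

filtered = dataset.filter(keeps_only_target_class, input_columns=["coco_annotations"])
```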
2024-08-27T20:31:09
2024-10-07T23:37:44
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process. Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like: ``` dataset = load_dataset( "really-large-dataset", streaming=True ) # And let's say we process the dataset bit by bit because we want intermediate results dataset = islice(dataset, 10000) # Define a function to filter the data def filter_function(table): if some_condition: return True else: return False # Use the filter function on your dataset filtered_dataset = (ex for ex in dataset if filter_function(ex)) ``` And then I work on the processed dataset, which would be magnitudes faster than working on the original. I would love to hear if the problem setup + solution makes sense to people, and if anyone has suggestions! ### Motivation See description above ### Your contribution Happy to make PR if this is a new feature
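For the streaming setup described above, `IterableDataset` already exposes a lazy `filter`, so the generator expression can be swapped out directly. A rough sketch, with the dataset name and condition kept as placeholders from the snippet above:

```python
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", streaming=True, split="train")

def filter_function(table):
    # stand-in for "some_condition" in the example above
    return table is not None

# filter() on a streaming dataset is lazy: examples are checked as they are
# iterated, nothing is loaded into memory up front.
filtered_dataset = dataset.filter(filter_function)

# take() plays the role of islice() for grabbing an intermediate chunk
for example in filtered_dataset.take(10_000):
    ...  # process the "good" tables
```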
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7128/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7127/comments
https://api.github.com/repos/huggingface/datasets/issues/7127/events
https://github.com/huggingface/datasets/issues/7127
2,486,524,966
I_kwDODunzps6UNVwm
7,127
Caching shuffles by np.random.Generator results in unintuitive behavior
{ "avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4", "events_url": "https://api.github.com/users/el-hult/events{/privacy}", "followers_url": "https://api.github.com/users/el-hult/followers", "following_url": "https://api.github.com/users/el-hult/following{/other_user}", "gists_url": "https://api.github.com/users/el-hult/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/el-hult", "id": 11832922, "login": "el-hult", "node_id": "MDQ6VXNlcjExODMyOTIy", "organizations_url": "https://api.github.com/users/el-hult/orgs", "received_events_url": "https://api.github.com/users/el-hult/received_events", "repos_url": "https://api.github.com/users/el-hult/repos", "site_admin": false, "starred_url": "https://api.github.com/users/el-hult/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/el-hult/subscriptions", "type": "User", "url": "https://api.github.com/users/el-hult", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible." ]
2024-08-26T10:29:48
2025-03-10T17:12:57
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Create a dataset. Save it to disk. Load from disk. Shuffle, usning a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different since the supplied np.random.Generator has progressed between the shuffles. Load dataset from disk again. Shuffle and Iterate. See same result as before. Shuffle and iterate, and this time it does not have the same shuffling as ion previous run. The motivation is I have a deep learning loop with ``` for epoch in range(10): for batch in dataset.shuffle(generator=generator).iter(batch_size=32): .... # do stuff ``` where I want a new shuffling at every epoch. Instead I get the same shuffling. ### Steps to reproduce the bug Run the code below two times. ```python import datasets import numpy as np generator = np.random.default_rng(0) ds = datasets.Dataset.from_dict(mapping={"X":range(1000)}) ds.save_to_disk("tmp") print("First loop: ", end="") for _ in range(10): print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ") print("") print("Second loop: ", end="") ds = datasets.Dataset.load_from_disk("tmp") for _ in range(10): print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ") print("") ``` The output is: ``` $ python main.py Saving the dataset (1/1 shards): 100%|███████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 495019.95 examples/s] First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334, Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840, $ python main.py Saving the dataset (1/1 shards): 100%|████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 22243.40 examples/s] First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334, Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741, ``` The second loop, on the second run, only spits out "741, 741, 741...." which is *not* the desired output ### Expected behavior I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling. ### Environment info Datasets version 2.21.0 Ubuntu linux.
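A possible workaround sketch for the training loop above, sidestepping the Generator-state issue by deriving a fresh integer seed per epoch (the seed scheme is an assumption, not a library recommendation):

```python
import datasets

ds = datasets.Dataset.load_from_disk("tmp")

for epoch in range(10):
    # An integer seed is part of the shuffle's cache fingerprint, so each epoch
    # produces a genuinely different ordering even when results are cached.
    shuffled = ds.shuffle(seed=epoch)
    for batch in shuffled.iter(batch_size=32):
        ...  # do stuff
```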
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7127/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7127/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7123/comments
https://api.github.com/repos/huggingface/datasets/issues/7123/events
https://github.com/huggingface/datasets/issues/7123
2,484,003,937
I_kwDODunzps6UDuRh
7,123
Make dataset viewer more flexible in displaying metadata alongside images
{ "avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4", "events_url": "https://api.github.com/users/egrace479/events{/privacy}", "followers_url": "https://api.github.com/users/egrace479/followers", "following_url": "https://api.github.com/users/egrace479/following{/other_user}", "gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egrace479", "id": 38985481, "login": "egrace479", "node_id": "MDQ6VXNlcjM4OTg1NDgx", "organizations_url": "https://api.github.com/users/egrace479/orgs", "received_events_url": "https://api.github.com/users/egrace479/received_events", "repos_url": "https://api.github.com/users/egrace479/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egrace479/subscriptions", "type": "User", "url": "https://api.github.com/users/egrace479", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\n```\r\n\r\nEDIT: ah maybe it doesn't work because you'd have to provide relative paths from the metadata files to the images", "Yes, that's part of the issue. Also, `metadata.csv` is a very ambiguous name and we generally try to avoid using the same name for different files within a dataset, as this can quickly lead to confusion.", "I think supporting `**/*-metadata.csv` or `**/*_metadata.csv` makes sense to me. If it sounds good to you feel free to open a PR to update the patterns here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d4422cc24a56dc7132ddc3fd6b285c5edbd60b8c/src/datasets/data_files.py#L104-L115" ]
2024-08-23T22:56:01
2024-10-17T09:13:47
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed. ### Motivation When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)). It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue). ### Your contribution I can make a suggestion for one approach to address the issue: For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?). Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work? ``` configs: - config_name: <image subset> data_files: - <image-metadata>.csv - <path/to/images>/*.jpg ``` I'd also be happy to look at whatever solution is decided upon and contribute to the ideation. Thanks for your time and consideration! The dataset viewer really is fabulous when it works :)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7123/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7123/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7122/comments
https://api.github.com/repos/huggingface/datasets/issues/7122/events
https://github.com/huggingface/datasets/issues/7122
2,482,491,258
I_kwDODunzps6T9896
7,122
[interleave_dataset] sample batches from a single source at a time
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-08-23T07:21:15
2024-08-23T07:21:15
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (each batch only contains data from a single source)? ### Motivation Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality? ### Your contribution I can contribute a PR. But I wonder what the best way is to test its correctness and robustness.
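A rough sketch of the idea outside the library: one source is drawn per batch, so every batch stays source-homogeneous. The dataset files, probabilities and stopping rule are placeholders, not the proposed `RandomlyCyclingMultiSourcesBatchesIterable` API:

```python
import random
from datasets import load_dataset

sources = [
    load_dataset("json", data_files="source_a.jsonl", split="train", streaming=True),
    load_dataset("json", data_files="source_b.jsonl", split="train", streaming=True),
]
probabilities = [0.7, 0.3]

def homogeneous_batches(sources, probabilities, batch_size, seed=0):
    rng = random.Random(seed)
    iterators = [iter(source) for source in sources]
    while True:
        # pick one source for the whole batch
        idx = rng.choices(range(len(iterators)), weights=probabilities, k=1)[0]
        try:
            yield [next(iterators[idx]) for _ in range(batch_size)]
        except StopIteration:
            return  # stop when any source runs out ("first_exhausted"-style)

for batch in homogeneous_batches(sources, probabilities, batch_size=8):
    ...  # e.g. a contrastive-learning step on a single-source batch
```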
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7122/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7117/comments
https://api.github.com/repos/huggingface/datasets/issues/7117/events
https://github.com/huggingface/datasets/issues/7117
2,476,555,659
I_kwDODunzps6TnT2L
7,117
Audio dataset loads everything in RAM and is very slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4", "events_url": "https://api.github.com/users/Jourdelune/events{/privacy}", "followers_url": "https://api.github.com/users/Jourdelune/followers", "following_url": "https://api.github.com/users/Jourdelune/following{/other_user}", "gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jourdelune", "id": 64205064, "login": "Jourdelune", "node_id": "MDQ6VXNlcjY0MjA1MDY0", "organizations_url": "https://api.github.com/users/Jourdelune/orgs", "received_events_url": "https://api.github.com/users/Jourdelune/received_events", "repos_url": "https://api.github.com/users/Jourdelune/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions", "type": "User", "url": "https://api.github.com/users/Jourdelune", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)", "No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```", "Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications" ]
2024-08-20T21:18:12
2024-08-26T13:11:55
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contain, and for that I use whisper. My issue is that the dataset load everything in the RAM when I map the dataset, obviously, when RAM usage is too high, the program crashes. To fix this issue, I'm using writer_batch_size that I set to 10, but in this case, the mapping of the dataset is extremely slow. To illustrate this, on 50 examples, with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset, but without `writer_batch_size` set to 10, it takes about ten seconds to process the dataset, but then the process remains blocked (I assume that it is writing the dataset and therefore suffers from the same problem as `writer_batch_size`) ### Steps to reproduce the bug Hug ram usage but fast (but actually slow when saving the dataset): ```py from datasets import load_dataset import time ds = load_dataset("WaveGenAI/audios2", split="train[:50]") # map the dataset def transcribe_audio(row): audio = row["audio"] # get the audio but do nothing with it row["transcribed"] = True return row time1 = time.time() ds = ds.map( transcribe_audio ) for row in ds: pass # do nothing, just iterate to trigger the map function print(f"Time taken: {time.time() - time1:.2f} seconds") ``` Low ram usage but very very slow: ```py from datasets import load_dataset import time ds = load_dataset("WaveGenAI/audios2", split="train[:50]") # map the dataset def transcribe_audio(row): audio = row["audio"] # get the audio but do nothing with it row["transcribed"] = True return row time1 = time.time() ds = ds.map( transcribe_audio, writer_batch_size=10 ) # set low writer_batch_size to avoid memory issues for row in ds: pass # do nothing, just iterate to trigger the map function print(f"Time taken: {time.time() - time1:.2f} seconds") ``` ### Expected behavior I think the processing should be much faster, on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio). ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.10.4 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2024.6.1 # Extra The dataset has been generated by using audio folder, so I don't think anything specific in my code is causing this problem. ```py import argparse from datasets import load_dataset parser = argparse.ArgumentParser() parser.add_argument("--folder", help="folder path", default="/media/works/test/") args = parser.parse_args() dataset = load_dataset("audiofolder", data_dir=args.folder) # push the dataset to hub dataset.push_to_hub("WaveGenAI/audios") ``` Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` which causes problems, `row["transcribed"] = True `alone does nothing and `audio = row["audio"]` alone sometimes causes problems, sometimes not.
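One more workaround sketch (an assumption on my part, not a fix confirmed in the thread above): keeping the audio column undecoded during `map` so the waveforms are never decoded or re-encoded while the new column is written:

```python
from datasets import load_dataset, Audio

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")

# Keep the audio column as raw paths/bytes during map().
ds = ds.cast_column("audio", Audio(decode=False))

def transcribe_audio(row):
    audio = row["audio"]  # a dict with "path"/"bytes"; decode it yourself if needed
    return {"transcribed": True}

ds = ds.map(transcribe_audio)
ds = ds.cast_column("audio", Audio())  # re-enable decoding afterwards
```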
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7117/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7116/comments
https://api.github.com/repos/huggingface/datasets/issues/7116/events
https://github.com/huggingface/datasets/issues/7116
2,475,522,721
I_kwDODunzps6TjXqh
7,116
datasets cannot handle nested JSON if `features` is given
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```", "> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n", "It works." ]
2024-08-20T12:27:49
2024-09-03T10:18:23
2024-09-03T10:18:07
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I have a json named temp.json. ```json {"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]} ``` I want to load it. ```python ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({ 'ref1': datasets.Value('string'), 'ref2': datasets.Value('string'), 'cuts': datasets.Sequence({ "cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16") }) })) ``` The above code does not work. However, I can load it without giving features. ```python ds = datasets.load_dataset('json', data_files="./temp.json") ``` Is it possible to load integers as uint16 to save some memory? ### Steps to reproduce the bug As in the bug description. ### Expected behavior The data are loaded and integers are uint16. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
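A small sketch building on the fix quoted in the comments above (a plain list instead of `Sequence` for the list of structs), which also answers the uint16 question, since the declared dtypes are what end up in the Arrow schema:

```python
import datasets

features = datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    # a plain Python list (not Sequence) keeps the list-of-structs layout
    "cuts": [{"cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16")}],
})

ds = datasets.load_dataset("json", data_files="./temp.json", features=features)
print(ds["train"].features)      # cut1/cut2 are stored as uint16
print(ds["train"][0]["cuts"])    # [{'cut1': 3, 'cut2': 5}]
```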
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7116/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7115/comments
https://api.github.com/repos/huggingface/datasets/issues/7115/events
https://github.com/huggingface/datasets/issues/7115
2,475,363,142
I_kwDODunzps6TiwtG
7,115
module 'pyarrow.lib' has no attribute 'ListViewType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4", "events_url": "https://api.github.com/users/neurafusionai/events{/privacy}", "followers_url": "https://api.github.com/users/neurafusionai/followers", "following_url": "https://api.github.com/users/neurafusionai/following{/other_user}", "gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neurafusionai", "id": 175128880, "login": "neurafusionai", "node_id": "U_kgDOCnBBMA", "organizations_url": "https://api.github.com/users/neurafusionai/orgs", "received_events_url": "https://api.github.com/users/neurafusionai/received_events", "repos_url": "https://api.github.com/users/neurafusionai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions", "type": "User", "url": "https://api.github.com/users/neurafusionai", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked" ]
2024-08-20T11:05:44
2024-09-10T06:51:08
2024-09-10T06:51:08
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Code: `!pipuninstall -y pyarrow !pip install --no-cache-dir pyarrow !pip uninstall -y pyarrow !pip install pyarrow --no-cache-dir !pip install --upgrade datasets transformers pyarrow !pip install pyarrow.parquet ! pip install pyarrow-core libparquet !pip install pyarrow --no-cache-dir !pip install pyarrow !pip install transformers !pip install --upgrade datasets !pip install datasets ! pip install pyarrow ! pip install pyarrow.lib ! pip install pyarrow.parquet !pip install transformers import pyarrow as pa print(pa.__version__) from datasets import load_dataset import pyarrow.parquet as pq import pyarrow.lib as lib import pandas as pd from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset from transformers import AutoTokenizer ! pip install pyarrow-core libparquet # Load the dataset for content moderation dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support # Initialize the tokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") # Tokenize the dataset def tokenize_function(examples): return tokenizer(examples['text'], padding="max_length", truncation=True) # Apply tokenization to the entire dataset tokenized_datasets = dataset.map(tokenize_function, batched=True) # Check the first few tokenized samples print(tokenized_datasets['train'][0]) from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments # Load the model model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77) # Define training arguments training_args = TrainingArguments( output_dir="./results", per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, eval_strategy="epoch", # save_strategy="epoch", logging_dir="./logs", learning_rate=2e-5, ) # Initialize the Trainer trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], ) # Train the model trainer.train() # Evaluate the model trainer.evaluate() ` AttributeError Traceback (most recent call last) [<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>() 20 21 ---> 22 from datasets import load_dataset 23 import pyarrow.parquet as pq 24 import pyarrow.lib as lib 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 15 __version__ = "2.21.0" 16 ---> 17 from .arrow_dataset import Dataset 18 from .arrow_reader import ReadInstruction 19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 74 75 from . 
import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 27 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 31 [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module> 18 # flake8: noqa 19 ---> 20 from .core import * [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module> 31 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( /usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ### Steps to reproduce the bug https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing ### Expected behavior Looks like there is an issue with datasets and pyarrow ### Environment info google colab python huggingface Found existing installation: pyarrow 17.0.0 Uninstalling pyarrow-17.0.0: Successfully uninstalled pyarrow-17.0.0 Collecting pyarrow Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4) Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00 Installing collected packages: pyarrow ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. Successfully installed pyarrow-17.0.0 WARNING: The following packages were previously imported in this runtime: [pyarrow] You must restart the runtime in order to use newly installed versions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7115/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7113/comments
https://api.github.com/repos/huggingface/datasets/issues/7113/events
https://github.com/huggingface/datasets/issues/7113
2,475,029,640
I_kwDODunzps6ThfSI
7,113
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```" ]
2024-08-20T08:26:40
2024-08-26T04:24:11
2024-08-26T04:24:10
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains. Please see the code below to reproduce the problem. The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False. I have to use drop_last_batch=True since it's for distributed training. ### Steps to reproduce the bug ```python # datasets==2.21.0 import datasets def data_prepare(examples): print(examples["sentence1"][0]) return examples batch_size = 101 # the size of the dataset is 100 # the dataset iterates correctly if we set either streaming=False or drop_last_batch=False dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True) dataset = dataset.map(lambda x: data_prepare(x), drop_last_batch=True, batched=True, batch_size=batch_size) for ex in dataset: print(ex) pass ``` ### Expected behavior The dataset iterates regardless of the batch size. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
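A short sketch of the interaction described above (this illustrates the expected behaviour rather than a library fix): with `drop_last_batch=True`, a final batch smaller than `batch_size` is discarded, so a stream smaller than one batch yields nothing. Choosing a batch size no larger than the smallest split or shard keeps `drop_last_batch=True` usable:

```python
import datasets

dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)

def data_prepare(examples):
    return examples

# The split has 100 rows, so batch_size=101 with drop_last_batch=True drops the
# only (incomplete) batch and nothing is yielded. A smaller batch size avoids that.
dataset = dataset.map(data_prepare, batched=True, batch_size=50, drop_last_batch=True)

for example in dataset:
    ...
```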
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7113/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7112/comments
https://api.github.com/repos/huggingface/datasets/issues/7112/events
https://github.com/huggingface/datasets/issues/7112
2,475,004,644
I_kwDODunzps6ThZLk
7,112
cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/174590283?v=4", "events_url": "https://api.github.com/users/SoumyaMB10/events{/privacy}", "followers_url": "https://api.github.com/users/SoumyaMB10/followers", "following_url": "https://api.github.com/users/SoumyaMB10/following{/other_user}", "gists_url": "https://api.github.com/users/SoumyaMB10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SoumyaMB10", "id": 174590283, "login": "SoumyaMB10", "node_id": "U_kgDOCmgJSw", "organizations_url": "https://api.github.com/users/SoumyaMB10/orgs", "received_events_url": "https://api.github.com/users/SoumyaMB10/received_events", "repos_url": "https://api.github.com/users/SoumyaMB10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoumyaMB10/subscriptions", "type": "User", "url": "https://api.github.com/users/SoumyaMB10", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@sayakpaul please advice ", "Hits the same dependency conflict" ]
2024-08-20T08:13:55
2024-09-20T15:30:03
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug !pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. to solve above error !pip install pyarrow==14.0.1 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible. ### Steps to reproduce the bug !pip install datasets>=2.19.1 ### Expected behavior run without dependency error ### Environment info Diffusers version: 0.31.0.dev0 Platform: Linux-6.1.85+-x86_64-with-glibc2.35 Running on Google Colab?: Yes Python version: 3.10.12 PyTorch version (GPU?): 2.3.1+cu121 (True) Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu) Jax version: 0.4.26 JaxLib version: 0.4.26 Huggingface_hub version: 0.23.5 Transformers version: 4.42.4 Accelerate version: 0.32.1 PEFT version: 0.7.0 Bitsandbytes version: not installed Safetensors version: 0.4.4 xFormers version: not installed Accelerator: Tesla T4, 15360 MiB Using GPU in script?: Using distributed or parallel set-up in script?:
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7112/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7111/comments
https://api.github.com/repos/huggingface/datasets/issues/7111/events
https://github.com/huggingface/datasets/issues/7111
2,474,915,845
I_kwDODunzps6ThDgF
7,111
CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2", "The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion in their repo:\r\n- https://github.com/numba/numba/issues/9708\r\n\r\nLatest numpy-2.1.0 will be supported by the next numba-0.61.0 release in September.\r\n\r\nNote that our CI requires numba with the \"audio\" extra:\r\n- librosa > numba" ]
2024-08-20T07:27:28
2024-08-21T05:05:36
2024-08-20T09:02:36
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269 ``` Run uv pip install --system "datasets[tests_numpy2] @ ." Resolved 150 packages in 4.42s error: Failed to prepare distributions Caused by: Failed to fetch wheel: llvmlite==0.34.0 Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1 --- stdout: running bdist_wheel /home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py LLVM version... --- stderr: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix out = subprocess.check_output([llvm_config, '--version']) File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run with Popen(*popenargs, **kwargs) as process: File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module> main() File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main main_posix('linux', '.so') File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix raise RuntimeError("%s failed executing, please point LLVM_CONFIG " RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7111/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7111/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7109/comments
https://api.github.com/repos/huggingface/datasets/issues/7109/events
https://github.com/huggingface/datasets/issues/7109
2,473,367,848
I_kwDODunzps6TbJko
7,109
ConnectionError for gated datasets and unauthenticated users
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-08-19T13:27:45
2024-08-20T09:14:36
2024-08-20T09:14:35
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before). See: - https://github.com/huggingface/dataset-viewer/issues/3025 - https://github.com/huggingface/huggingface_hub/issues/2457
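A rough, hypothetical sketch of the handling this issue asks for (the helper name and structure are mine, not the actual `load.py` code): an unauthenticated request to a gated dataset should surface a `DatasetNotFoundError` rather than a `ConnectionError`:

```python
from huggingface_hub.utils import GatedRepoError
from datasets.exceptions import DatasetNotFoundError

def raise_if_gated_and_unauthenticated(err: Exception, path: str, token) -> None:
    # hypothetical helper: translate the Hub's gated-repo error for anonymous
    # users into the error type callers expect from load_dataset()
    if isinstance(err, GatedRepoError) and token is None:
        raise DatasetNotFoundError(
            f"Dataset '{path}' is gated: authenticate to access it."
        ) from err
    raise err
```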
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7109/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7108/comments
https://api.github.com/repos/huggingface/datasets/issues/7108/events
https://github.com/huggingface/datasets/issues/7108
2,470,665,327
I_kwDODunzps6TQ1xv
7,108
Website broken: "Create a new dataset repository" doesn't create a new repo in Firefox
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.", "I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.", "maybe an issue with the cookie. cc @Wauplin @coyotte508 " ]
2024-08-16T17:23:00
2024-08-19T13:21:12
2024-08-19T06:52:48
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug This issue is also reported here: /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fcreate-a-new-dataset-repository-broken-page%2F102644 This page is broken: https://huggingface.co/new-dataset I fill in the form with my text and click `Create Dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6) Then the form gets wiped, and no repo is created. No error message is visible in the developer console. ![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3) # Idea for improvement For better UX, if the repo cannot be created, show an error message saying that something went wrong. # Workaround that works for me ```python from huggingface_hub import HfApi, HfFolder repo_id = 'simon-arc-solve-fractal-v3' api = HfApi() username = api.whoami()['name'] repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset") ``` ### Steps to reproduce the bug Go to https://huggingface.co/new-dataset Fill in the form. Click `Create dataset`. Now the form is cleared, and the page doesn't navigate anywhere. ### Expected behavior The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo. ### Environment info Firefox 128.0.3 (64-bit) macOS Sonoma 14.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7108/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7107/comments
https://api.github.com/repos/huggingface/datasets/issues/7107/events
https://github.com/huggingface/datasets/issues/7107
2,470,444,732
I_kwDODunzps6TP_68
7,107
load_dataset broken in 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4", "events_url": "https://api.github.com/users/anjor/events{/privacy}", "followers_url": "https://api.github.com/users/anjor/followers", "following_url": "https://api.github.com/users/anjor/following{/other_user}", "gists_url": "https://api.github.com/users/anjor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anjor", "id": 1911631, "login": "anjor", "node_id": "MDQ6VXNlcjE5MTE2MzE=", "organizations_url": "https://api.github.com/users/anjor/orgs", "received_events_url": "https://api.github.com/users/anjor/received_events", "repos_url": "https://api.github.com/users/anjor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anjor/subscriptions", "type": "User", "url": "https://api.github.com/users/anjor", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ", "There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset." ]
2024-08-16T14:59:51
2024-08-18T09:28:43
2024-08-18T09:27:12
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work till 2.20.0 but doesn't work in 2.21.0 In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9) in 2.21.0: ![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f) ### Steps to reproduce the bug 1. Spin up a new google collab 2. `pip install datasets==2.21.0` 3. `import datasets` 4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` 5. Will throw an error. ### Expected behavior Try steps 1-5 again but replace datasets version with 2.20.0, it will work ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.5 - PyArrow version: 17.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7107/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7102/comments
https://api.github.com/repos/huggingface/datasets/issues/7102/events
https://github.com/huggingface/datasets/issues/7102
2,466,893,106
I_kwDODunzps6TCc0y
7,102
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4", "events_url": "https://api.github.com/users/lajd/events{/privacy}", "followers_url": "https://api.github.com/users/lajd/followers", "following_url": "https://api.github.com/users/lajd/following{/other_user}", "gists_url": "https://api.github.com/users/lajd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lajd", "id": 13192126, "login": "lajd", "node_id": "MDQ6VXNlcjEzMTkyMTI2", "organizations_url": "https://api.github.com/users/lajd/orgs", "received_events_url": "https://api.github.com/users/lajd/received_events", "repos_url": "https://api.github.com/users/lajd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lajd/subscriptions", "type": "User", "url": "https://api.github.com/users/lajd", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow and parquet about the same. However, I was unable to reproduce a drastically slower iteration speed after shuffling in any case when using the revised script -- pasting below:\r\n\r\n```python\r\nimport time\r\nfrom datasets import load_dataset, Dataset, IterableDataset\r\nfrom pathlib import Path\r\nimport torch\r\nimport pandas as pd\r\nimport pickle\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\n\r\ndef generate_random_example():\r\n return {\r\n 'inputs': torch.randn(128).tolist(),\r\n 'indices': torch.randint(0, 10000, (2, 20000)).tolist(),\r\n 'values': torch.randn(20000).tolist(),\r\n }\r\n\r\n\r\ndef generate_shard_data(examples_per_shard: int = 512):\r\n return [generate_random_example() for _ in range(examples_per_shard)]\r\n\r\n\r\ndef save_shard_as_arrow(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a Hugging Face Dataset\r\n dataset = Dataset.from_dict({\r\n 'inputs': [example['inputs'] for example in shard_data],\r\n 'indices': [example['indices'] for example in shard_data],\r\n 'values': [example['values'] for example in shard_data],\r\n })\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}\"\r\n\r\n # Save the dataset to disk using the Arrow format\r\n dataset.save_to_disk(str(shard_write_path))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_parquet(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a pandas DataFrame for easy conversion to Parquet\r\n df = pd.DataFrame(shard_data)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.parquet\"\r\n\r\n # Convert DataFrame to PyArrow Table for Parquet saving\r\n table = pa.Table.from_pandas(df)\r\n\r\n # Save the table as a Parquet file\r\n pq.write_table(table, shard_write_path)\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_binary(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.bin\"\r\n\r\n # Save each example as a serialized binary object using pickle\r\n with open(shard_write_path, 'wb') as f:\r\n for example in shard_data:\r\n f.write(pickle.dumps(example))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef generate_split_shards(save_dir, filetype=\"parquet\", num_shards: int = 16, examples_per_shard: int = 512):\r\n shard_filepaths = []\r\n for shard_idx in range(num_shards):\r\n if filetype == \"parquet\":\r\n shard_filepaths.append(save_shard_as_parquet(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"binary\":\r\n shard_filepaths.append(save_shard_as_binary(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"arrow\":\r\n shard_filepaths.append(save_shard_as_arrow(shard_idx, save_dir, examples_per_shard))\r\n else:\r\n raise ValueError(f\"Unsupported filetype: {filetype}. 
Choose either 'parquet' or 'binary'.\")\r\n return shard_filepaths\r\n\r\n\r\ndef _binary_dataset_generator(files):\r\n for filepath in files:\r\n with open(filepath, 'rb') as f:\r\n while True:\r\n try:\r\n example = pickle.load(f)\r\n yield example\r\n except EOFError:\r\n break\r\n\r\n\r\ndef load_binary_dataset(shard_filepaths):\r\n return IterableDataset.from_generator(\r\n _binary_dataset_generator, gen_kwargs={\"files\": shard_filepaths},\r\n )\r\n\r\n\r\ndef load_parquet_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n return load_dataset(\r\n \"parquet\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_arrow_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n shard_filepaths = [f + \"/data-00000-of-00001.arrow\" for f in shard_filepaths]\r\n return load_dataset(\r\n \"arrow\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_dataset_wrapper(filetype: str, shard_filepaths: list[str]):\r\n if filetype == \"parquet\":\r\n return load_parquet_dataset(shard_filepaths)\r\n if filetype == \"binary\":\r\n return load_binary_dataset(shard_filepaths)\r\n if filetype == \"arrow\":\r\n return load_arrow_dataset(shard_filepaths)\r\n else:\r\n raise ValueError(\"Unsupported filetype\")\r\n\r\n\r\n# Example usage:\r\nsplit = \"train\"\r\nsplit_save_dir = \"/tmp/random_split\"\r\n\r\nfiletype = \"binary\" # or \"parquet\", or \"arrow\"\r\nnum_shards = 16\r\n\r\nshard_filepaths = generate_split_shards(split_save_dir, filetype=filetype, num_shards=num_shards)\r\ndataset = load_dataset_wrapper(filetype=filetype, shard_filepaths=shard_filepaths)\r\n\r\ndataset = dataset.shuffle(buffer_size=100, seed=42)\r\n\r\nstart_time = time.time()\r\nfor count, item in enumerate(dataset):\r\n if count > 0 and count % 100 == 0:\r\n elapsed_time = time.time() - start_time\r\n iterations_per_second = count / elapsed_time\r\n print(f\"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second\")\r\n```", "update: I was able to reproduce the issue you described -- but ONLY if I do \r\n\r\n```\r\nrandom_dataset = random_dataset.with_format(\"numpy\")\r\n```\r\n\r\nIf I do this, I see similar numbers as what you reported. If I do not use numpy format, parquet and arrow are about 17 iterations per second regardless of whether or not we shuffle. Using binary, (again no numpy format tried with this yet), still shows the fastest speeds on average (shuffle and no shuffle) of about 850 it/sec.\r\n\r\nI suspect some issues with arrow and numpy being optimized for sequential reads, and shuffling cuases issuses... hmm" ]
2024-08-14T21:44:44
2024-08-15T16:17:31
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When I load a dataset from a number of arrow files, as in: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) ``` I'm able to get fast iteration speeds when iterating over the dataset without shuffling. When I shuffle the dataset, the iteration speed is reduced by ~1000x. It's very possible the way I'm loading dataset shards is not appropriate; if so please advise! Thanks for the help ### Steps to reproduce the bug Here's full code to reproduce the issue: - Generate a random dataset - Create shards of data independently using Dataset.save_to_disk() - The below will generate 16 shards (arrow files), of 512 examples each ``` import time from pathlib import Path from multiprocessing import Pool, cpu_count import torch from datasets import Dataset, load_dataset split = "train" split_save_dir = "/tmp/random_split" def generate_random_example(): return { 'inputs': torch.randn(128).tolist(), 'indices': torch.randint(0, 10000, (2, 20000)).tolist(), 'values': torch.randn(20000).tolist(), } def generate_shard_dataset(examples_per_shard: int = 512): dataset_dict = { 'inputs': [], 'indices': [], 'values': [] } for _ in range(examples_per_shard): example = generate_random_example() dataset_dict['inputs'].append(example['inputs']) dataset_dict['indices'].append(example['indices']) dataset_dict['values'].append(example['values']) return Dataset.from_dict(dataset_dict) def save_shard(shard_idx, save_dir, examples_per_shard): shard_dataset = generate_shard_dataset(examples_per_shard) shard_write_path = Path(save_dir) / f"shard_{shard_idx}" shard_dataset.save_to_disk(shard_write_path) return str(Path(shard_write_path) / "data-00000-of-00001.arrow") def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512): with Pool(cpu_count()) as pool: args = [(m, save_dir, examples_per_shard) for m in range(num_shards)] shard_filepaths = pool.starmap(save_shard, args) return shard_filepaths shard_filepaths = generate_split_shards(split_save_dir) ``` Load the dataset as IterableDataset: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) random_dataset = random_dataset.with_format("numpy") ``` Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating: Without shuffling, this gives ~1500 iterations/second ``` start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 705.74 iterations/second Processed 200 items at an average of 1169.68 iterations/second Processed 300 items at an average of 1497.97 iterations/second Processed 400 items at an average of 1739.62 iterations/second Processed 500 items at an average of 1931.11 iterations/second` ``` When shuffling, this gives ~3 iterations/second: ``` random_dataset = random_dataset.shuffle(buffer_size=100,seed=42) start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 3.75 iterations/second Processed 200 items at 
an average of 3.93 iterations/second ``` ### Expected behavior Iterations per second should be barely affected by shuffling, especially with a small buffer size ### Environment info Datasets version: 2.21.0 Python 3.10 Ubuntu 22.04
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7102/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7101/comments
https://api.github.com/repos/huggingface/datasets/issues/7101/events
https://github.com/huggingface/datasets/issues/7101
2,466,510,783
I_kwDODunzps6TA_e_
7,101
`load_dataset` from Hub with `name` to specify `config` uses incorrect builder type when multiple data formats are present
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would be to just turn the `parquet` into a `WebDataset`, although I'd still need the Dataset Viewer config limit increasing. In other cases using the same format may not be possible.\r\n\r\nRelevant code: \r\n- [HubDatasetModuleFactoryWithoutScript](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/load.py#L964)\r\n- [get_data_patterns](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/data_files.py#L415)" ]
2024-08-14T18:12:25
2024-08-18T10:33:38
null
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets: ```yaml configs: - config_name: dataception data_files: - path: dataception.parquet split: train default: true - config_name: dataset_5423 data_files: - path: datasets/5423.tar split: train ... - config_name: dataset_721736 data_files: - path: datasets/721736.tar split: train ``` The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`. While testing `load_dataset` I encountered the following error: ```python >>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691") Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "datasets\load.py", line 2145, in load_dataset builder_instance.download_and_prepare( File "datasets\builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "datasets\builder.py", line 1100, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f)) ^^^^^^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 2325, in read_schema file = ParquetFile( ^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 318, in __init__ self.reader.open( File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` The correct file is downloaded, however the incorrect builder type is detected; `parquet` due to other content of the repository. It would appear that the config needs to be taken into account. Note that I have removed the additional configs from the repository because of this issue and there is a limit of 3000 configs anyway so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7101/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7100/comments
https://api.github.com/repos/huggingface/datasets/issues/7100/events
https://github.com/huggingface/datasets/issues/7100
2,465,529,414
I_kwDODunzps6S9P5G
7,100
IterableDataset: cannot resolve features from list of numpy arrays
{ "avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4", "events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}", "followers_url": "https://api.github.com/users/VeryLazyBoy/followers", "following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}", "gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VeryLazyBoy", "id": 18899212, "login": "VeryLazyBoy", "node_id": "MDQ6VXNlcjE4ODk5MjEy", "organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs", "received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events", "repos_url": "https://api.github.com/users/VeryLazyBoy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions", "type": "User", "url": "https://api.github.com/users/VeryLazyBoy", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Assign this issue to me under Hacktoberfest with hacktoberfest label inserted on the issue" ]
2024-08-14T11:01:51
2024-10-03T05:47:23
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When resolving features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features features = _infer_features_from_batch(self.with_format(None)._head()) File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch pa_table = pa.Table.from_pydict(batch) File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 344, in pyarrow.lib.array File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ``` ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np # create a list of numpy arrays iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]}) iter_ds = iter_ds._resolve_features() # errors here ``` ### Expected behavior Features can be successfully resolved. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
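A workaround sketch (untested; it simply avoids handing multi-dimensional numpy arrays to the feature-inference step) is to keep the nested values as plain Python lists inside `map`, so `pa.Table.from_pydict` can infer the types:

```python
from datasets import Dataset
import numpy as np

# Sketch: convert the 2-D array back to nested lists before yielding it,
# so pyarrow only sees nested list values during feature inference.
iter_ds = (
    Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]})
    .to_iterable_dataset()
    .map(lambda x: {'a': [np.array(x['a']).tolist()]})
)
iter_ds = iter_ds._resolve_features()  # expected to resolve without the ArrowInvalid error
```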
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7100/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7097/comments
https://api.github.com/repos/huggingface/datasets/issues/7097/events
https://github.com/huggingface/datasets/issues/7097
2,458,455,489
I_kwDODunzps6SiQ3B
7,097
Some of DownloadConfig's properties are always being overridden in load.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4", "events_url": "https://api.github.com/users/ductai199x/events{/privacy}", "followers_url": "https://api.github.com/users/ductai199x/followers", "following_url": "https://api.github.com/users/ductai199x/following{/other_user}", "gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ductai199x", "id": 29772899, "login": "ductai199x", "node_id": "MDQ6VXNlcjI5NzcyODk5", "organizations_url": "https://api.github.com/users/ductai199x/orgs", "received_events_url": "https://api.github.com/users/ductai199x/received_events", "repos_url": "https://api.github.com/users/ductai199x/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions", "type": "User", "url": "https://api.github.com/users/ductai199x", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-08-09T18:26:37
2024-08-09T18:26:37
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because previously extracted data is simply ignored the next time the dataset is loaded. See the image below: ![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c) ### Steps to reproduce the bug 1. Have a local dataset that contains archived files (zip, tar.gz, etc.) 2. Build a dataset loading script to download and extract these files 3. Run the load_dataset function with a DownloadConfig that specifically sets `force_extract` to False 4. The extraction process will start regardless of whether the archives were extracted previously ### Expected behavior The extraction process should not run when the archives were previously extracted and `force_extract` is set to False. ### Environment info datasets==2.20.0 python3.9
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7097/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7093/comments
https://api.github.com/repos/huggingface/datasets/issues/7093/events
https://github.com/huggingface/datasets/issues/7093
2,454,413,074
I_kwDODunzps6SS18S
7,093
Add Arabic Docs to datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4", "events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}", "followers_url": "https://api.github.com/users/AhmedAlmaghz/followers", "following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}", "gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AhmedAlmaghz", "id": 53489256, "login": "AhmedAlmaghz", "node_id": "MDQ6VXNlcjUzNDg5MjU2", "organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs", "received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events", "repos_url": "https://api.github.com/users/AhmedAlmaghz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions", "type": "User", "url": "https://api.github.com/users/AhmedAlmaghz", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-08-07T21:48:05
2024-08-07T21:48:05
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7093/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7092/comments
https://api.github.com/repos/huggingface/datasets/issues/7092/events
https://github.com/huggingface/datasets/issues/7092
2,451,393,658
I_kwDODunzps6SHUx6
7,092
load_dataset with multiple jsonlines files interprets datastructure too early
{ "avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4", "events_url": "https://api.github.com/users/Vipitis/events{/privacy}", "followers_url": "https://api.github.com/users/Vipitis/followers", "following_url": "https://api.github.com/users/Vipitis/following{/other_user}", "gists_url": "https://api.github.com/users/Vipitis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vipitis", "id": 23384483, "login": "Vipitis", "node_id": "MDQ6VXNlcjIzMzg0NDgz", "organizations_url": "https://api.github.com/users/Vipitis/orgs", "received_events_url": "https://api.github.com/users/Vipitis/received_events", "repos_url": "https://api.github.com/users/Vipitis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vipitis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vipitis/subscriptions", "type": "User", "url": "https://api.github.com/users/Vipitis", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I’ll take a look", "Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined something akin to option 2 in `Expected behavior` I'm assuming that's what you'd like to see done. Is that right?\r\n\r\nIn the meantime, here's a solution for option 1:\r\n\r\n```python\r\nimport datasets\r\n\r\ndata_dir = './data/annotated/api'\r\n\r\nfeatures = datasets.Features({'id': datasets.Value(dtype='string'),\r\n 'name': datasets.Value(dtype='string'),\r\n 'author': datasets.Value(dtype='string'),\r\n 'description': datasets.Value(dtype='string'),\r\n 'tags': datasets.Sequence(feature=datasets.Value(dtype='string'), length=-1),\r\n 'likes': datasets.Value(dtype='int64'),\r\n 'viewed': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'date': datasets.Value(dtype='string'),\r\n 'time_retrieved': datasets.Value(dtype='string'),\r\n 'image_code': datasets.Value(dtype='string'),\r\n 'image_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'common_code': datasets.Value(dtype='string'),\r\n 'sound_code': datasets.Value(dtype='string'),\r\n 'sound_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_a_code': datasets.Value(dtype='string'),\r\n 'buffer_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_b_code': datasets.Value(dtype='string'),\r\n 'buffer_b_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_c_code': datasets.Value(dtype='string'),\r\n 'buffer_c_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 
'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_d_code': datasets.Value(dtype='string'),\r\n 'buffer_d_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'cube_a_code': datasets.Value(dtype='string'),\r\n 'cube_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'thumbnail': datasets.Value(dtype='string'),\r\n 'access': datasets.Value(dtype='string'),\r\n 'license': datasets.Value(dtype='string'),\r\n 'functions': datasets.Sequence(feature=datasets.Sequence(feature=datasets.Value(dtype='int64'), length=-1), length=-1),\r\n 'test': datasets.Value(dtype='string')})\r\n\r\ndatasets.load_dataset('json', data_dir=data_dir, features=features)\r\n```", "As pointed out by @hvaara, you can define explicit features so that you avoid the `datasets` library having to infer them (from the first few samples).\r\n\r\nNote that the feature inference is done from the first few samples of JSON-Lines on purpose, so that the entire data does not need to be parsed twice (it would be inefficient for very large datasets).", "I understand this. But can there be a solution that doesn't require the end user to write this shema by hand(in my case there is some fields that contain a nested structure)? \r\n\r\nMaybe offer an option to infer the shema automatically before loading the dataset. Or perhaps - trigger such a method when this error arises? \r\n\r\nIs this \"first few files\" heuristics accessible via kwargs perhaps. Maybe an error that says \r\n`Cloud not cast some structure into feature shema, consider increasing shema_files to a large number or all\".\r\n\r\nThere might be efficient implementations to solve this problem for larger datasets. ", "@Vipitis raised a good point on the HF Discord regarding the use of a [dataset script](https://huggingface.co/docs/datasets/en/dataset_script) to provide the schema during initialization. 
Using this approach requires setting `trust_remote_code=True`, which is not allowed in certain evaluation frameworks.\r\n\r\nFor cases where using a dataset script is acceptable, would it be helpful to add functionality to the library (not necessarily in `load_dataset`) that can automatically discover the feature definitions and output them, so you don't have to manually define them?\r\n\r\nAlternatively, for situations where features need to be known at load-time without using a dataset script, another option could be loading the dataset schema from a file format that doesn't require `trust_remote_code=True`." ]
2024-08-06T17:42:55
2024-08-08T16:35:01
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure. ```python from datasets import load_dataset ds = load_dataset("json", data_dir="./data/annotated/api") ``` you get a long error trace, where in the middle it says something like ```cs TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null ``` toy example: (on request) ### Expected behavior Some suggestions 1. give a better error message to the user 2. consider all files before deciding on a data structure for a given column. 3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow) as a workaround I have lazily implemented the following (essentially step 2) ```python import os import jsonlines import datasets api_files = os.listdir("./data/annotated/api") api_files = [f"./data/annotated/api/{f}" for f in api_files] api_file_contents = [] for f in api_files: with jsonlines.open(f) as reader: for obj in reader: api_file_contents.append(obj) ds = datasets.Dataset.from_list(api_file_contents) ``` this works fine for my usecase, but is potentially slower and less memory efficient for really large datasets (where this is unlikely to happen in the first place). ### Environment info - `datasets` version: 2.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7092/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7090/comments
https://api.github.com/repos/huggingface/datasets/issues/7090/events
https://github.com/huggingface/datasets/issues/7090
2,449,699,490
I_kwDODunzps6SA3Ki
7,090
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-08-06T00:35:05
2024-08-06T00:35:05
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Tests should use the same Python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11. Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: 'python' ``` ### Steps to reproduce the bug Regular test run using pytest ### Expected behavior n/a ### Environment info FreeBSD 14.1
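A minimal sketch of the kind of fix implied here (an assumption about how the test shells out, not the actual test code): use `sys.executable` instead of the literal string `'python'`, so the subprocess reuses the interpreter the tests were launched with:

```python
import subprocess
import sys

# Sketch: call the current interpreter rather than assuming a binary named
# "python" exists on PATH (FreeBSD installs python3.11 without that alias).
result = subprocess.run([sys.executable, "--version"], capture_output=True, text=True)
print(result.stdout.strip())
```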
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7090/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7089/comments
https://api.github.com/repos/huggingface/datasets/issues/7089/events
https://github.com/huggingface/datasets/issues/7089
2,449,479,500
I_kwDODunzps6SABdM
7,089
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-08-05T21:05:11
2024-08-05T21:05:11
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7089/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7088/comments
https://api.github.com/repos/huggingface/datasets/issues/7088/events
https://github.com/huggingface/datasets/issues/7088
2,447,383,940
I_kwDODunzps6R4B2E
7,088
Disable warning when using with_format on tensors
{ "avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4", "events_url": "https://api.github.com/users/Haislich/events{/privacy}", "followers_url": "https://api.github.com/users/Haislich/followers", "following_url": "https://api.github.com/users/Haislich/following{/other_user}", "gists_url": "https://api.github.com/users/Haislich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Haislich", "id": 42048782, "login": "Haislich", "node_id": "MDQ6VXNlcjQyMDQ4Nzgy", "organizations_url": "https://api.github.com/users/Haislich/orgs", "received_events_url": "https://api.github.com/users/Haislich/received_events", "repos_url": "https://api.github.com/users/Haislich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Haislich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Haislich/subscriptions", "type": "User", "url": "https://api.github.com/users/Haislich", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-08-05T00:45:50
2024-08-05T00:45:50
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloader""" TRAIN = "train" TEST = "test" VAL = "validation" class ImageNetDataLoader(DataLoader): """Create an ImageNetDataloader""" _preprocess_transform = transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), ] ) def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN): dataset = ( load_dataset( "imagenet-1k", split=split, trust_remote_code=True, streaming=True, ) .with_format("torch") .map(self._preprocess) ) super().__init__(dataset=dataset, batch_size=batch_size) def _preprocess(self, data): if data["image"].shape[0] < 3: data["image"] = data["image"].repeat(3, 1, 1) data["image"] = self._preprocess_transform(data["image"].float()) return data if __name__ == "__main__": dataloader = ImageNetDataLoader(batch_size=2) for batch in dataloader: print(batch["image"]) break ``` This will trigger a user warning: ```bash datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` ### Motivation This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`. This function handles values of different types; according to some tests, the possible value types are `int`, `numpy.ndarray` and `torch.Tensor`. In particular, this warning is triggered when the value type is `torch.Tensor`, because that is not the suggested PyTorch way of doing it: - https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor - https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary. ### Your contribution A solution that I found to be working is to change the current way of doing it: ```python return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` To: ```python if isinstance(value, torch.Tensor): tensor = value.clone().detach() if self.torch_tensor_kwargs.get('requires_grad', False): tensor.requires_grad_() return tensor else: return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ```
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7088/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7087/comments
https://api.github.com/repos/huggingface/datasets/issues/7087/events
https://github.com/huggingface/datasets/issues/7087
2,447,158,643
I_kwDODunzps6R3K1z
7,087
Unable to create dataset card for Lushootseed language
{ "avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4", "events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}", "followers_url": "https://api.github.com/users/vaishnavsudarshan/followers", "following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other_user}", "gists_url": "https://api.github.com/users/vaishnavsudarshan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vaishnavsudarshan", "id": 134876525, "login": "vaishnavsudarshan", "node_id": "U_kgDOCAoNbQ", "organizations_url": "https://api.github.com/users/vaishnavsudarshan/orgs", "received_events_url": "https://api.github.com/users/vaishnavsudarshan/received_events", "repos_url": "https://api.github.com/users/vaishnavsudarshan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vaishnavsudarshan/subscriptions", "type": "User", "url": "https://api.github.com/users/vaishnavsudarshan", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/huggingface.js/issues/834\r\n\r\n", "As explained in the reported issue above, the problem only appears in the autocomplete field: you can still enter the `lut` language directly in the markdown editor window." ]
2024-08-04T14:27:04
2024-08-06T06:59:23
2024-08-06T06:59:22
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options? ### Motivation I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents. ### Your contribution I can submit a pull request.
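As the comments above note, the language code can still be written directly into the card metadata; a hedged sketch using the `huggingface_hub` card utilities follows (the repo id below is a placeholder, not the reporter's actual dataset):

```python
from huggingface_hub import DatasetCard, DatasetCardData

# "lut" is the ISO 639-3 code for Lushootseed; writing it into the YAML block
# bypasses the web UI autocomplete entirely.
card_data = DatasetCardData(language="lut", pretty_name="Lushootseed Wikipedia")
card = DatasetCard(f"---\n{card_data.to_yaml()}\n---\n\n# Lushootseed Wikipedia\n")
card.push_to_hub("your-username/lushootseed-wikipedia")
```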
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7087/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7086/comments
https://api.github.com/repos/huggingface/datasets/issues/7086/events
https://github.com/huggingface/datasets/issues/7086
2,445,516,829
I_kwDODunzps6Rw6Ad
7,086
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tginart", "id": 11379648, "login": "tginart", "node_id": "MDQ6VXNlcjExMzc5NjQ4", "organizations_url": "https://api.github.com/users/tginart/orgs", "received_events_url": "https://api.github.com/users/tginart/received_events", "repos_url": "https://api.github.com/users/tginart/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "type": "User", "url": "https://api.github.com/users/tginart", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-08-02T18:12:23
2024-08-02T18:12:23
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I have been running lm-eval-harness a lot, which has resulted in API rate limit errors. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset is in .cache/huggingface/datasets 5. ??? ### Expected behavior We should not run into API rate limits if we have cached the dataset. ### Environment info datasets 2.16.0 python 3.10.4
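A possible stop-gap, assuming the dataset really is fully cached: force offline mode so `load_dataset` resolves everything from the local cache instead of calling the Hub API.

```python
import os

# Assumption: offline mode must be set before importing datasets so it takes
# effect; with it enabled, load_dataset reuses the cached copy and never hits
# the Hub (and therefore never hits the rate limit).
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

dataset = load_dataset("TAUR-Lab/MuSR")
```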
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7086/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7085/comments
https://api.github.com/repos/huggingface/datasets/issues/7085/events
https://github.com/huggingface/datasets/issues/7085
2,440,008,618
I_kwDODunzps6Rb5Oq
7,085
[Regression] IterableDataset is broken on 2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AjayP13", "id": 5404177, "login": "AjayP13", "node_id": "MDQ6VXNlcjU0MDQxNzc=", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "repos_url": "https://api.github.com/users/AjayP13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "type": "User", "url": "https://api.github.com/users/AjayP13", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[ "@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our tests failing in case it helps you figure out where this is coming from. I found it hard to reason through the resumable IterableDataset code though, so hopefully you have more intuition to implement a proper fix.", "I believe these lines in `TypedExamplesIterable` are responsible for stopping the re-iteration of `IterableDataset`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ebec2691fb1e40145429f63375cef3f46d3011ab/src/datasets/iterable_dataset.py#L1616-L1619\r\n\r\nIn contrast to other `Iterable`s, there is no check on whether `self._state_dict` is None or not. This particular case stands out and seems less straightforward to comprehend why. @lhoestq could you please assist us with this? Your help is much appreciated.", "Thanks for reporting for investigating - your assumption was correct @VeryLazyBoy !" ]
2024-07-31T13:01:59
2024-08-22T14:49:37
2024-08-22T14:49:07
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug In the latest version of datasets there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't. ### Steps to reproduce the bug Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`) ``` #!/bin/bash # List of dataset versions to test versions=("2.17.0" "2.20.0") # Loop through each version for version in "${versions[@]}"; do # Install the specific version of the datasets library pip3 install -q datasets=="$version" 2>/dev/null # Run the Python script python3 - <<EOF from datasets import IterableDataset from datasets.features.features import Features, Value def test_gen(): yield from [{"foo": i} for i in range(10)] features = Features([("foo", Value("int64"))]) d = IterableDataset.from_generator(test_gen, features=features) mapped = d.map(lambda row: {"foo": row["foo"] * 2}) column = mapped.select_columns(["foo"]) print("Version $version - Iterate Once:", list(column)) print("Version $version - Iterate Twice:", list(column)) EOF done ``` The output looks like this: ``` Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Twice: [] ``` ### Expected behavior The expected behavior is that version 2.20.0 should behave the same as 2.17.0. ### Environment info `datasets==2.20.0` on any platform.
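Until the regression is fixed, a hedged workaround is to rebuild the streaming pipeline for every pass rather than re-iterating the same object; the sketch below reuses the generator from the reproduction script above.

```python
from datasets import IterableDataset
from datasets.features.features import Features, Value


def test_gen():
    yield from [{"foo": i} for i in range(10)]


features = Features([("foo", Value("int64"))])


def build_column():
    # Each call constructs fresh iterables with fresh internal state, so every
    # pass starts from the beginning even on datasets 2.20.0.
    d = IterableDataset.from_generator(test_gen, features=features)
    return d.map(lambda row: {"foo": row["foo"] * 2}).select_columns(["foo"])


print("Iterate once:", list(build_column()))
print("Iterate twice:", list(build_column()))
```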
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7085/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7084/comments
https://api.github.com/repos/huggingface/datasets/issues/7084/events
https://github.com/huggingface/datasets/issues/7084
2,439,519,534
I_kwDODunzps6RaB0u
7,084
More easily support streaming local files
{ "avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4", "events_url": "https://api.github.com/users/fschlatt/events{/privacy}", "followers_url": "https://api.github.com/users/fschlatt/followers", "following_url": "https://api.github.com/users/fschlatt/following{/other_user}", "gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fschlatt", "id": 23191892, "login": "fschlatt", "node_id": "MDQ6VXNlcjIzMTkxODky", "organizations_url": "https://api.github.com/users/fschlatt/orgs", "received_events_url": "https://api.github.com/users/fschlatt/received_events", "repos_url": "https://api.github.com/users/fschlatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions", "type": "User", "url": "https://api.github.com/users/fschlatt", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-07-31T09:03:15
2024-07-31T09:05:58
null
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files using `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`. Streaming the files locally does not work well for either file type, for two different reasons. **Arrow files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738), all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue. **Parquet files** When running `load_dataset("parquet", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved, because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other". ### Your contribution I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally whether the tests pass or new tests need to be added. IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083
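In the meantime, a hedged sketch of the "download first, then stream" idea with the current APIs — the subset pattern and target directory below are illustrative, and `local_dir` is used under the assumption that recent `huggingface_hub` versions then materialize plain files rather than symlinks:

```python
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download only the parquet shards of one dump into a plain directory
# (no symlinks), then stream them from disk with the parquet builder.
local_dir = snapshot_download(
    repo_id="HuggingFaceFW/fineweb-edu",
    repo_type="dataset",
    allow_patterns=["data/CC-MAIN-2024-10/*.parquet"],  # illustrative subset
    local_dir="./fineweb-edu",
)

ds = load_dataset(
    "parquet",
    data_files={"train": f"{local_dir}/data/CC-MAIN-2024-10/*.parquet"},
    streaming=True,
)
```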
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7084/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7080/comments
https://api.github.com/repos/huggingface/datasets/issues/7080/events
https://github.com/huggingface/datasets/issues/7080
2,434,275,664
I_kwDODunzps6RGBlQ
7,080
Generating train split takes a long time
{ "avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4", "events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}", "followers_url": "https://api.github.com/users/alexanderswerdlow/followers", "following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}", "gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexanderswerdlow", "id": 35648800, "login": "alexanderswerdlow", "node_id": "MDQ6VXNlcjM1NjQ4ODAw", "organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs", "received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events", "repos_url": "https://api.github.com/users/alexanderswerdlow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions", "type": "User", "url": "https://api.github.com/users/alexanderswerdlow", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@alexanderswerdlow \r\nWhen no specific split is mentioned, the load_dataset library will load all available splits of the dataset. For example, if a dataset has \"train\" and \"test\" splits, the load_dataset function will load both into the DatasetDict object.\r\n\r\n![image](https://github.com/user-attachments/assets/379e6f57-7e1b-4cc3-bc36-dae3e878a51c)\r\n\r\n\r\nThe dataset PixArt-alpha/SAM-LLaVA-Captions10M may have been uploaded with different predefined splits (e.g., \"train\", \"test\", etc.), and by default, Hugging Face will load all splits unless you specifically request only one.\r\n\r\n### If you want to load only a specific split (e.g., only the \"train\" set), you can specify it in the split parameter like this:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\", split=\"train\")\r\n```\r\n\r\n### You can also load multiple splits if needed:\r\n```python\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\", split=[\"train\", \"test\"])\r\n```\r\n\r\n", "@alexanderswerdlow, I will now work on this..\r\n\r\n## Idea:\r\nWhenever this code has ran:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\")\r\n```\r\n\r\nIt should show all the splits of the datasets, and user has to choose which one should be loaded before generating a split like this,,\r\n\r\n![image](https://github.com/user-attachments/assets/8fbc604f-f0a5-4a59-a63e-aa4c26442c83)\r\n" ]
2024-07-29T01:42:43
2024-10-02T15:31:22
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
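If materializing the split on disk is not required, a hedged workaround is to stream the webdataset, which skips the "Generating train split" step entirely:

```python
from datasets import load_dataset

# Streaming decodes shards on the fly instead of indexing everything up front.
dataset = load_dataset(
    "PixArt-alpha/SAM-LLaVA-Captions10M", split="train", streaming=True
)

for example in dataset:
    print(example.keys())
    break
```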
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7080/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7079/comments
https://api.github.com/repos/huggingface/datasets/issues/7079/events
https://github.com/huggingface/datasets/issues/7079
2,433,363,298
I_kwDODunzps6RCi1i
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "same issue here. @albertvillanova @lhoestq ", "Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_reuter_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_wp_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_essay_reduced\r\n\r\nOddly enough, the system status looks good: /static-proxy?url=https%3A%2F%2Fstatus.huggingface.co%2F", "Hey how to download these datasets using git cloning?", "Also reported here\r\nhttps://github.com/huggingface/huggingface_hub/issues/2425", "I have been getting the same error for the past 8 hours as well", "Same error since yesterday, fails on any new dataset created", "Same here. I cannot download the HelpSteer2 dataset: https://huggingface.co/datasets/nvidia/HelpSteer2 which has been uploaded about a month ago", "> Hey how to download these datasets using git cloning?\n\nYou'll find a guide [here](https://huggingface.co/docs/hub/en/datasets-downloading) 👍🏻", "Same here for imdb dataset", "It also happens with this dataset: https://huggingface.co/datasets/ylacombe/jenny-tts-6h-tagged", "same here for all datsets in the sentence-tramsformers repo and related collections.\r\n\r\nsame issue with dataset that i recently uploaded on my repo.\r\nseems that the upload date of the datset is not relevat (getting this issue with both old datasets and newer ones)\r\n\r\nfor some reason, i was able to get the dataset by turning it private and accessing it with the id token (accessing it as public while providing the token doesn not work)..... 
but i can say if that is just a random coincidence.\r\n\r\nseems not much deterministic, for a specific dataset (sentence-transformer nq ) , that was \"down\" since some hours , worked for like 5-10 minutes, then stopped again\r\n\r\nnow even this dataset (that worked since some min ago, and that i'm in the middle of processing steps) stopped working: _https://huggingface.co/datasets/bobox/msmarco-bm25-EduScore/_\r\n\r\nas already pointed out, there are no updates on **_/static-proxy?url=https%3A%2F%2Fstatus.huggingface.co%2F_**%5Cr%5Cn%5Cr%5Cn%5C%5Cn%5Cr%5Cn%5C%5Cn%5Cr%5Cn%5Cr%5Cnan example of the whole error message:\r\n``` \r\nHfHubHTTPError \r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\r\n 2592 \r\n 2593 # Create a dataset builder\r\n-> 2594 builder_instance = load_dataset_builder(\r\n 2595 path=path,\r\n 2596 name=name,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\r\n 2264 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 2265 download_config.storage_options.update(storage_options)\r\n-> 2266 dataset_module = dataset_module_factory(\r\n 2267 path,\r\n 2268 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1912 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1913 ) from None\r\n-> 1914 raise e1 from None\r\n 1915 else:\r\n 1916 raise FileNotFoundError(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1832 hf_api = HfApi(config.HF_ENDPOINT)\r\n 1833 try:\r\n-> 1834 dataset_info = hf_api.dataset_info(\r\n 1835 repo_id=path,\r\n 1836 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 113 \r\n--> 114 return fn(*args, **kwargs)\r\n 115 \r\n 116 return _inner_fn # type: ignore\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in dataset_info(self, repo_id, revision, timeout, files_metadata, token)\r\n 2362 \r\n 2363 r = get_session().get(path, headers=headers, timeout=timeout, params=params)\r\n-> 2364 hf_raise_for_status(r)\r\n 2365 data = r.json()\r\n 2366 return DatasetInfo(**data)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in 
hf_raise_for_status(response, endpoint_name)\r\n 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 370 # as well (request id and/or server error message)\r\n--> 371 raise HfHubHTTPError(str(e), response=response) from e\r\n 372 \r\n 373 \r\n\r\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/bobox/xSum-processed (Request ID: Root=1-66a527f0-756cfbc35cc466f075382289;7d5dc06a-37e9-4c22-874d-92b0b1023276)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!\r\n``` ", "we're working on a fix !", "We fixed the issue, you can load datasets again, sorry for the inconvenience !", "I can confirm, it's working now. I can load the dataset, yay. Thank you @lhoestq ", "@lhoestq thank you so much! ", "Hi I'm getting the same error with this [dataset](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) \r\nWorking on the course of stable diffusion , trying to run this [notebook](https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/unit1/01_introduction_to_diffusers.ipynb#scrollTo=-yX-MZhSsxwp) \r\nthis is the error: \r\n`HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset/resolve/3cdedf844922ab40393d46d4c7f81c596e1c6d45/data/train-00000-of-00001.parquet (Request ID: Root=1-66ed3481-3393f4ab268b711440d31e02;c3ca2a7d-ae7b-4ba3-9947-9426711946a8)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!`\r\n\r\n", "Thanks for reporting, we are investigating !" ]
2024-07-27T08:21:03
2024-09-20T13:26:25
2024-07-27T19:52:30
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Newly uploaded datasets (since yesterday) yield an error; old datasets work fine. It seems like the datasets API server returns a 500. I'm getting the same error when I invoke `load_dataset` with my dataset. There is a long discussion about it here, but I'm not sure anyone from Hugging Face has seen it. /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fhfhubhttperror-500-server-error-internal-server-error-for-url%2F99580%2F1 ### Steps to reproduce the bug This API URL: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 responds with: ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Expected behavior No error should be returned for newer datasets. With older datasets, loading works fine. ### Environment info # Browser When I access the API in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Request headers ``` Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8 Accept-Encoding gzip, deflate, br, zstd Accept-Language en-US,en;q=0.5 Connection keep-alive Host huggingface.co Priority u=1 Sec-Fetch-Dest document Sec-Fetch-Mode navigate Sec-Fetch-Site cross-site Upgrade-Insecure-Requests 1 User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0 ``` ### Response headers ``` X-Firefox-Spdy h2 access-control-allow-origin https://huggingface.co access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range content-length 80 content-type application/json; charset=utf-8 cross-origin-opener-policy same-origin date Fri, 26 Jul 2024 19:09:45 GMT etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c" referrer-policy strict-origin-when-cross-origin vary Origin via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront) x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ== x-amz-cf-pop CPH50-C1 x-cache Error from cloudfront x-error-message Internal Error - We're working hard to fix this as soon as possible! x-powered-by huggingface-moon x-request-id Root=1-66a3f479-026417465ef42f49349fdca1 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7079/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7077/comments
https://api.github.com/repos/huggingface/datasets/issues/7077/events
https://github.com/huggingface/datasets/issues/7077
2,432,345,489
I_kwDODunzps6Q-qWR
7,077
column_names ignored by load_dataset() when loading CSV file
{ "avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4", "events_url": "https://api.github.com/users/luismsgomes/events{/privacy}", "followers_url": "https://api.github.com/users/luismsgomes/followers", "following_url": "https://api.github.com/users/luismsgomes/following{/other_user}", "gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luismsgomes", "id": 9130265, "login": "luismsgomes", "node_id": "MDQ6VXNlcjkxMzAyNjU=", "organizations_url": "https://api.github.com/users/luismsgomes/orgs", "received_events_url": "https://api.github.com/users/luismsgomes/received_events", "repos_url": "https://api.github.com/users/luismsgomes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions", "type": "User", "url": "https://api.github.com/users/luismsgomes", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug if you pass `names` instead of `column_names`." ]
2024-07-26T14:18:04
2024-07-30T07:52:26
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg. ### Expected behavior The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.24.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
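As the comment above suggests, a working sketch until the bug is fixed is to pass the pandas-style `names` argument instead of `column_names` (the file path and column names below are placeholders):

```python
from datasets import load_dataset

# `names` is forwarded to pandas.read_csv by the CSV builder, so the given
# column names are used and the first line of the file is treated as data.
ds = load_dataset(
    "csv",
    data_files={"train": "my_data.csv"},
    names=["source", "target", "score"],
    split="train",
)
print(ds.column_names)  # ['source', 'target', 'score']
```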
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7077/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7073/comments
https://api.github.com/repos/huggingface/datasets/issues/7073/events
https://github.com/huggingface/datasets/issues/7073
2,431,706,568
I_kwDODunzps6Q8OXI
7,073
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\n/static-proxy?url=https%3A%2F%2Fhub-ci.huggingface.co%2Fapi%2Fdatasets%2F__DUMMY_TRANSFORMERS_USER__%2Ftest-dataset-5188a8-17219154347516%2Fpreupload%2Frefs%252Fpr%252F1.%5Cr%5CnInvalid rev id: refs/pr/1\r\n```\r\n@Wauplin @huggingface/datasets @huggingface/moon-landing @huggingface/moon-landing-back ", "I have temporarily fixed the CI with:\r\n- #7074\r\n\r\nHowever, the underlying issue must be fixed and #7074 must be reverted.", "Hmm does it do the preupload call before creating the ref cc @Wauplin ?\r\n\r\n(in that case it should do a preupload call on the base branch with `?create_pr=1`)", "@coyotte508, the CI test was implemented 2 months ago and it was working OK until yesterday. See the CI status of the commits in the main branch of `datasets`: https://github.com/huggingface/datasets/commits/main/", "Yes i get that\r\n\r\nWe changed the preupload response to return the commit id in https://github.com/huggingface-internal/moon-landing/pull/10756\r\n\r\nThis line is probably causing the error: https://github.com/huggingface-internal/moon-landing/pull/10756/files#diff-558f6f9865e30bfa091b94d6a4a900138103ddb4eb0bec96b6deec5bf5626fa0R2322\r\n\r\nIt's weird the error is returned, it means that maybe a ref with 0 history (not even the first commit) was created\r\n\r\nDoes this change have any impact in production, or just the CI test? If it's just the CI test it should be fixed on your side, if it impacts production we can look at a solution", "@coyotte508 it impacts production: `convert_to_parquet` raises the above error when the dataset has more that one configs/subsets:\r\n- First subset calls `push_to_hub` with `create_pr=True`\r\n- Second subset uses the `refs/pr/#` returned by the call above, and calls `push_to_hub` with `revision=\"refs/pr/#\"`", "I tried removing the `mock_commit` call: https://github.com/huggingface/datasets/pull/7076\r\n\r\nAnd the tests seem to work.\r\n\r\nSo it's probably because the commit is not actually called, it doesn't actually create the pull request on the remote (and the associated `refs/pr/1`). But the `preupload` call is not mocked.\r\n\r\nAnyway it shouldn't impact production, since production isn't mocked", "@coyotte508 thanks a lot for the investigation and sorry for the noise. \r\nI promise not trying to fix things when I have a slight fever: my head does not work well.\r\n\r\nWe need indeed to mock `preupload_lfs_files`: before it was not necessary, but now it is.", "I fixed the test in:\r\n- #7078\r\n\r\nThanks again, @coyotte508." ]
2024-07-26T08:27:41
2024-07-27T05:48:02
2024-07-26T09:16:13
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision Not Found for url: /static-proxy?url=https%3A%2F%2Fhub-ci.huggingface.co%2Fapi%2Fdatasets%2F__DUMMY_TRANSFORMERS_USER__%2Ftest-dataset-5188a8-17219154347516%2Fpreupload%2Frefs%252Fpr%252F1. Invalid rev id: refs/pr/1 ``` ``` /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet dataset.push_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub api.preupload_lfs_files( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files _fetch_upload_modes( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn return fn(*args, **kwargs) /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes hf_raise_for_status(resp) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7073/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7072/comments
https://api.github.com/repos/huggingface/datasets/issues/7072/events
https://github.com/huggingface/datasets/issues/7072
2,430,577,916
I_kwDODunzps6Q36z8
7,072
nm
{ "avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4", "events_url": "https://api.github.com/users/brettdavies/events{/privacy}", "followers_url": "https://api.github.com/users/brettdavies/followers", "following_url": "https://api.github.com/users/brettdavies/following{/other_user}", "gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brettdavies", "id": 26392883, "login": "brettdavies", "node_id": "MDQ6VXNlcjI2MzkyODgz", "organizations_url": "https://api.github.com/users/brettdavies/orgs", "received_events_url": "https://api.github.com/users/brettdavies/received_events", "repos_url": "https://api.github.com/users/brettdavies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions", "type": "User", "url": "https://api.github.com/users/brettdavies", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2024-07-25T17:03:24
2024-07-25T20:36:11
2024-07-25T20:36:11
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4", "events_url": "https://api.github.com/users/brettdavies/events{/privacy}", "followers_url": "https://api.github.com/users/brettdavies/followers", "following_url": "https://api.github.com/users/brettdavies/following{/other_user}", "gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brettdavies", "id": 26392883, "login": "brettdavies", "node_id": "MDQ6VXNlcjI2MzkyODgz", "organizations_url": "https://api.github.com/users/brettdavies/orgs", "received_events_url": "https://api.github.com/users/brettdavies/received_events", "repos_url": "https://api.github.com/users/brettdavies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions", "type": "User", "url": "https://api.github.com/users/brettdavies", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7072/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7071/comments
https://api.github.com/repos/huggingface/datasets/issues/7071/events
https://github.com/huggingface/datasets/issues/7071
2,430,313,011
I_kwDODunzps6Q26Iz
7,071
Filter hangs
{ "avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4", "events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}", "followers_url": "https://api.github.com/users/lucienwalewski/followers", "following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}", "gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucienwalewski", "id": 61711045, "login": "lucienwalewski", "node_id": "MDQ6VXNlcjYxNzExMDQ1", "organizations_url": "https://api.github.com/users/lucienwalewski/orgs", "received_events_url": "https://api.github.com/users/lucienwalewski/received_events", "repos_url": "https://api.github.com/users/lucienwalewski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions", "type": "User", "url": "https://api.github.com/users/lucienwalewski", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-25T15:29:05
2024-07-25T15:36:59
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('lcolonn/patfig', split='test') ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') ``` Eventually I ctrl+C and I obtain this stack trace: ``` >>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter indices = self.map( ^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single batch = apply_function_on_filtered_inputs( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function num_examples = len(batch[next(iter(batch.keys()))]) ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__ value = self.format(key) ^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format return self.formatter.format_column(self.pa_table.select([key])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column return self.features.decode_column(column, column_name) if self.features else column ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load n, err_code = decoder.decode(b) ^^^^^^^^^^^^^^^^^ KeyboardInterrupt ``` Warning! This can even seem to cause some computers to crash. ### Expected behavior Should return the filtered dataset ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7071/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7070/comments
https://api.github.com/repos/huggingface/datasets/issues/7070/events
https://github.com/huggingface/datasets/issues/7070
2,430,285,235
I_kwDODunzps6Q2zWz
7,070
How does set_transform affect batch size?
{ "avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4", "events_url": "https://api.github.com/users/VafaKnm/events{/privacy}", "followers_url": "https://api.github.com/users/VafaKnm/followers", "following_url": "https://api.github.com/users/VafaKnm/following{/other_user}", "gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VafaKnm", "id": 103993288, "login": "VafaKnm", "node_id": "U_kgDOBjLPyA", "organizations_url": "https://api.github.com/users/VafaKnm/orgs", "received_events_url": "https://api.github.com/users/VafaKnm/received_events", "repos_url": "https://api.github.com/users/VafaKnm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions", "type": "User", "url": "https://api.github.com/users/VafaKnm", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-25T15:19:34
2024-07-25T15:19:34
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I am trying to fine-tune w2v-bert for ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So i change the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_features[0] input_length = len(input_features) labels = processor.tokenizer(batch["text"], padding=False).input_ids batch = { "input_features": [input_features], "input_length": [input_length], "labels": [labels] } return batch train_ds.set_transform(prepare_dataset) val_ds.set_transform(prepare_dataset) ``` After this, I also had to change the DataCollatorCTCWithPadding class like this: ``` @dataclass class DataCollatorCTCWithPadding: processor: Wav2Vec2BertProcessor padding: Union[bool, str] = True def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # Separate input_features and labels input_features = [{"input_features": feature["input_features"][0]} for feature in features] labels = [feature["labels"][0] for feature in features] # Pad input features batch = self.processor.pad( input_features, padding=self.padding, return_tensors="pt", ) # Pad and process labels label_features = self.processor.tokenizer.pad( {"input_ids": labels}, padding=self.padding, return_tensors="pt", ) labels = label_features["input_ids"] attention_mask = label_features["attention_mask"] # Replace padding with -100 to ignore these tokens during loss calculation labels = labels.masked_fill(attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` But now a strange thing is happening, no matter how much I increase the batch size, the amount of V-RAM GPU usage does not change, while the number of total steps in the progress-bar (logging) changes. Is this normal or have I made a mistake? ### Steps to reproduce the bug i can share my code if needed ### Expected behavior Equal to the batch size value, the set_transform function is applied to the dataset and given to the model as a batch. ### Environment info all updated versions
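For context on the question: `set_transform` does not define a batch size by itself. The registered function is applied lazily to whatever rows are requested, so the batch the transform sees is decided by whoever indexes the dataset (the `DataLoader`/`Trainer`), and GPU memory depends on what the collator finally hands to the model. A small self-contained illustration with toy data (no audio processing), just to show that the requested slice drives what the transform receives:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

def on_the_fly(batch):
    # `batch` holds exactly the rows that were requested, as lists.
    n = len(batch["x"])
    return {"x2": [v * 2 for v in batch["x"]], "rows_seen": [n] * n}

ds.set_transform(on_the_fly)

print(ds[0])   # transform called on 1 row  -> {'x2': 0, 'rows_seen': 1}
print(ds[:4])  # transform called on 4 rows -> {'x2': [0, 2, 4, 6], 'rows_seen': [4, 4, 4, 4]}
```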
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7070/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7067/comments
https://api.github.com/repos/huggingface/datasets/issues/7067/events
https://github.com/huggingface/datasets/issues/7067
2,425,460,168
I_kwDODunzps6QkZXI
7,067
Convert_to_parquet fails for datasets with multiple configs
{ "avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4", "events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}", "followers_url": "https://api.github.com/users/HuangZhen02/followers", "following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}", "gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HuangZhen02", "id": 97585031, "login": "HuangZhen02", "node_id": "U_kgDOBdEHhw", "organizations_url": "https://api.github.com/users/HuangZhen02/orgs", "received_events_url": "https://api.github.com/users/HuangZhen02/received_events", "repos_url": "https://api.github.com/users/HuangZhen02/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions", "type": "User", "url": "https://api.github.com/users/HuangZhen02", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Many users have encountered the same issue, which has caused inconvenience.\r\n\r\n/static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fconvert-to-parquet-fails-for-datasets-with-multiple-configs%2F86733%5Cr%5Cn", "Thanks for reporting.\r\n\r\nI will make the code more robust.", "I have opened an issue in the huggingface-hub repo:\r\n- https://github.com/huggingface/huggingface_hub/issues/2419\r\n\r\nI am opening a PR to avoid calling `create_branch` if the branch already exists." ]
2024-07-23T15:09:33
2024-07-30T10:51:02
2024-07-30T10:51:02
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error: ``` Traceback (most recent call last): File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run dataset.push_to_hub( File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f) Bad request: Invalid reference for a branch: refs/pr/1 ```
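The failing call is `create_branch(..., exist_ok=True)` for the second config, once the `refs/pr/1` revision created while pushing the first config already exists. The fix that was later merged avoids calling `create_branch` when the reference is already there; a rough sketch of such a guard with `huggingface_hub` (illustrative only, not the actual patch):

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import BadRequestError

api = HfApi()

def ensure_revision(repo_id, revision, token=None):
    # "Invalid reference for a branch: refs/pr/1" means the PR ref already exists,
    # so there is nothing left to create in that case.
    try:
        api.create_branch(repo_id, branch=revision, repo_type="dataset", token=token, exist_ok=True)
    except BadRequestError:
        pass
```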
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7067/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7066/comments
https://api.github.com/repos/huggingface/datasets/issues/7066/events
https://github.com/huggingface/datasets/issues/7066
2,425,125,160
I_kwDODunzps6QjHko
7,066
One subset per file in repo?
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-23T12:43:59
2024-07-23T12:43:59
null
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Right now we consider all the files of a dataset to be the same data, e.g.

```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```

but in a case like this, each file is actually a different subset of the dataset and should be loaded separately

```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```

It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits?
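A minimal sketch of the heuristic described above (not how `datasets` resolves data files today): strip digits from the file names and group the files whose stripped names match.

```python
import re
from collections import defaultdict

def group_files_by_digitless_name(filenames):
    # train0.jsonl / train1.jsonl collapse to the same key, animals.jsonl stays alone.
    groups = defaultdict(list)
    for name in filenames:
        groups[re.sub(r"\d+", "", name)].append(name)
    return dict(groups)

print(group_files_by_digitless_name(
    ["train0.jsonl", "train1.jsonl", "animals.jsonl", "trees.jsonl"]
))
# {'train.jsonl': ['train0.jsonl', 'train1.jsonl'],
#  'animals.jsonl': ['animals.jsonl'], 'trees.jsonl': ['trees.jsonl']}
```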
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7066/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7065/comments
https://api.github.com/repos/huggingface/datasets/issues/7065/events
https://github.com/huggingface/datasets/issues/7065
2,424,734,953
I_kwDODunzps6QhoTp
7,065
Cannot get item after loading from disk and then converting to iterable.
{ "avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4", "events_url": "https://api.github.com/users/happyTonakai/events{/privacy}", "followers_url": "https://api.github.com/users/happyTonakai/followers", "following_url": "https://api.github.com/users/happyTonakai/following{/other_user}", "gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/happyTonakai", "id": 21305646, "login": "happyTonakai", "node_id": "MDQ6VXNlcjIxMzA1NjQ2", "organizations_url": "https://api.github.com/users/happyTonakai/orgs", "received_events_url": "https://api.github.com/users/happyTonakai/received_events", "repos_url": "https://api.github.com/users/happyTonakai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions", "type": "User", "url": "https://api.github.com/users/happyTonakai", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-23T09:37:56
2024-07-23T09:37:56
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug The dataset generated from local file works fine. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` But after saving it to disk and then loading it from disk, I cannot get data as expected. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ds.save_to_disk("./train") ds = datasets.load_from_disk("./train") ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` After a long time waiting, an error occurs: ``` Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s] Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data data = self._data_queue.get(timeout=timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get if not self._poll(timeout): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll return self._poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll r = wait([self], timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait ready = selector.select(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module> for batch in dataloader: File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__ data = self._next_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data idx, data = self._get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data success, data = self._try_get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly ``` It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable? ### Steps to reproduce the bug 1. Create a `Dataset` from local files with `from_dict` 2. Save it to disk with `save_to_disk` 3. Load it from disk with `load_from_disk` 4. Convert to iterable with `to_iterable_dataset` 5. Loop the dataset ### Expected behavior Get items faster than the original dataset generated from dict. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.23.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
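One way to narrow this down (a debugging sketch, not a fix, reusing `ds` from the snippet above): time the iterable directly without `DataLoader` workers, with and without the shuffle buffer. The shuffle buffer has to read `buffer_size` examples, decoding their FLAC files, before it can yield the first one, so a long wait before the first batch is expected with `buffer_size=10000`; a worker killed by SIGKILL often means it ran out of memory.

```python
import time

ids = ds.to_iterable_dataset(num_shards=128)

t0 = time.time()
next(iter(ids))
print(f"first example, no shuffle buffer: {time.time() - t0:.1f}s")

t0 = time.time()
next(iter(ids.shuffle(buffer_size=10000, seed=42)))
print(f"first example, buffer_size=10000: {time.time() - t0:.1f}s")
```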
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7065/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7063/comments
https://api.github.com/repos/huggingface/datasets/issues/7063/events
https://github.com/huggingface/datasets/issues/7063
2,424,488,648
I_kwDODunzps6QgsLI
7,063
Add `batch` method to `Dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lappemic", "id": 61876623, "login": "lappemic", "node_id": "MDQ6VXNlcjYxODc2NjIz", "organizations_url": "https://api.github.com/users/lappemic/orgs", "received_events_url": "https://api.github.com/users/lappemic/received_events", "repos_url": "https://api.github.com/users/lappemic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "type": "User", "url": "https://api.github.com/users/lappemic", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2024-07-23T07:36:59
2024-07-25T13:45:21
2024-07-25T13:45:21
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request

Add a `batch` method to the `Dataset` class, similar to the one recently implemented for `IterableDataset` in PR #7054.

### Motivation

Batched iteration speeds up data loading significantly (see e.g. #6279).

### Your contribution

I plan to open a PR to implement this.
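For reference, this is roughly what such a method boils down to with the existing API; a sketch using `map` with `batched=True`, where each output row holds one full batch as a list column:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

batched = ds.map(
    lambda batch: {k: [v] for k, v in batch.items()},
    batched=True,
    batch_size=4,
    drop_last_batch=False,
)
print(batched[0])  # {'x': [0, 1, 2, 3]}
```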
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7063/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7061/comments
https://api.github.com/repos/huggingface/datasets/issues/7061/events
https://github.com/huggingface/datasets/issues/7061
2,423,786,881
I_kwDODunzps6QeA2B
7,061
Custom Dataset | Still Raise Error while handling errors in _generate_examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4", "events_url": "https://api.github.com/users/hahmad2008/events{/privacy}", "followers_url": "https://api.github.com/users/hahmad2008/followers", "following_url": "https://api.github.com/users/hahmad2008/following{/other_user}", "gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hahmad2008", "id": 68266028, "login": "hahmad2008", "node_id": "MDQ6VXNlcjY4MjY2MDI4", "organizations_url": "https://api.github.com/users/hahmad2008/orgs", "received_events_url": "https://api.github.com/users/hahmad2008/received_events", "repos_url": "https://api.github.com/users/hahmad2008/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions", "type": "User", "url": "https://api.github.com/users/hahmad2008", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-22T21:18:12
2024-09-09T14:48:07
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I follow this [example](/static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Ferror-handling-in-iterabledataset%2F72827%2F3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution. ``` def _generate_examples(self, filepaths): errors=[] id_ = 0 for filepath in filepaths: try: with open(filepath, 'r') as f: for line in f: json_obj = json.loads(line) yield id_, json_obj id_ += 1 except Exception as exc: logger.error(f"error occur at filepath: {filepath}") errors.append(error) ``` seems the logger.error is printed but still exception is raised the the run is exit. ``` Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841 ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl Traceback (most recent call last): File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples json_obj = json.loads(line) File "myenv/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "myenv/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3) Generating train split: 0 examples [00:06, ? examples/s]> RemoteTraceback: """ Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize raise SchemaInferenceError("Please pass `features` or at least one example when writing data") datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: │ │ │ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. 
│ │ py:1377 in <listcomp> │ │ │ │ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │ │ 1375 │ │ │ │ │ break │ │ 1376 │ │ # we get the result in case there's an error to raise │ │ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │ │ 1378 │ │ │ │ ╭──────────────────────────────── locals ─────────────────────────────────╮ │ │ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │ │ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ ╰─────────────────────────────────────────────────────────────────────────╯ │ │ │ │ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │ │ in get │ │ │ │ 768 │ │ if self._success: │ │ 769 │ │ │ return self._value │ │ 770 │ │ else: │ │ ❱ 771 │ │ │ raise self._value │ │ 772 │ │ │ 773 │ def _set(self, i, obj): │ │ 774 │ │ self._success, self._value = obj │ │ │ │ ╭────────────────────────────── locals ──────────────────────────────╮ │ │ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ │ timeout = None │ │ │ ╰────────────────────────────────────────────────────────────────────╯ │ DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug same as above ### Expected behavior should handle error and continue reading remaining files ### Environment info python 3.9
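Two things are worth separating in the trace above. The `JSONDecodeError` is caught (the `logger.error` line is printed), but the job that hit the corrupt file apparently ends up writing no examples at all, so the writer fails with `SchemaInferenceError: Please pass features or at least one example when writing data`, which then surfaces as `DatasetGenerationError`. Two hedged suggestions: declare `features` in the builder's `_info()` so the writer never needs to infer a schema, and catch errors per line rather than per file so a single corrupt line does not discard the rest of a file. A sketch of the latter, keeping the same `json`/`logger` names as the original script:

```python
def _generate_examples(self, filepaths):
    id_ = 0
    for filepath in filepaths:
        try:
            f = open(filepath, "r", encoding="utf-8")
        except OSError:
            logger.error(f"could not open {filepath}, skipping it")
            continue
        with f:
            for line_number, line in enumerate(f):
                try:
                    json_obj = json.loads(line)
                except json.JSONDecodeError:
                    logger.error(f"skipping invalid JSON at {filepath}:{line_number}")
                    continue
                yield id_, json_obj
                id_ += 1
```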
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7061/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7059/comments
https://api.github.com/repos/huggingface/datasets/issues/7059/events
https://github.com/huggingface/datasets/issues/7059
2,422,827,892
I_kwDODunzps6QaWt0
7,059
None values are skipped when reading jsonl in subobjects
{ "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PonteIneptique", "id": 1929830, "login": "PonteIneptique", "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "type": "User", "url": "https://api.github.com/users/PonteIneptique", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-22T13:02:42
2024-07-22T13:02:53
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

I have been fighting against my machine since this morning only to find out this is some kind of a bug. When loading a dataset whose metadata lives in a `metadata.jsonl`, nullable values (`Optional[str]`) can be ignored by the parser, shifting things around.

For example, here are two versions of the same dataset:

[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)

### Steps to reproduce the bug

1. Load the `buggy.tar.gz` dataset
2. Print the baselines of `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print the baselines of `dts = load_dataset("./data")["train"][0]["baselines"]`

### Expected behavior

Both should have 4 baseline entries:

1. The buggy version should have `None` followed by three lists.
2. The non-buggy version should have four lists, the first one being an empty list.

Case 1 does not work while case 2 does, even though `None` is accepted in positions other than the first.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
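A self-contained check that may help separate pyarrow's JSON parsing from the `imagefolder` metadata handling (the `baselines` schema below is a placeholder, since the real one is only inside the attached archives): with an explicit `Features` spec, the nullable nested column is not left to type inference and the `None` is expected to survive.

```python
import json
from datasets import Features, Sequence, Value, load_dataset

rows = [
    {"file_name": "a.png", "baselines": None},
    {"file_name": "b.png", "baselines": [[0, 1], [2, 3]]},
]
with open("metadata_toy.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

features = Features({
    "file_name": Value("string"),
    "baselines": Sequence(Sequence(Value("int64"))),
})
ds = load_dataset("json", data_files="metadata_toy.jsonl", features=features, split="train")
print(ds[0]["baselines"], ds[1]["baselines"])  # expected: None [[0, 1], [2, 3]]
```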
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7059/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7058/comments
https://api.github.com/repos/huggingface/datasets/issues/7058/events
https://github.com/huggingface/datasets/issues/7058
2,422,560,355
I_kwDODunzps6QZVZj
7,058
New feature type: Document
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-22T10:49:20
2024-07-22T10:49:20
null
COLLABORATOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7058/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7055/comments
https://api.github.com/repos/huggingface/datasets/issues/7055/events
https://github.com/huggingface/datasets/issues/7055
2,421,708,891
I_kwDODunzps6QWFhb
7,055
WebDataset with different prefixes is unsupported
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Since `datasets` uses is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifyign in advance the name of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`", "Thanks. This currently doesn't work for WebDataset because there's no `BuilderConfig` with `features` and in turn `_info` is missing `features=self.config.features`. I'll prepare a PR to fix this.\r\n\r\nNote it may be useful to add the [expected format of `features`](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [`Builder Parameters`](https://huggingface.co/docs/datasets/repository_structure#builder-parameters).\r\n", "Oh good catch ! thanks\r\n\r\n> Note it may be useful to add the [expected format of features](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [Buil](https://huggingface.co/docs/datasets/repository_structure#builder-parameters)\r\n\r\nGood idea, let me open a PR", "#7060 ", "Actually I just tried with `datasets` on the `main` branch and having `features` defined in `dataset_info` worked for me\r\n\r\n```python\r\n>>> list(load_dataset(\"/Users/quentinlhoest/tmp\", streaming=True, split=\"train\"))\r\n[{'txt': 'hello there\\n', 'other': None}]\r\n```\r\nwhere `tmp` contains data.tar with \"hello there\\n\" in a text file and the README.md:\r\n```\r\n---\r\ndataset_info:\r\n features:\r\n - name: txt\r\n dtype: string\r\n - name: other\r\n dtype: string\r\n---\r\n\r\nThis is a dataset card\r\n```\r\n\r\nWhat error did you get when you tried to specify the columns in `dataset_info` ?", "If you review the changes in #7060 you'll note that `features` are not passed to `DatasetInfo`.\r\n\r\nIn your case the features are being extracted by [this code](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L72-L98).\r\n\r\nTry with the `Steps to reproduce the bug`. It's the same error mentioned in `Describe the bug` because `features` are not passed to `DatasetInfo`.\r\n\r\n`features` are [not used](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L365-L366) when the `BuilderConfig` has no `features` attribute. 
`WebDataset` uses the default [`BuilderConfig`](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L101-L124).\r\n\r\nThere is a [warning](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/load.py#L640-L648) that `features` are ignored.\r\n\r\nNote that as mentioned in `Describe the bug` this could also be resolved by removing the check [here](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) because Arrow actually handles this itself, Arrow sets any missing fields to `None`, at least in my case.", "Note for anyone else who encounters this issue, every dataset type except folder-based types supported features in the [documented](https://huggingface.co/docs/datasets/repository_structure#builder-parameters) manner; [Arrow](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/arrow/arrow.py#L15-L21), [csv](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/csv/csv.py#L25-L68), [generator](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/generator/generator.py#L8-L19), [json](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/json/json.py#L42-L52), [pandas](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/pandas/pandas.py#L14-L20), [parquet](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/parquet/parquet.py#L16-L24), [spark](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/spark/spark.py#L31-L37), [sql](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/sql/sql.py#L24-L35) and [text](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/text/text.py#L18-L27). `WebDataset` is different and requires [`dataset_info` which is vaguely documented](https://huggingface.co/docs/datasets/dataset_script#optional-generate-dataset-metadata) under dataset loading scripts.", "Thanks for explaining. I see the Dataset Viewer is still failing - I'll update `datasets` in the Viewer to fix this" ]
2024-07-22T01:14:19
2024-07-24T13:26:30
2024-07-23T13:28:46
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k) Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given. ``` The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types. ``` The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset. ``` >>> from datasets import load_dataset >>> path = "shards/*.tar" >>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True) Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s] >>> dataset IterableDataset({ features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'], n_shards: 152 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("bigdata-pw/fashion-150k") ``` ### Expected behavior Dataset loads without error ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.19 - `huggingface_hub` version: 0.23.4 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
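For anyone hitting the same check, a small stdlib sketch for listing which samples in a shard are missing which fields (the shard path is illustrative); as the report notes, once the check is relaxed Arrow simply fills the missing fields with `None`.

```python
import os
import tarfile
from collections import defaultdict

fields_per_key = defaultdict(set)
with tarfile.open("shards/00000.tar") as tar:
    for member in tar.getmembers():
        if not member.isfile():
            continue
        key, ext = os.path.splitext(member.name)
        fields_per_key[key].add(ext.lstrip("."))

all_fields = set().union(*fields_per_key.values())
for key, fields in sorted(fields_per_key.items()):
    missing = all_fields - fields
    if missing:
        print(key, "is missing", sorted(missing))
```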
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7055/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7053/comments
https://api.github.com/repos/huggingface/datasets/issues/7053/events
https://github.com/huggingface/datasets/issues/7053
2,416,423,791
I_kwDODunzps6QB7Nv
7,053
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
{ "avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4", "events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}", "followers_url": "https://api.github.com/users/MatthewYZhang/followers", "following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MatthewYZhang", "id": 48289218, "login": "MatthewYZhang", "node_id": "MDQ6VXNlcjQ4Mjg5MjE4", "organizations_url": "https://api.github.com/users/MatthewYZhang/orgs", "received_events_url": "https://api.github.com/users/MatthewYZhang/received_events", "repos_url": "https://api.github.com/users/MatthewYZhang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions", "type": "User", "url": "https://api.github.com/users/MatthewYZhang", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```", "Duplicate of:\r\n- #6100" ]
2024-07-18T13:42:35
2024-07-18T15:17:42
2024-07-18T15:16:18
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug

In `data_files.py`, line 332, `fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)` is called. When running on AWS, `fs.protocol` is a tuple like `('file', 'local')` rather than a string, so `isinstance(fs.protocol, str)` is `False` and `protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` raises `TypeError: can only concatenate tuple (not "str") to tuple`.

### Steps to reproduce the bug

1. Run on a cloud server like AWS
2. `import datasets.data_files as datafile`
3. `datafile.resolve_pattern('path/to/dataset', '.')`
4. `TypeError: can only concatenate tuple (not "str") to tuple` is raised

### Expected behavior

Should return the path of the dataset, with `fs.protocol` at the beginning.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5
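As noted in the comments, this was fixed in `datasets` 2.15.0, so upgrading is the real solution. For reference, the normalisation the fix boils down to looks roughly like this (a sketch, not the actual patch):

```python
from fsspec.core import get_fs_token_paths

fs, _, _ = get_fs_token_paths("path/to/dataset")

# fs.protocol can be a plain string ("s3") or a tuple of aliases ("file", "local"),
# so pick a single protocol name before building the prefix.
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
```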
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7053/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7051/comments
https://api.github.com/repos/huggingface/datasets/issues/7051/events
https://github.com/huggingface/datasets/issues/7051
2,409,353,929
I_kwDODunzps6Pm9LJ
7,051
How to set_epoch with interleave_datasets?
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)", "That would be helpful for this case! \r\n\r\nIf there was some way for from_generator to iterate over just a single shard of some dataset that would probably be more ideal. Maybe something like\r\n\r\n```\r\ndef from_dataset_generator(dataset, generator_fn, gen_kwargs):\r\n # calls generator_fn(dataset=dataset_shard, **gen_kwargs)\r\n```\r\n\r\nAnother transform I was trying to implement is an input bucketing transform. Essentially you need to iterate through a dataset and reorder the examples in them, which is not really possible with a `map()` call. But using `from_generator()` causes the final dataset to be a single shard and loses speed gains from multiple dataloader workers", "I see, there are some internal functions to get a single shard already but the public `.shard()` method hasn't been implemented yet for `IterableDataset` :/\r\n\r\n(see the use of `ex_iterable.shard_data_sources` in `IterableDataset._prepare_ex_iterable_for_iteration` for example)", "Would that be something planned on the roadmap for the near future, or do you suggest hacking through with internal APIs for now?", "Ok this turned out to be not too difficult. Are there any obvious issues with my implementation?\r\n\r\n```\r\nclass ShuffleEveryEpochIterable(iterable_dataset._BaseExamplesIterable):\r\n \"\"\"ExamplesIterable that reshuffles the dataset every epoch.\"\"\"\r\n\r\n def __init__(\r\n self,\r\n ex_iterable: iterable_dataset._BaseExamplesIterable,\r\n generator: np.random.Generator,\r\n ):\r\n \"\"\"Constructor.\"\"\"\r\n super().__init__()\r\n self.ex_iterable = ex_iterable\r\n self.generator = generator\r\n\r\n def _init_state_dict(self) -> dict:\r\n self._state_dict = {\r\n 'ex_iterable': self.ex_iterable._init_state_dict(),\r\n 'epoch': 0,\r\n }\r\n return self._state_dict\r\n\r\n @typing.override\r\n def __iter__(self):\r\n epoch = self._state_dict['epoch'] if self._state_dict else 0\r\n for i in itertools.count(epoch):\r\n # Create effective seed using i (subtract in order to avoir overflow in long_scalars)\r\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - i\r\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\r\n generator = np.random.default_rng(effective_seed)\r\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n if self._state_dict:\r\n self._state_dict['epoch'] = i\r\n self._state_dict['ex_iterable'] = self.ex_iterable._init_state_dict()\r\n it = iter(self.ex_iterable)\r\n yield from it\r\n\r\n @typing.override\r\n def shuffle_data_sources(self, generator):\r\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\n\r\n @typing.override\r\n def shard_data_sources(self, worker_id: int, num_workers: int):\r\n ex_iterable = self.ex_iterable.shard_data_sources(worker_id, num_workers)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=self.generator)\r\n\r\n @typing.override\r\n @property\r\n def n_shards(self) -> int:\r\n return self.ex_iterable.n_shards\r\n \r\ngenerator = np.random.default_rng(seed)\r\nshuffling = iterable_dataset.ShufflingConfig(generator=generator, _original_seed=seed)\r\nex_iterable = 
iterable_dataset.BufferShuffledExamplesIterable(\r\n dataset._ex_iterable, buffer_size=buffer_size, generator=generator\r\n)\r\nex_iterable = ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\ndataset = datasets.IterableDataset(\r\n ex_iterable=ex_iterable,\r\n info=dataset._info.copy(),\r\n split=dataset._split,\r\n formatting=dataset._formatting,\r\n shuffling=shuffling,\r\n distributed=copy.deepcopy(dataset._distributed),\r\n token_per_repo_id=dataset._token_per_repo_id,\r\n)\r\n```\r\n", "Nice ! This iterable is infinite though no ? How would `interleave_dataset` know when to stop ?\r\n\r\nMaybe the re-shuffling can be implemented directly in `RandomlyCyclingMultiSourcesExamplesIterable` (which is the iterable used by `interleave_dataset`) ?", "Infinite is fine for my usecases fortunately." ]
2024-07-15T18:24:52
2024-08-05T20:58:04
2024-08-05T20:58:04
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Let's say I have dataset A, which has 100k examples, and dataset B, which has 100m examples. I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch). Of course I want to interleave them as IterableDatasets in streaming mode so B doesn't have to get tokenized completely at the start. How could I achieve this? I was thinking of wrapping dataset A in some new IterableDataset with from_generator() and manually calling set_epoch before interleaving it, but I'm not sure how to keep the number of shards in that dataset... Something like ``` dataset_a = load_dataset(...) dataset_b = load_dataset(...) def epoch_shuffled_dataset(ds): # How to make this maintain the number of shards in ds?? for epoch in itertools.count(): ds.set_epoch(epoch) yield from iter(ds) shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a}) interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted') ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 2, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7051/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7049/comments
https://api.github.com/repos/huggingface/datasets/issues/7049/events
https://github.com/huggingface/datasets/issues/7049
2,408,514,366
I_kwDODunzps6PjwM-
7,049
Save nparray as list
{ "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sakurakdx", "id": 48399040, "login": "Sakurakdx", "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "type": "User", "url": "https://api.github.com/users/Sakurakdx", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n", "> Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n\r\nUnder the hood the data is saved in Arrow format using the same precision as your numpy arrays?\r\nBy default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision", "(you can fix your second issue by fixing the typo `colums` -> `columns`)", "> (you can fix your second issue by fixing the typo `colums` -> `columns`)\r\n\r\nYou are right, I was careless. Thank you.", "> > Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n> \r\n> Under the hood the data is saved in Arrow format using the same precision as your numpy arrays? By default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision\r\n\r\nYes, after testing I found that there was no loss of precision. Thanks again for your answer." ]
2024-07-15T11:36:11
2024-07-18T11:33:34
2024-07-18T11:33:34
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When I use the `map` function to convert images into features, datasets saves the nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision? ### Steps to reproduce the bug The map function: ```python def convert_image_to_features(inst, processor, image_dir): image_file = inst["image_url"] file = image_file.split("/")[-1] image_path = os.path.join(image_dir, file) image = Image.open(image_path) image = image.convert("RGBA") inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"] return inst ``` The main function: ```python map_fun = partial( convert_image_to_features, processor=processor, image_dir=image_dir ) ds = ds.map(map_fun, batched=False, num_proc=20) print(type(ds[0]["pixel_values"])) ``` ### Expected behavior (type <list>) ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.23.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
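The resolution in the comments can be illustrated with a minimal sketch (not taken from the issue itself; the column name and array shape are assumptions): Arrow keeps the original numeric precision, and `set_format` only changes how the stored values are returned.

```python
import numpy as np
from datasets import Dataset

# Minimal sketch: a float32 array round-trips through Arrow without precision loss.
ds = Dataset.from_dict({"pixel_values": [np.random.rand(1, 3, 4).astype(np.float32)]})

print(type(ds[0]["pixel_values"]))  # <class 'list'> with the default python formatting

ds.set_format(type="np", columns=["pixel_values"])
print(type(ds[0]["pixel_values"]))  # <class 'numpy.ndarray'>
print(ds[0]["pixel_values"].dtype)  # float32
```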
{ "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sakurakdx", "id": 48399040, "login": "Sakurakdx", "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "type": "User", "url": "https://api.github.com/users/Sakurakdx", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7049/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7048/comments
https://api.github.com/repos/huggingface/datasets/issues/7048/events
https://github.com/huggingface/datasets/issues/7048
2,408,487,547
I_kwDODunzps6Pjpp7
7,048
ImportError: numpy.core.multiarray when using `filter`
{ "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kamilakesbi", "id": 45195979, "login": "kamilakesbi", "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "type": "User", "url": "https://api.github.com/users/kamilakesbi", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Could you please check your `numpy` version?", "I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ", "We recently added support for numpy 2.0, but it is not released yet.", "Ok I see, thanks! I think we can close this issue for now as switching back to version 1.26.0 solves the problem :) " ]
2024-07-15T11:21:04
2024-07-16T10:11:25
2024-07-16T10:11:25
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I can't apply the filter method on my dataset. ### Steps to reproduce the bug The following snippet generates a bug: ```python from datasets import load_dataset ami = load_dataset('kamilakesbi/ami', 'ihm') ami['train'].filter( lambda example: example["file_name"] == 'EN2001a' ) ``` I get the following error: `ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).` ### Expected behavior It should work properly! ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kamilakesbi", "id": 45195979, "login": "kamilakesbi", "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "type": "User", "url": "https://api.github.com/users/kamilakesbi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7048/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7047/comments
https://api.github.com/repos/huggingface/datasets/issues/7047/events
https://github.com/huggingface/datasets/issues/7047
2,406,495,084
I_kwDODunzps6PcDNs
7,047
Save Dataset as Sharded Parquet
{ "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tom-p-reichel", "id": 43631024, "login": "tom-p-reichel", "node_id": "MDQ6VXNlcjQzNjMxMDI0", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "type": "User", "url": "https://api.github.com/users/tom-p-reichel", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```python\r\nimport pyarrow.parquet as pq\r\nimport pyarrow as pa\r\n\r\nr = pq.ParquetReader()\r\n\r\nr.open(\"./outrageous-file.parquet\",thrift_string_size_limit=2**31-1, thrift_container_size_limit=2**31-1)\r\n\r\nfrom more_itertools import chunked\r\nimport tqdm\r\n\r\nfor i,chunk in tqdm.tqdm(enumerate(chunked(range(r.num_row_groups),10000))):\r\n w = pq.ParquetWriter(f\"./chunks.parquet/chunk{i}.parquet\",schema=r.schema_arrow)\r\n for idx in chunk:\r\n w.write_table(r.read_row_group(idx))\r\n w.close()\r\n```", "You can also use `.shard()` and call `to_parquet()` on each shard in the meantime:\r\n\r\n```python\r\nnum_shards = 128\r\noutput_path_template = \"output_dir/{index:05d}.parquet\"\r\nfor index in range(num_shards):\r\n shard = ds.shard(index=index, num_shards=num_shards, contiguous=True)\r\n shard.to_parquet(output_path_template.format(index=index))\r\n```" ]
2024-07-12T23:47:51
2024-07-17T12:07:08
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request `to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically. ### Motivation This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason this happens is that it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet. ### Your contribution I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158 to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than a file handle.
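A hedged sketch of the `write_dataset`-based behaviour proposed above (the size thresholds and file-name template are assumptions, and `table` is assumed to be the dataset's underlying `pyarrow.Table`):

```python
import pyarrow.dataset as pds

# Sketch only: write_dataset rolls over to a new file once max_rows_per_file
# is reached, producing part-0.parquet, part-1.parquet, ... in output_dir.
pds.write_dataset(
    table,                               # assumed: a pyarrow.Table with the dataset's rows
    "output_dir",
    format="parquet",
    basename_template="part-{i}.parquet",
    max_rows_per_file=1_000_000,         # assumed shard size
    max_rows_per_group=100_000,
)
```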
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7047/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7041/comments
https://api.github.com/repos/huggingface/datasets/issues/7041/events
https://github.com/huggingface/datasets/issues/7041
2,404,576,038
I_kwDODunzps6PUusm
7,041
`sort` after `filter` unreasonably slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4", "events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}", "followers_url": "https://api.github.com/users/Tobin-rgb/followers", "following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tobin-rgb", "id": 56711045, "login": "Tobin-rgb", "node_id": "MDQ6VXNlcjU2NzExMDQ1", "organizations_url": "https://api.github.com/users/Tobin-rgb/orgs", "received_events_url": "https://api.github.com/users/Tobin-rgb/received_events", "repos_url": "https://api.github.com/users/Tobin-rgb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions", "type": "User", "url": "https://api.github.com/users/Tobin-rgb", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping." ]
2024-07-12T03:29:27
2024-07-22T13:55:17
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug As the title says ... ### Steps to reproduce the bug `sort` on its own seems to be normal. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) print("start sort") ds = ds.sort("k") print("finish sort") ``` but `sort` after `filter` is extremely slow. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) ds = ds.filter(lambda x:x > 100, input_columns="k") print("start sort") ds = ds.sort("k") print("finish sort") ``` ### Expected behavior Is this a bug, or is it a misuse of the `sort` function? ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
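Following the suggestion in the comments, a minimal sketch of the workaround (using the same toy data as the report): materialize the filtered rows with `flatten_indices()` so `sort` no longer has to gather rows through an indices mapping.

```python
import random
from datasets import Dataset

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")

# Drop the indices mapping created by filter, then sort the contiguous table.
ds = ds.flatten_indices()
ds = ds.sort("k")
```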
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7041/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7040/comments
https://api.github.com/repos/huggingface/datasets/issues/7040/events
https://github.com/huggingface/datasets/issues/7040
2,402,918,335
I_kwDODunzps6POZ-_
7,040
load `streaming=True` dataset with downloaded cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4", "events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}", "followers_url": "https://api.github.com/users/wanghaoyucn/followers", "following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}", "gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wanghaoyucn", "id": 39429965, "login": "wanghaoyucn", "node_id": "MDQ6VXNlcjM5NDI5OTY1", "organizations_url": "https://api.github.com/users/wanghaoyucn/orgs", "received_events_url": "https://api.github.com/users/wanghaoyucn/received_events", "repos_url": "https://api.github.com/users/wanghaoyucn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions", "type": "User", "url": "https://api.github.com/users/wanghaoyucn", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "When you pass `streaming=True`, the cache is ignored. The remote data URL is used instead and the data is streamed from the remote server.", "Thanks for your reply! So is there any solution to get my expected behavior besides clone the whole repo ? Or could I adjust my script to load the downloaded arrow files and generate the dataset streamingly?" ]
2024-07-11T11:14:13
2024-07-11T14:11:56
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large, and the processed dataset cache takes even more disk space. So we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into an hdf5 file descriptor. So we use `fsspec` as an interface, like below: ```python def _generate_examples(self, filepath, split): for file in filepath: with fsspec.open(file, "rb") as fs: with h5py.File(fs, "r") as fp: # for event_id in sorted(list(fp.keys())): event_ids = list(fp.keys()) ...... ``` ### Steps to reproduce the bug The `fsspec` approach works, but it takes 10+ min to print the first 10 examples, which is even longer than the downloading time. I'm not sure if it just caches the whole hdf5 file before generating the examples. ### Expected behavior So does the following make sense so far? 1. download the files ```python dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True) ``` 2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`) ```python dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True) ``` I made some tests, but the code above can't get the expected result. I'm not sure if this is supported. I also found issue #6327. It seemed similar to mine, but I couldn't find a solution. ### Environment info - `datasets` = 2.18.0 - `h5py` = 3.10.0 - `fsspec` = 2023.10.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7040/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7040/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7037/comments
https://api.github.com/repos/huggingface/datasets/issues/7037/events
https://github.com/huggingface/datasets/issues/7037
2,400,192,419
I_kwDODunzps6PEAej
7,037
A bug of Dataset.to_json() function
{ "avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4", "events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}", "followers_url": "https://api.github.com/users/LinglingGreat/followers", "following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}", "gists_url": "https://api.github.com/users/LinglingGreat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LinglingGreat", "id": 26499566, "login": "LinglingGreat", "node_id": "MDQ6VXNlcjI2NDk5NTY2", "organizations_url": "https://api.github.com/users/LinglingGreat/orgs", "received_events_url": "https://api.github.com/users/LinglingGreat/received_events", "repos_url": "https://api.github.com/users/LinglingGreat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LinglingGreat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LinglingGreat/subscriptions", "type": "User", "url": "https://api.github.com/users/LinglingGreat", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug.", "@albertvillanova I would like to take a shot at this if you aren't working on it currently. Let me know!" ]
2024-07-10T09:11:22
2024-09-22T13:16:07
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug When using the Dataset.to_json() function, an unexpected error occurs if the parameter lines=False is set. The stored data should be a single list, but it actually turns into multiple lists, which causes an error when reading the data back. The reason is that to_json() writes to the file in several segments based on the batch size. This is not a problem when lines=True, but it is incorrect when lines=False, because writing in several passes produces multiple lists (when len(dataset) > batch_size). ### Steps to reproduce the bug Try this code: ```python from datasets import load_dataset import json train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"] output_path = "./harmless-base_hftojs.json" print(len(train_dataset)) train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2) with open(output_path, encoding="utf-8") as f: data = json.loads(f.read()) ``` It raises the error: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709) Extra square brackets have appeared here: <img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc"> ### Expected behavior The code runs normally. ### Environment info datasets=2.20.0
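Until the bug is fixed, a possible workaround (a sketch, not the library's fix) is to force a single write so only one JSON array is emitted; the variables below are the ones from the report.

```python
# 1) Write everything in a single batch so only one JSON array is produced.
train_dataset.to_json(
    output_path, lines=False, force_ascii=False, indent=2, batch_size=len(train_dataset)
)

# 2) Or let pandas serialize the whole table as one array of records.
train_dataset.to_pandas().to_json(output_path, orient="records", force_ascii=False, indent=2)
```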
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7037/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7035/comments
https://api.github.com/repos/huggingface/datasets/issues/7035/events
https://github.com/huggingface/datasets/issues/7035
2,400,021,225
I_kwDODunzps6PDWrp
7,035
Docs are not generated when a parameter defaults to a NamedSplit value
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-07-10T07:51:24
2024-07-26T07:51:53
2024-07-26T07:51:53
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like: ```python def call_function(split=Split.TRAIN): ... ``` The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'> See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015 ``` Building the MDX files: 97%|█████████▋| 58/60 [00:00<00:00, 91.94it/s] Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files content, new_anchors, source_files, errors = resolve_autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc doc = autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc method_doc, check = document_object( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object signature = format_signature(obj) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature if param.default != inspect._empty: File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__ return not self.__eq__(other) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__ raise ValueError(f"Equality not supported between split {self} and {other}") ValueError: Equality not supported between split train and <class 'inspect._empty'> The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command build_doc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc anchors_mapping, source_files_mapping = build_mdx_files( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Equality not supported between split train and <class 'inspect._empty'> ```
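One way the failure could be avoided (a sketch of a possible fix, not necessarily the patch that was merged): let `NamedSplit.__eq__` defer with `NotImplemented` for unrelated types instead of raising, so doc-builder's `param.default != inspect._empty` check works.

```python
# Hypothetical, simplified sketch — inside class NamedSplit:
def __eq__(self, other):
    if isinstance(other, NamedSplit):
        return self._name == other._name
    if isinstance(other, str):
        return self._name == other
    # Unrelated types (e.g. inspect._empty): defer instead of raising.
    return NotImplemented
```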
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7035/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7035/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7033/comments
https://api.github.com/repos/huggingface/datasets/issues/7033/events
https://github.com/huggingface/datasets/issues/7033
2,397,419,768
I_kwDODunzps6O5bj4
7,033
`from_generator` does not allow to specify the split name
{ "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pminervini", "id": 227357, "login": "pminervini", "node_id": "MDQ6VXNlcjIyNzM1Nw==", "organizations_url": "https://api.github.com/users/pminervini/orgs", "received_events_url": "https://api.github.com/users/pminervini/received_events", "repos_url": "https://api.github.com/users/pminervini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "type": "User", "url": "https://api.github.com/users/pminervini", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.", "Booom! thank you guys :)" ]
2024-07-09T07:47:58
2024-07-26T12:56:16
2024-07-26T09:31:56
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:` It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py ### Steps to reproduce the bug ``` In [1]: from datasets import Dataset In [2]: def gen(): ...: yield {"pokemon": "bulbasaur", "type": "grass"} ...: In [3]: ds = Dataset.from_generator(gen) Generating train split: 1 examples [00:00, 133.89 examples/s] ``` ### Expected behavior It should be possible to specify any split name ### Environment info - `datasets` version: 2.19.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - `huggingface_hub` version: 0.23.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7033/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7031/comments
https://api.github.com/repos/huggingface/datasets/issues/7031/events
https://github.com/huggingface/datasets/issues/7031
2,395,401,692
I_kwDODunzps6Oxu3c
7,031
CI quality is broken: use ruff check instead
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-07-08T11:42:24
2024-07-08T11:47:29
2024-07-08T11:47:29
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027 ``` error: `ruff <path>` has been removed. Use `ruff check <path>` instead. ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7031/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7031/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7030/comments
https://api.github.com/repos/huggingface/datasets/issues/7030/events
https://github.com/huggingface/datasets/issues/7030
2,393,411,631
I_kwDODunzps6OqJAv
7,030
Add option to disable progress bar when reading a dataset ("Loading dataset from disk")
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "You can disable progress bars for all of `datasets` with `disable_progress_bars`. [Link](https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars)\r\n\r\nSo you could do something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, enable_progress_bars, disable_progress_bars\r\n\r\ndisable_progress_bars()\r\n# Your code\r\nload_from_disk(....)\r\n\r\nenable_progress_bars()\r\n```\r\n", "Thank you! Closing the issue." ]
2024-07-06T05:43:37
2024-07-13T14:35:59
2024-07-13T14:35:59
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16. ### Motivation I am reading a lot of datasets, which creates lots of logs. <img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a"> ### Your contribution Seems like an easy fix to make. I can create a PR if necessary.
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7030/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7030/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7029/comments
https://api.github.com/repos/huggingface/datasets/issues/7029/events
https://github.com/huggingface/datasets/issues/7029
2,391,366,696
I_kwDODunzps6OiVwo
7,029
load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error
{ "avatar_url": "https://avatars.githubusercontent.com/u/171606538?v=4", "events_url": "https://api.github.com/users/sugam-nexusflow/events{/privacy}", "followers_url": "https://api.github.com/users/sugam-nexusflow/followers", "following_url": "https://api.github.com/users/sugam-nexusflow/following{/other_user}", "gists_url": "https://api.github.com/users/sugam-nexusflow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sugam-nexusflow", "id": 171606538, "login": "sugam-nexusflow", "node_id": "U_kgDOCjqCCg", "organizations_url": "https://api.github.com/users/sugam-nexusflow/orgs", "received_events_url": "https://api.github.com/users/sugam-nexusflow/received_events", "repos_url": "https://api.github.com/users/sugam-nexusflow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sugam-nexusflow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sugam-nexusflow/subscriptions", "type": "User", "url": "https://api.github.com/users/sugam-nexusflow", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "hi ! can you share the full stack trace ? this should help locate what files is not written in the cache_dir" ]
2024-07-04T19:15:16
2024-07-17T12:44:03
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF env variables to point to the /tmp dir, but the issue still persists. I can confirm that I can write to the /tmp directory. ### Steps to reproduce the bug ```python d = load_dataset( path=hugging_face_link, split=split, token=token, cache_dir="/tmp/hugging_face_cache", ) ``` ### Expected behavior Everything written to the file system as part of the load_dataset function should be in the /tmp directory. ### Environment info datasets version: 2.16.1 Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26 Python version: 3.11.9 huggingface_hub version: 0.19.4 PyArrow version: 16.1.0 Pandas version: 2.2.2 fsspec version: 2023.10.0
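Since the comment asks for more detail, one thing worth trying (a sketch under the assumption that /tmp is the only writable path on Lambda) is to point every Hugging Face cache location at /tmp before importing datasets, as some cache paths are resolved at import time; `hugging_face_link`, `split` and `token` are the placeholders from the report.

```python
import os

# Redirect all HF caches to the writable /tmp before datasets is imported.
os.environ["HF_HOME"] = "/tmp/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf_datasets_cache"
os.environ["HF_HUB_CACHE"] = "/tmp/hf_hub_cache"

from datasets import load_dataset

d = load_dataset(
    path=hugging_face_link,  # placeholder from the report
    split=split,
    token=token,
    cache_dir="/tmp/hugging_face_cache",
)
```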
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7029/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7029/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7024/comments
https://api.github.com/repos/huggingface/datasets/issues/7024/events
https://github.com/huggingface/datasets/issues/7024
2,390,141,626
I_kwDODunzps6Odqq6
7,024
Streaming dataset not returning data
{ "avatar_url": "https://avatars.githubusercontent.com/u/91670254?v=4", "events_url": "https://api.github.com/users/johnwee1/events{/privacy}", "followers_url": "https://api.github.com/users/johnwee1/followers", "following_url": "https://api.github.com/users/johnwee1/following{/other_user}", "gists_url": "https://api.github.com/users/johnwee1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnwee1", "id": 91670254, "login": "johnwee1", "node_id": "U_kgDOBXbG7g", "organizations_url": "https://api.github.com/users/johnwee1/orgs", "received_events_url": "https://api.github.com/users/johnwee1/received_events", "repos_url": "https://api.github.com/users/johnwee1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnwee1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnwee1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnwee1", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-07-04T07:21:47
2024-07-04T07:21:47
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I'm posting here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly. I'm following the guide at https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset. However, I'm doing some data preprocessing steps (filtering out entries), and when I try to swap out the dataset for mine, it fails to train. However, I eventually fixed this by simply setting `streaming=False` in `load_dataset`. Could this be some sort of network / firewall issue I'm facing? ### Steps to reproduce the bug I made a post with a more detailed description of how I reproduced this problem before I found my workaround: /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fproblem-with-custom-iterator-of-streaming-dataset-not-returning-anything%2F94551 Here is the problematic dataset snippet, which works when streaming=False (and with the buffer_size keyword removed from shuffle) ``` commitpackft = load_dataset( "chargoddard/commitpack-ft-instruct", split="train", streaming=True ).filter(lambda example: example["language"] == "Python") def form_template(example): """Forms a template for each example following the alpaca format for CommitPack""" example["content"] = ( "### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"] ) return example dataset = commitpackft.map( form_template, remove_columns=["id", "language", "license", "instruction", "input", "output"], ).shuffle( seed=42, buffer_size=10000 ) # remove everything since it's all inside "content" now validation_data = dataset.take(4000) train_data = dataset.skip(4000) ``` The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation. ### Expected behavior The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / getting stuck in a loop somewhere. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.11.7 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7024/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7024/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7022/comments
https://api.github.com/repos/huggingface/datasets/issues/7022/events
https://github.com/huggingface/datasets/issues/7022
2,388,064,650
I_kwDODunzps6OVvmK
7,022
There is dead code after we require pyarrow >= 15.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-07-03T08:52:57
2024-07-03T09:17:36
2024-07-03T09:17:36
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
There are code lines specific to pyarrow versions < 15.0.0. However, we have required pyarrow >= 15.0.0 since the merge of PR: - #6892 Those code lines are now dead code and should be removed.
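For illustration, a hypothetical example of the kind of version-gated branch that becomes dead code once pyarrow >= 15.0.0 is the minimum requirement (the variable names and both branches are assumptions, not code quoted from the repository):

```python
import pyarrow as pa
from packaging import version

table = pa.table({"x": [1, 2, 3]})

# Hypothetical pattern: with pyarrow >= 15.0.0 required, the "else" branch
# can never execute and should simply be deleted.
if version.parse(pa.__version__) >= version.parse("15.0.0"):
    combined = table.combine_chunks()
else:
    combined = pa.Table.from_batches(table.to_batches())  # legacy fallback, now dead
print(combined.num_rows)
```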
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7022/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7020/comments
https://api.github.com/repos/huggingface/datasets/issues/7020/events
https://github.com/huggingface/datasets/issues/7020
2,387,940,990
I_kwDODunzps6OVRZ-
7,020
Casting list array to fixed size list raises error
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-07-03T07:54:49
2024-07-03T08:41:56
2024-07-03T08:41:56
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
When trying to cast a list array to fixed size list, an AttributeError is raised: > AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' Steps to reproduce the bug: ```python import pyarrow as pa from datasets.table import array_cast arr = pa.array([[0, 1]]) array_cast(arr, pa.list_(pa.int64(), 2)) ``` Stack trace: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-6cb90a1d8216> in <module> 3 4 arr = pa.array([[0, 1]]) ----> 5 array_cast(arr, pa.list_(pa.int64(), 2)) ~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1803 else: -> 1804 return func(array, *args, **kwargs) 1805 1806 return wrapper ~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1920 else: 1921 array_values = array.values[ -> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length 1923 ] 1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' ```
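A short sketch of the apparent fix, based only on the attributes visible in the traceback: `FixedSizeListType` exposes `list_size` rather than `length`, so the slicing step can be written as below (illustrative, not the actual patch):

```python
import pyarrow as pa

arr = pa.array([[0, 1]])
target = pa.list_(pa.int64(), 2)  # a FixedSizeListType

# FixedSizeListType has no `.length`; the per-list element count is `.list_size`.
values = arr.values[arr.offset * target.list_size : (arr.offset + len(arr)) * target.list_size]
fixed = pa.FixedSizeListArray.from_arrays(values.cast(target.value_type), target.list_size)
print(fixed)
```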
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7020/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7020/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7018/comments
https://api.github.com/repos/huggingface/datasets/issues/7018/events
https://github.com/huggingface/datasets/issues/7018
2,383,700,286
I_kwDODunzps6OFGE-
7,018
`load_dataset` fails to load dataset saved by `save_to_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/2307997?v=4", "events_url": "https://api.github.com/users/sliedes/events{/privacy}", "followers_url": "https://api.github.com/users/sliedes/followers", "following_url": "https://api.github.com/users/sliedes/following{/other_user}", "gists_url": "https://api.github.com/users/sliedes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sliedes", "id": 2307997, "login": "sliedes", "node_id": "MDQ6VXNlcjIzMDc5OTc=", "organizations_url": "https://api.github.com/users/sliedes/orgs", "received_events_url": "https://api.github.com/users/sliedes/received_events", "repos_url": "https://api.github.com/users/sliedes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sliedes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sliedes/subscriptions", "type": "User", "url": "https://api.github.com/users/sliedes", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "In my case the error was:\r\n```\r\nValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.\r\n```\r\nDid you try `load_from_disk`?", "More generally, any reason there is no API consistency between save_to_disk and push_to_hub ? \r\n\r\nWould be nice to be able to save_to_disk and then upload manually to the hub and load_dataset (which works in some situations but not all)...", "I have the exact same problem !", "`load_from_disk` managed to load the dataset, but the bug with `load_dataset` needs to be fixed. " ]
2024-07-01T12:19:19
2024-12-03T11:26:17
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug This code fails to load the dataset it just saved: ```python from datasets import load_dataset from transformers import AutoTokenizer MODEL = "google-bert/bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(MODEL) dataset = load_dataset("yelp_review_full") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets.save_to_disk("dataset") tokenized_datasets = load_dataset("dataset/") # raises ``` It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`. I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON: ```shell $ ls -l dataset/test -rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow -rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json -rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json ``` ### Steps to reproduce the bug Execute the code above. ### Expected behavior The dataset is loaded successfully. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
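A minimal sketch of the workaround pointed out in the comments: a dataset written with `save_to_disk` is meant to be read back with `load_from_disk`, which understands the Arrow-plus-JSON layout that trips up the format inference in `load_dataset` (the path matches the example above):

```python
from datasets import load_from_disk

# save_to_disk() writes Arrow shards plus dataset_info.json / state.json;
# load_from_disk() knows this layout, so no file-format inference is needed.
tokenized_datasets = load_from_disk("dataset")
print(tokenized_datasets)
```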
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7018/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7018/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7016/comments
https://api.github.com/repos/huggingface/datasets/issues/7016/events
https://github.com/huggingface/datasets/issues/7016
2,383,262,608
I_kwDODunzps6ODbOQ
7,016
`drop_duplicates` method
{ "avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4", "events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}", "followers_url": "https://api.github.com/users/MohamedAliRashad/followers", "following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}", "gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MohamedAliRashad", "id": 26205298, "login": "MohamedAliRashad", "node_id": "MDQ6VXNlcjI2MjA1Mjk4", "organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs", "received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events", "repos_url": "https://api.github.com/users/MohamedAliRashad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions", "type": "User", "url": "https://api.github.com/users/MohamedAliRashad", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "There is an open issue #2514 about this which also proposes solutions." ]
2024-07-01T09:01:06
2024-07-20T06:51:58
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request A `drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
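Until such a method exists, a minimal sketch of deduplication with the current API, assuming a single key column named "text" and single-process filtering (the helper and column names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c"], "label": [0, 1, 0, 1]})

seen = set()

def first_occurrence(example):
    # Keep only the first row for each distinct value of the key column.
    # Relies on single-process filtering (the default num_proc=1).
    key = example["text"]
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(first_occurrence)
print(deduped["text"])  # ['a', 'b', 'c']
```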
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7016/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7013/comments
https://api.github.com/repos/huggingface/datasets/issues/7013/events
https://github.com/huggingface/datasets/issues/7013
2,382,976,738
I_kwDODunzps6OCVbi
7,013
CI is broken for faiss tests on Windows: node down: Not properly terminated
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-07-01T06:40:03
2024-07-01T07:10:28
2024-07-01T07:10:28
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached. See: https://github.com/huggingface/datasets/actions/runs/9712659783 ``` test (integration, windows-latest, deps-minimum) The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes. test (integration, windows-latest, deps-latest) The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes. ``` ``` ____________________________ tests/test_search.py _____________________________ [gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ____________________________ tests/test_search.py _____________________________ [gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ``` ``` tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw0] node down: Not properly terminated [gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw0 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw1] node down: Not properly terminated [gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw1 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw2] node down: Not properly terminated [gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw2 ```
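For local reproduction, a minimal sketch of the operation the crashing test exercises (`add_faiss_index` on a small embedding column); the toy data is made up and `faiss-cpu` must be installed:

```python
import numpy as np
from datasets import Dataset

# Tiny dataset with an embedding column, mirroring what test_add_faiss_index does.
ds = Dataset.from_dict({"vectors": [np.ones(5, dtype=np.float32) * i for i in range(30)]})
ds.add_faiss_index(column="vectors")

scores, examples = ds.get_nearest_examples("vectors", np.ones(5, dtype=np.float32) * 3, k=2)
print(scores, examples["vectors"])
```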
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7013/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7013/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7010/comments
https://api.github.com/repos/huggingface/datasets/issues/7010/events
https://github.com/huggingface/datasets/issues/7010
2,379,777,480
I_kwDODunzps6N2IXI
7,010
Re-enable raising error from huggingface-hub FutureWarning in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-06-28T07:23:40
2024-06-28T12:19:30
2024-06-28T12:19:29
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR: - #6876 Note that this can only be done once transformers releases the fix: - https://github.com/huggingface/transformers/pull/31007
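For illustration, one way such a setting can be expressed in plain Python with the `warnings` module; the exact mechanism used in the CI configuration is not shown in this issue, so treat this as an assumption:

```python
import warnings

# Escalate FutureWarning raised from huggingface_hub modules into errors,
# which is what re-enabling the CI setting amounts to.
warnings.filterwarnings("error", category=FutureWarning, module="huggingface_hub.*")
```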
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7010/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7010/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7008/comments
https://api.github.com/repos/huggingface/datasets/issues/7008/events
https://github.com/huggingface/datasets/issues/7008
2,379,591,141
I_kwDODunzps6N1a3l
7,008
Support ruff 0.5.0 in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-06-28T05:11:26
2024-06-28T07:11:18
2024-06-28T07:11:18
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
Support ruff 0.5.0 in CI. Also revert: - #7007
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7008/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7008/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7006/comments
https://api.github.com/repos/huggingface/datasets/issues/7006/events
https://github.com/huggingface/datasets/issues/7006
2,379,581,543
I_kwDODunzps6N1Yhn
7,006
CI is broken after ruff-0.5.0: E721
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-06-28T05:03:28
2024-06-28T05:25:18
2024-06-28T05:25:18
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
After the ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to the E721 rule. See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983 > src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
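A small sketch of what the E721 rule flags and the usual rewrite (illustrative; the flagged line in features.py is not reproduced here):

```python
# Flagged by E721: comparing types with ==
def is_string_bad(value):
    return type(value) == str

# Accepted: identity check, or isinstance() when subclasses are fine
def is_string_ok(value):
    return type(value) is str  # or: isinstance(value, str)

print(is_string_bad("a"), is_string_ok("a"))
```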
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7006/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7006/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7005/comments
https://api.github.com/repos/huggingface/datasets/issues/7005/events
https://github.com/huggingface/datasets/issues/7005
2,378,424,349
I_kwDODunzps6Nw-Ad
7,005
EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4", "events_url": "https://api.github.com/users/Aki1991/events{/privacy}", "followers_url": "https://api.github.com/users/Aki1991/followers", "following_url": "https://api.github.com/users/Aki1991/following{/other_user}", "gists_url": "https://api.github.com/users/Aki1991/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aki1991", "id": 117731544, "login": "Aki1991", "node_id": "U_kgDOBwRw2A", "organizations_url": "https://api.github.com/users/Aki1991/orgs", "received_events_url": "https://api.github.com/users/Aki1991/received_events", "repos_url": "https://api.github.com/users/Aki1991/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aki1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aki1991/subscriptions", "type": "User", "url": "https://api.github.com/users/Aki1991", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `data_dir=` is for directories, can you try using `data_files=` instead ?", "If you are trying to load your image dataset from a local folder, you should replace \"data_dir=path/to/jsonl/metadata.jsonl\" with the real folder path in your computer.\r\n\r\nhttps://huggingface.co/docs/datasets/en/image_load#imagefolder", "Ah yes. My bad. I was giving file name. I should have given the folder directory as the path. That solved my issue. Thank you @albertvillanova and @lhoestq. " ]
2024-06-27T15:08:26
2024-06-28T09:56:19
2024-06-28T09:56:19
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug while trying to load custom dataset from jsonl file, I get the error: "metadata.jsonl doesn't contain any data files" ### Steps to reproduce the bug This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all images mentioned in that json(l) file. Through below mentioned command I am trying to load_dataset so that I can upload it as mentioned here on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub). ```` from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl") ```` error: ```` EmptyDatasetError Traceback (most recent call last) Cell In[18], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("imagefolder", 4 data_dir="path/to/jsonl/file/metadata.jsonl") 5 dataset[0]["objects"] File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2589 verification_mode = VerificationMode( 2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2591 ) 2593 # Create a dataset builder -> 2594 builder_instance = load_dataset_builder( 2595 path=path, 2596 name=name, 2597 data_dir=data_dir, 2598 data_files=data_files, 2599 cache_dir=cache_dir, 2600 features=features, 2601 download_config=download_config, 2602 download_mode=download_mode, 2603 revision=revision, 2604 token=token, 2605 storage_options=storage_options, 2606 trust_remote_code=trust_remote_code, 2607 _require_default_config_name=name is None, 2608 **config_kwargs, 2609 ) 2611 # Return iterable dataset in case of streaming 2612 if streaming: File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2264 download_config = download_config.copy() if download_config else DownloadConfig() 2265 download_config.storage_options.update(storage_options) -> 2266 dataset_module = dataset_module_factory( 2267 path, 2268 revision=revision, 2269 download_config=download_config, 2270 download_mode=download_mode, 2271 data_dir=data_dir, 2272 data_files=data_files, 2273 cache_dir=cache_dir, 2274 trust_remote_code=trust_remote_code, 2275 _require_default_config_name=_require_default_config_name, 2276 _require_custom_configs=bool(config_kwargs), 2277 ) 2278 # Get dataset builder class from the processing script 2279 builder_kwargs = dataset_module.builder_kwargs File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1782 # We have several ways to get a dataset builder: 1783 # 1784 # - if path is the name of a packaged dataset module (...) 
1796 1797 # Try packaged 1798 if path in _PACKAGED_DATASETS_MODULES: 1799 return PackagedDatasetModuleFactory( 1800 path, 1801 data_dir=data_dir, 1802 data_files=data_files, 1803 download_config=download_config, 1804 download_mode=download_mode, -> 1805 ).get_module() 1806 # Try locally 1807 elif path.endswith(filename): File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self) 1135 def get_module(self) -> DatasetModule: 1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix() 1137 patterns = ( 1138 sanitize_patterns(self.data_files) 1139 if self.data_files is not None -> 1140 else get_data_patterns(base_path, download_config=self.download_config) 1141 ) 1142 data_files = DataFilesDict.from_patterns( 1143 patterns, 1144 download_config=self.download_config, 1145 base_path=base_path, 1146 ) 1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config) 501 return _get_data_files_patterns(resolver) 502 except FileNotFoundError: --> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files` ``` ### Expected behavior It should be able to load the whole file as a "dataset" into the dataset variable. But it gives the error "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files." ### Environment info I am using a conda environment.
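A minimal sketch of the resolution reached in the comments: `data_dir` must point at the image folder that contains `metadata.jsonl`, not at the JSONL file itself (the path is a placeholder):

```python
from datasets import load_dataset

# data_dir is the folder holding the images plus metadata.jsonl,
# not the metadata file itself.
dataset = load_dataset("imagefolder", data_dir="path/to/image_folder")
print(dataset)
```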
{ "avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4", "events_url": "https://api.github.com/users/Aki1991/events{/privacy}", "followers_url": "https://api.github.com/users/Aki1991/followers", "following_url": "https://api.github.com/users/Aki1991/following{/other_user}", "gists_url": "https://api.github.com/users/Aki1991/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aki1991", "id": 117731544, "login": "Aki1991", "node_id": "U_kgDOBwRw2A", "organizations_url": "https://api.github.com/users/Aki1991/orgs", "received_events_url": "https://api.github.com/users/Aki1991/received_events", "repos_url": "https://api.github.com/users/Aki1991/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aki1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aki1991/subscriptions", "type": "User", "url": "https://api.github.com/users/Aki1991", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7005/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7001/comments
https://api.github.com/repos/huggingface/datasets/issues/7001/events
https://github.com/huggingface/datasets/issues/7001
2,372,930,879
I_kwDODunzps6NcA0_
7,001
Datasetbuilder Local Download FileNotFoundError
{ "avatar_url": "https://avatars.githubusercontent.com/u/12601271?v=4", "events_url": "https://api.github.com/users/purefall/events{/privacy}", "followers_url": "https://api.github.com/users/purefall/followers", "following_url": "https://api.github.com/users/purefall/following{/other_user}", "gists_url": "https://api.github.com/users/purefall/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/purefall", "id": 12601271, "login": "purefall", "node_id": "MDQ6VXNlcjEyNjAxMjcx", "organizations_url": "https://api.github.com/users/purefall/orgs", "received_events_url": "https://api.github.com/users/purefall/received_events", "repos_url": "https://api.github.com/users/purefall/repos", "site_admin": false, "starred_url": "https://api.github.com/users/purefall/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purefall/subscriptions", "type": "User", "url": "https://api.github.com/users/purefall", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Ok it seems the solution is to use the directory string without the trailing \"/\" which in my case as: \r\n\r\n`parquet_dir = \"~/data/Parquet\" `\r\n\r\nStill i think this is a weird behavior... " ]
2024-06-25T15:02:34
2024-06-25T15:21:19
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug So I was trying to download a dataset and save it as parquet, following the Huggingface [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage). However, during execution I get a FileNotFoundError. I debugged the code and it seems there is a bug: first it creates a .incomplete folder, and before moving its contents the following code deletes the directory [Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984), hence as a result I get: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '``` ### Steps to reproduce the bug ``` from datasets import load_dataset_builder from pathlib import Path parquet_dir = "~/data/Parquet/" Path(parquet_dir).mkdir(parents=True, exist_ok=True) builder = load_dataset_builder( "rotten_tomatoes", ) builder.download_and_prepare(parquet_dir, file_format="parquet") ``` ### Expected behavior It downloads the files and saves them as parquet ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
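A minimal sketch of the workaround noted in the comment: pass the output directory without a trailing slash (the local path is a placeholder):

```python
from pathlib import Path
from datasets import load_dataset_builder

# No trailing "/" on the output directory.
parquet_dir = "data/Parquet"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)

builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(parquet_dir, file_format="parquet")
```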
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7001/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7001/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7000/comments
https://api.github.com/repos/huggingface/datasets/issues/7000/events
https://github.com/huggingface/datasets/issues/7000
2,372,887,585
I_kwDODunzps6Nb2Qh
7,000
IterableDataset: Unsupported ScalarType BFloat16
{ "avatar_url": "https://avatars.githubusercontent.com/u/170015089?v=4", "events_url": "https://api.github.com/users/stoical07/events{/privacy}", "followers_url": "https://api.github.com/users/stoical07/followers", "following_url": "https://api.github.com/users/stoical07/following{/other_user}", "gists_url": "https://api.github.com/users/stoical07/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stoical07", "id": 170015089, "login": "stoical07", "node_id": "U_kgDOCiI5cQ", "organizations_url": "https://api.github.com/users/stoical07/orgs", "received_events_url": "https://api.github.com/users/stoical07/received_events", "repos_url": "https://api.github.com/users/stoical07/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stoical07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stoical07/subscriptions", "type": "User", "url": "https://api.github.com/users/stoical07", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for merging #6607, but unfortunately the issue persists for `IterableDataset` :pensive: ", "Hi ! I opened https://github.com/huggingface/datasets/pull/7002 to fix this bug", "Amazing, thank you so much @lhoestq! :pray:" ]
2024-06-25T14:43:26
2024-06-25T16:04:00
2024-06-25T15:51:53
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug `IterableDataset.from_generator` crashes when using BFloat16: ``` File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor args = (obj.detach().cpu().numpy(),) ^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug ```python import torch from datasets import IterableDataset def demo(x): yield {"x": x} x = torch.tensor([1.], dtype=torch.bfloat16) dataset = IterableDataset.from_generator( demo, gen_kwargs=dict(x=x), ) example = next(iter(dataset)) print(example) ``` ### Expected behavior Code sample should print: ```python {'x': tensor([1.], dtype=torch.bfloat16)} ``` ### Environment info ``` datasets==2.20.0 torch==2.2.2 ```
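Until a fixed version is available, a minimal sketch of a workaround: round-trip the tensor as float32 so the generator arguments can be serialized, then cast back on the consumer side (whether the dtype round-trip is acceptable depends on the use case):

```python
import torch
from datasets import IterableDataset

def demo(x):
    yield {"x": x}

x = torch.tensor([1.0], dtype=torch.bfloat16)

# Pass the tensor as float32 so serializing gen_kwargs does not hit BFloat16.
dataset = IterableDataset.from_generator(demo, gen_kwargs=dict(x=x.to(torch.float32)))

example = next(iter(dataset))
# Cast back to bfloat16 when consuming.
print(torch.as_tensor(example["x"]).to(torch.bfloat16))
```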
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7000/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7000/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6997/comments
https://api.github.com/repos/huggingface/datasets/issues/6997/events
https://github.com/huggingface/datasets/issues/6997
2,371,966,127
I_kwDODunzps6NYVSv
6,997
CI is broken for tests using hf-internal-testing/librispeech_asr_dummy
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-06-25T07:55:44
2024-06-25T08:13:43
2024-06-25T08:13:43
MEMBER
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996 ``` FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other'] Right contains one more item: 'other' Full diff: [ 'clean', - 'other', ] FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None ``` Note that the repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6997/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6997/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6995/comments
https://api.github.com/repos/huggingface/datasets/issues/6995/events
https://github.com/huggingface/datasets/issues/6995
2,370,713,475
I_kwDODunzps6NTjeD
6,995
ImportError when importing datasets.load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/124846947?v=4", "events_url": "https://api.github.com/users/Leo-Lsc/events{/privacy}", "followers_url": "https://api.github.com/users/Leo-Lsc/followers", "following_url": "https://api.github.com/users/Leo-Lsc/following{/other_user}", "gists_url": "https://api.github.com/users/Leo-Lsc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Leo-Lsc", "id": 124846947, "login": "Leo-Lsc", "node_id": "U_kgDOB3EDYw", "organizations_url": "https://api.github.com/users/Leo-Lsc/orgs", "received_events_url": "https://api.github.com/users/Leo-Lsc/received_events", "repos_url": "https://api.github.com/users/Leo-Lsc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Leo-Lsc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Leo-Lsc/subscriptions", "type": "User", "url": "https://api.github.com/users/Leo-Lsc", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not still implemented. You need to update it:\r\n```\r\npip install -U huggingface-hub\r\n```\r\n\r\nNote that `CommitInfo` was implemented in huggingface-hub 0.10.0 and datasets requires \"huggingface-hub>=0.21.2\"", "The version of my huggingface-hub is 0.23.4.", "The error message says there is no CommitInfo in your installed huggingface-hub library:\r\n```\r\nImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\\Anaconda3\\envs\\CS224S\\Lib\\site-packages\\huggingface_hub_init_.py)\r\n```\r\n\r\nAnd this is implemented since version 0.10.0:\r\n- https://github.com/huggingface/huggingface_hub/pull/1066", "I am getting the exact same issue when I `import datasets`. The version of my huggingface-hub is also 0.23.4. I dont see a solution in the comments. Not sure why is this issue closed?", "I closed the issue because the problem is not related to the `datasets` library.\r\n\r\nThe problem is with your local Python environment: it seems corrupted. You could try to remove it and regenerate it again.", "I have recreated my conda environment but still run into the same issue. Here is my environment:\r\n```\r\nconda create --name esm python=3.10\r\n conda activate esm\r\n conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia\r\n pip3 install -r requirements.txt\r\n```\r\nRequirements.txt\r\n```\r\naccelerate\r\ndatasets==2.20.0\r\npyfastx\r\ntransformers\r\nboto3\r\nhuggingface_hub==0.23.4\r\n```\r\n\r\nAnd then I get:\r\n```\r\n>>> import datasets\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/datasets/__init__.py\", line 17, in <module>\r\n from .arrow_dataset import Dataset\r\n File \"/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 63, in <module>\r\n from huggingface_hub import (\r\nImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (/fsx/ubuntu/miniconda3/envs/esm2/lib/python3.10/site-packages/huggingface_hub/__init__.py)\r\n>>>\r\n```\r\n\r\n", "You can check:\r\n```\r\n>>> import huggingface_hub\r\n>>> print(huggingface_hub.__version__)\r\n```", "This is what I see:\r\n```\r\n>>> import huggingface_hub\r\n>>> print(huggingface_hub.__version__)\r\n0.23.4\r\n```", "Installing `chardet` makes it work for some reason" ]
2024-06-24T17:07:22
2024-11-14T01:42:09
2024-06-25T06:11:37
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'. ### Steps to reproduce the bug 1. pip install git+https://github.com/huggingface/datasets 2. from datasets import load_dataset ### Expected behavior ImportError Traceback (most recent call last) Cell In[7], [line 1](vscode-notebook-cell:?execution_count=7&line=1) ----> [1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset [3](vscode-notebook-cell:?execution_count=7&line=3) train_set = load_dataset("mispeech/speechocean762", split="train") [4](vscode-notebook-cell:?execution_count=7&line=4) test_set = load_dataset("mispeech/speechocean762", split="test") File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:[1](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:1)7 1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. [2](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:2) # [3](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:3) # Licensed under the Apache License, Version 2.0 (the "License"); (...) [12](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:12) # See the License for the specific language governing permissions and [13](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:13) # limitations under the License. [15](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:15) __version__ = "2.20.1.dev0" ---> [17](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:17) from .arrow_dataset import Dataset [18](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:18) from .arrow_reader import ReadInstruction [19](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:19) from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63 [61](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:61) import pyarrow.compute as pc [62](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:62) from fsspec.core import url_to_fs ---> [63](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:63) from huggingface_hub import ( [64](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:64) CommitInfo, [65](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:65) CommitOperationAdd, ... [70](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:70) ) [71](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:71) from huggingface_hub.hf_api import RepoFile [72](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:72) from multiprocess import Pool ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?580889ab-0f61-4f37-9214-eaa2b3807f85) or open in a [text editor](command:workbench.action.openLargeOutput?580889ab-0f61-4f37-9214-eaa2b3807f85). 
### Environment info Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub $ datasets-cli env Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module> File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module> from .arrow_dataset import Dataset File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) (CS224S)
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6995/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6995/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6992/comments
https://api.github.com/repos/huggingface/datasets/issues/6992/events
https://github.com/huggingface/datasets/issues/6992
2,367,890,622
I_kwDODunzps6NIyS-
6,992
Dataset with streaming doesn't work with proxy
{ "avatar_url": "https://avatars.githubusercontent.com/u/57779173?v=4", "events_url": "https://api.github.com/users/YHL04/events{/privacy}", "followers_url": "https://api.github.com/users/YHL04/followers", "following_url": "https://api.github.com/users/YHL04/following{/other_user}", "gists_url": "https://api.github.com/users/YHL04/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YHL04", "id": 57779173, "login": "YHL04", "node_id": "MDQ6VXNlcjU3Nzc5MTcz", "organizations_url": "https://api.github.com/users/YHL04/orgs", "received_events_url": "https://api.github.com/users/YHL04/received_events", "repos_url": "https://api.github.com/users/YHL04/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YHL04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YHL04/subscriptions", "type": "User", "url": "https://api.github.com/users/YHL04", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! can you try updating `datasets` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U datasets huggingface_hub\r\n```" ]
2024-06-22T16:12:08
2024-06-25T15:43:05
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I'm currently trying to stream data using dataset since the dataset is too big but it hangs indefinitely without loading the first batch. I use AIMOS which is a supercomputer that uses proxy to connect to the internet. I assume it has to do with the network configurations. I've already set up both HTTP_PROXY and HTTPS_PROXY. streaming = False works fine. ### Steps to reproduce the bug use load_dataset with streaming = True in AIMOS ### Expected behavior does not hang indefinitely and loads batches to start training run ### Environment info _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge _pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 abseil-cpp 20220623.0 h9888cd1_6 conda-forge absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge aiohttp 3.8.6 py311hf118e41_0 aiosignal 1.2.0 pyhd3eb1b0_0 archspec 0.2.3 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge async-timeout 4.0.2 py311h6ffa863_0 attrs 23.1.0 py311h6ffa863_0 av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 aws-c-auth 0.6.24 hb81f6d7_5 conda-forge aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge aws-c-common 0.8.11 h4194056_0 conda-forge aws-c-compression 0.2.16 ha19333d_3 conda-forge aws-c-event-stream 0.2.18 h12a9399_6 conda-forge aws-c-http 0.7.4 ha2cde00_2 conda-forge aws-c-io 0.13.17 h9189062_2 conda-forge aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge aws-checksums 0.1.14 ha19333d_3 conda-forge aws-crt-cpp 0.19.7 hd018011_7 conda-forge aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge blas 1.0 openblas blinker 1.8.2 pyhd8ed1ab_0 conda-forge boltons 23.0.0 py311h6ffa863_0 boost-cpp 1.82.0 h25e6d66_2 bottleneck 1.3.5 py311h34f6284_0 brotli 1.0.9 hf118e41_7 brotli-bin 1.0.9 hf118e41_7 brotli-python 1.0.9 py311h4a02239_7 bzip2 1.0.8 h7b6447c_0 c-ares 1.19.1 hf118e41_0 ca-certificates 2024.6.2 h0f6029e_0 conda-forge cachetools 5.3.3 pyhd8ed1ab_0 conda-forge certifi 2024.6.2 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311hf118e41_3 charset-normalizer 2.0.4 pyhd3eb1b0_0 click 8.1.7 unix_pyh707e725_0 conda-forge conda 24.5.0 py311h1af927a_0 conda-forge conda-content-trust 0.2.0 py311h6ffa863_0 conda-libmamba-solver 23.11.1 py311h6ffa863_0 conda-package-handling 2.2.0 py311h6ffa863_0 conda-package-streaming 0.9.0 py311h6ffa863_0 contourpy 1.0.5 py311h25e6d66_0 cryptography 41.0.3 py311hb0e80e7_0 cudatoolkit 11.8.0 hedcfb66_13 conda-forge cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 cycler 0.11.0 pyhd3eb1b0_0 datasets 2.12.0 py311h6ffa863_0 dill 0.3.6 py311h6ffa863_0 distro 1.9.0 pyhd8ed1ab_0 conda-forge ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 filelock 3.9.0 py311h6ffa863_0 fmt 9.1.0 h25e6d66_0 fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.12.1 hd23a775_0 frozendict 2.4.4 py311hb02d432_0 conda-forge frozenlist 1.4.0 py311hf118e41_0 fsspec 2023.9.2 py311h6ffa863_0 gflags 2.2.2 he6710b0_0 giflib 5.2.1 hf118e41_3 glog 0.6.0 hbe088e0_0 conda-forge gmp 6.3.0 h46f38da_0 conda-forge gmpy2 2.1.5 py311h2758da7_1 conda-forge google-auth 2.30.0 pyhff2d567_0 conda-forge google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge grpc-cpp 1.51.1 h8ba971d_1 conda-forge grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 huggingface_hub 0.17.3 py311h6ffa863_0 icu 73.1 h4a02239_0 idna 3.4 py311h6ffa863_0 importlib-metadata 6.0.0 py311h6ffa863_0 jinja2 3.1.4 pyhd8ed1ab_0 
conda-forge jpeg 9e hf118e41_1 jsonpatch 1.32 pyhd3eb1b0_0 jsonpointer 2.1 pyhd3eb1b0_0 kiwisolver 1.4.4 py311h4a02239_0 krb5 1.20.1 hc019ccd_1 lame 3.100 hb283c62_1003 conda-forge lcms2 2.12 h2045e0b_0 ld_impl_linux-ppc64le 2.38 hec883e6_1 lerc 3.0 h29c3540_0 leveldb 1.23 h24532b4_1 conda-forge libabseil 20220623.0 cxx17_h9235812_6 conda-forge libarchive 3.6.2 hd8ab008_2 libarrow 11.0.0 h837770b_5_cpu conda-forge libboost 1.82.0 haf51a6a_2 libbrotlicommon 1.0.9 hf118e41_7 libbrotlidec 1.0.9 hf118e41_7 libbrotlienc 1.0.9 hf118e41_7 libcrc32c 1.1.2 h3b9df90_0 conda-forge libcurl 8.4.0 h4d62439_0 libdeflate 1.17 hf118e41_1 libedit 3.1.20221030 hf118e41_0 libev 4.33 h140841e_1 libevent 2.1.10 h19c23f1_4 conda-forge libexpat 2.6.2 h46f38da_0 conda-forge libffi 3.4.4 h4a02239_0 libgcc-ng 13.2.0 h31e42bb_10 conda-forge libgfortran-ng 11.2.0 hb3889a9_1 libgfortran5 11.2.0 h1234567_1 libgomp 13.2.0 h31e42bb_10 conda-forge libgoogle-cloud 2.7.0 h11140b6_1 conda-forge libgrpc 1.51.1 h4d29a31_1 conda-forge libmamba 1.5.3 h7c6fafd_0 libmambapy 1.5.3 py311h828bf7b_0 libnghttp2 1.57.0 h44e5816_0 libnsl 2.0.1 ha17a0cc_0 conda-forge libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 libopus 1.3.1 h4e0d66e_1 conda-forge libpng 1.6.39 hf118e41_0 libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 libsolv 0.7.24 h0f529ac_0 libsqlite 3.45.3 hd4bbf49_0 conda-forge libssh2 1.10.0 h50fa78f_2 libstdcxx-ng 13.2.0 h262982c_10 conda-forge libthrift 0.18.0 h82f1162_0 conda-forge libtiff 4.5.1 h4a02239_0 libutf8proc 2.8.0 hb283c62_0 conda-forge libuuid 2.38.1 h4194056_0 conda-forge libvpx 1.13.1 h46f38da_0 conda-forge libwebp 1.3.2 h0f96ee2_0 libwebp-base 1.3.2 hf118e41_0 libxcrypt 4.4.36 ha17a0cc_1 conda-forge libxml2 2.10.4 h18e3229_1 libzlib 1.2.13 h1f2b957_6 conda-forge llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 lmdb 0.9.31 ha17a0cc_1 conda-forge lz4-c 1.9.4 h4a02239_0 markdown 3.4.4 pyhd8ed1ab_0 conda-forge markupsafe 2.1.5 py311h32d8acf_0 conda-forge matplotlib 3.8.0 py311h6ffa863_0 matplotlib-base 3.8.0 py311h52e1fcc_0 menuinst 2.1.1 py311h1af927a_0 conda-forge mpc 1.3.1 heaf1863_0 conda-forge mpfr 4.2.1 haad2271_1 conda-forge mpmath 1.3.0 pyhd8ed1ab_0 conda-forge multidict 6.0.2 py311hf118e41_0 multiprocess 0.70.14 py311h6ffa863_0 munkres 1.1.4 py_0 mypy_extensions 1.0.0 pyha770c72_0 conda-forge nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 ncurses 6.4 h4a02239_0 nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge networkx 2.8.8 pyhd8ed1ab_0 conda-forge nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0 numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 numexpr 2.8.7 py311hc46fc55_0 numpy 1.24.3 py311h148a09e_0 numpy-base 1.24.3 py311h06b82f6_0 oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge openjpeg 2.4.0 hfe35807_0 openssl 3.3.1 h1f2b957_0 conda-forge orc 1.8.2 h341c9a4_2 conda-forge packaging 23.1 py311h6ffa863_0 pandas 2.1.1 py311h52e1fcc_0 pcre2 10.42 h280155c_0 pillow 10.0.1 py311he33076b_0 pip 23.3 py311h6ffa863_0 platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge pluggy 1.0.0 py311h6ffa863_1 pooch 1.8.2 pyhd8ed1ab_0 conda-forge protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 psutil 5.9.8 py311hd26027c_0 conda-forge pyarrow 11.0.0 py311h04a18d5_1 pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge pybind11-abi 4 hd3eb1b0_1 pycosat 0.6.6 py311hf118e41_0 pycparser 2.21 pyhd3eb1b0_0 pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge pyopenssl 23.2.0 py311h6ffa863_0 pyparsing 
3.0.9 py311h6ffa863_0 pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 py311h6ffa863_0 python 3.11.8 h3332dee_0_cpython conda-forge python-dateutil 2.8.2 pyhd3eb1b0_0 python-tzdata 2023.3 pyhd3eb1b0_0 python-xxhash 2.0.2 py311hf118e41_1 python_abi 3.11 4_cp311 conda-forge pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytz 2023.3.post1 py311h6ffa863_0 pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge pyyaml 6.0.1 py311hf118e41_0 re2 2023.02.01 h883269e_0 conda-forge readline 8.2 hf118e41_0 regex 2023.10.3 py311hf118e41_0 reproc 14.2.4 h29c3540_1 reproc-cpp 14.2.4 h29c3540_1 requests 2.31.0 py311h6ffa863_0 requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge responses 0.13.3 pyhd3eb1b0_0 rsa 4.9 pyhd8ed1ab_0 conda-forge ruamel.yaml 0.17.21 py311hf118e41_0 s2n 1.3.37 h5e47323_0 conda-forge safetensors 0.4.0 py311hda16d9e_0 scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 setuptools 68.0.0 py311h6ffa863_0 six 1.16.0 pyhd3eb1b0_1 snappy 1.1.9 h29c3540_0 sqlite 3.41.2 hf118e41_0 sympy 1.12.1 pypyh2585a3b_103 conda-forge tabulate 0.8.10 pyhd8ed1ab_0 conda-forge tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge tk 8.6.13 hd4bbf49_0 conda-forge tokenizers 0.13.3 py311h3d4f45a_0 torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tornado 6.3.3 py311hf118e41_0 tqdm 4.65.0 py311h7837921_0 transformers 4.32.1 py311h6ffa863_0 truststore 0.8.0 py311h6ffa863_0 typing-extensions 4.7.1 py311h6ffa863_0 typing_extensions 4.7.1 py311h6ffa863_0 typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge tzdata 2023c h04d1e81_0 urllib3 1.26.18 py311h6ffa863_0 utf8proc 2.6.1 h140841e_0 werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge wheel 0.41.2 py311h6ffa863_0 xxhash 0.8.0 h140841e_3 xz 5.4.2 hf118e41_0 yaml 0.2.5 h7b6447c_0 yaml-cpp 0.8.0 h4a02239_0 yarl 1.8.1 py311hf118e41_0 zipp 3.11.0 py311h6ffa863_0 zlib 1.2.13 h1f2b957_6 conda-forge zstandard 0.19.0 py311hf118e41_0 zstd 1.5.5 h57e4825_0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6992/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6990/comments
https://api.github.com/repos/huggingface/datasets/issues/6990/events
https://github.com/huggingface/datasets/issues/6990
2,366,660,785
I_kwDODunzps6NEGCx
6,990
Problematic rank after calling `split_dataset_by_node` twice
{ "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yzhangcs", "id": 18402347, "login": "yzhangcs", "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "repos_url": "https://api.github.com/users/yzhangcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "type": "User", "url": "https://api.github.com/users/yzhangcs", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "ah yes good catch ! feel free to open a PR with your suggested fix" ]
2024-06-21T14:25:26
2024-06-25T16:19:19
2024-06-25T16:19:19
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I'm trying to split an `IterableDataset` with `split_dataset_by_node`. But when splitting an already split dataset, the resulting `rank` is greater than `world_size`. ### Steps to reproduce the bug Here is the minimal code for reproduction: ```py >>> from datasets import load_dataset >>> from datasets.distributed import split_dataset_by_node >>> dataset = load_dataset('fla-hub/slimpajama-test', split='train', streaming=True) >>> dataset = split_dataset_by_node(dataset, 1, 32) >>> dataset._distributed DistributedConfig(rank=1, world_size=32) >>> dataset = split_dataset_by_node(dataset, 1, 15) >>> dataset._distributed DistributedConfig(rank=481, world_size=480) ``` As you can see, the second rank 481 > 480, which is problematic. ### Expected behavior I think this error comes from this line @lhoestq https://github.com/huggingface/datasets/blob/a6ccf944e42c1a84de81bf326accab9999b86c90/src/datasets/iterable_dataset.py#L2943-L2944 We may need to obtain the rank first. Then the above code gives ```py >>> dataset._distributed DistributedConfig(rank=16, world_size=480) ``` ### Environment info datasets==2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6990/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6989/comments
https://api.github.com/repos/huggingface/datasets/issues/6989/events
https://github.com/huggingface/datasets/issues/6989
2,365,556,449
I_kwDODunzps6M_4bh
6,989
cache in nfs error
{ "avatar_url": "https://avatars.githubusercontent.com/u/66729924?v=4", "events_url": "https://api.github.com/users/simplew2011/events{/privacy}", "followers_url": "https://api.github.com/users/simplew2011/followers", "following_url": "https://api.github.com/users/simplew2011/following{/other_user}", "gists_url": "https://api.github.com/users/simplew2011/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simplew2011", "id": 66729924, "login": "simplew2011", "node_id": "MDQ6VXNlcjY2NzI5OTI0", "organizations_url": "https://api.github.com/users/simplew2011/orgs", "received_events_url": "https://api.github.com/users/simplew2011/received_events", "repos_url": "https://api.github.com/users/simplew2011/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simplew2011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simplew2011/subscriptions", "type": "User", "url": "https://api.github.com/users/simplew2011", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hey @simplew2011 I am curious if you know of a workaround, or possible implications of letting the code run?" ]
2024-06-21T02:09:22
2025-01-29T11:44:04
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug - When reading dataset, a cache will be generated to the ~/. cache/huggingface/datasets directory - When using .map and .filter operations, runtime cache will be generated to the /tmp/hf_datasets-* directory - The default is to use the path of tempfile.tempdir - If I modify this path to the NFS disk, an error will be reported, but the program will continue to run - https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L257 ``` Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs000000038330a012000030b4' Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, 
dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs0000000400064d4a000030e5' ``` ### Steps to reproduce the bug ``` import os import time import tempfile from datasets import load_dataset def add_column(sample): # print(type(sample)) # time.sleep(0.1) sample['__ds__stats__'] = {'data': 123} return sample def filt_column(sample): # print(type(sample)) if len(sample['content']) > 10: return True else: return False if __name__ == '__main__': input_dir = '/mnt/temp/CN/small' # some json dataset dataset = load_dataset('json', data_dir=input_dir) temp_dir = '/media/release/release/temp/temp' # a nfs folder os.makedirs(temp_dir, exist_ok=True) # change huggingface-datasets runtime cache in nfs(default in /tmp) tempfile.tempdir = temp_dir aa = dataset.map(add_column, num_proc=64) aa = aa.filter(filt_column, num_proc=64) print(aa) ``` ### Expected behavior no error occur ### Environment info datasets==2.18.0 ubuntu 20.04
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6989/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6989/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6985/comments
https://api.github.com/repos/huggingface/datasets/issues/6985/events
https://github.com/huggingface/datasets/issues/6985
2,362,378,276
I_kwDODunzps6Mzwgk
6,985
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/26666267?v=4", "events_url": "https://api.github.com/users/firmai/events{/privacy}", "followers_url": "https://api.github.com/users/firmai/followers", "following_url": "https://api.github.com/users/firmai/following{/other_user}", "gists_url": "https://api.github.com/users/firmai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/firmai", "id": 26666267, "login": "firmai", "node_id": "MDQ6VXNlcjI2NjY2MjY3", "organizations_url": "https://api.github.com/users/firmai/orgs", "received_events_url": "https://api.github.com/users/firmai/received_events", "repos_url": "https://api.github.com/users/firmai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/firmai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/firmai/subscriptions", "type": "User", "url": "https://api.github.com/users/firmai", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r\n```\r\n\r\nTherefore it must be caused by some problem with your pyarrow installation. I would recommend you uninstall and install pyarrow again.\r\n\r\nI also see that it seems you use conda to install pyarrow. Please note that pyarrow offers 3 different packages in conda-forge: https://arrow.apache.org/docs/python/install.html#using-conda\r\n```\r\nconda install -c conda-forge pyarrow\r\n```\r\n> While the pyarrow [conda-forge](https://conda-forge.org/) package is the right choice for most users, both a minimal and maximal variant of the package exist, either of which may be better for your use case. See [Differences between conda-forge packages](https://arrow.apache.org/docs/python/install.html#python-conda-differences).\r\n\r\nPlease, make sure you install the right one: I guess it is either `pyarrow` (or `pyarrow-all`).", "I have same issue, please downgrade pyarrow==15.0.2, it seem datasets library need to be fix", "It is not a problem with the `datasets` library: we support latest version of `pyarrow` and our Continuous Integration tests are using pyarrow 16.1.0 without any problem.\r\n\r\nThe error reported here is raised when importing pyarrow.parquet:\r\n```\r\n---> 29 import pyarrow.parquet as pq\r\n```\r\n```\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20\r\n 1 # Licensed to the Apache Software Foundation (ASF) under one\r\n 2 # or more contributor license agreements. See the NOTICE file\r\n 3 # distributed with this work for additional information\r\n (...)\r\n 17 \r\n 18 # flake8: noqa\r\n---> 20 from .core import *\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33\r\n 30 import pyarrow as pa\r\n 32 try:\r\n---> 33 import pyarrow._parquet as _parquet\r\n 34 except ImportError as exc:\r\n 35 raise ImportError(\r\n 36 \"The pyarrow installation is not built with support \"\r\n 37 f\"for the Parquet file format ({str(exc)})\"\r\n 38 ) from None\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet()\r\n\r\nAttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'\r\n```\r\n\r\nThis can only be explained if pyarrow was not properly installed. \r\n\r\nIf the user just installed `pyarrow-core` from conda-forge, then its parquet subpackage is not installed and cannot be imported. You can check pyarrow docs:\r\n- Differences between conda-forge packages: https://arrow.apache.org/docs/python/install.html#python-conda-differences\r\n> The `pyarrow-core` package includes the following functionality:\r\n> ...\r\n> The `pyarrow` package adds the following:\r\n> ...\r\n> Parquet (i.e., `pyarrow.parquet`)", "I'm still seeing the same issue on datasets version 2.20.0. I installed pyarrow version 17.0.0 with `pip install`. Downgrading to pyarrow==15.0.2 also did not resolve the issue.", "@RenaLu As of UTC time 07/27/2024 23:20:00, I hit the same issue and reinstalling `pyarrow==15.0.2` resolved the issue for me. You may want to check if your `pyarrow` is successfully downgraded.", "I can confirm @albertvillanova's [analysis & suggestion](https://github.com/huggingface/datasets/issues/6985#issuecomment-2188022888) - `pip uninstall pyarrow` followed by `pip install pyarrow` solved it for me. 
\r\n\r\nI suspect this is because pyarrow was initially installed as a pandas extra `pandas[...,parquet,...]`, then pip-upgrading pyarrow resulted in the issue.\r\n\r\n@RenaLu did you uninstall pyarrow between changing versions?", "After trying all the above combinations and failing, running the following in the notebook fixed the error!!\r\n`!conda install -c conda-forge -y datasets pyarrow libparquet`\r\nNote : Uninstall any existing dataset and pyarrow installations in the env before executing the above.", "If on colab, remember to restart the runtime so the new pyarrow is imported. I also upgraded pip which is recommended in pyarrow's installation instructions.", "fixed doing this: !pip install --upgrade datasets\r\n\r\n!pip show pyarrow\r\n!pip show datasets\r\n!pip uninstall -y pyarrow\r\n!pip install pyarrow --no-cache-dir\r\n!pip install pyarrow\r\n!pip install transformers\r\n!pip install --upgrade datasets\r\n!pip install datasets\r\n! pip install pyarrow\r\n! pip install pyarrow.parquet\r\n!pip install transformers\r\n\r\n# Import necessary libraries\r\nfrom datasets import load_dataset\r\nimport pyarrow.parquet as pq\r\nimport pyarrow.lib as lib\r\nimport pandas as pd\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments\r\n", "but now i cant run test, so i remove it, ERROR: Could not find a version that satisfies the requirement pyarrow.parquet (from versions: none)\r\nERROR: No matching distribution found for pyarrow.parquet will still running but will tell you this", "I have the same question right now, python3.12 and transformers4.44.2, I have not fixed it", "I did most of the suggestions above and I still got the error, but after restarting my computer the error was fixed", "how to fix this, still have this error. ", "have we figured out what causes it?\n" ]
2024-06-19T13:22:28
2025-03-14T18:47:53
2024-06-25T05:40:51
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I have been struggling with this for two days, any help would be appreciated. Python 3.10 ``` from setfit import SetFitModel from huggingface_hub import login access_token_read = "cccxxxccc" # Authenticate with the Hugging Face Hub login(token=access_token_read) # Load the models from the Hugging Face Hub trainer_relv = SetFitModel.from_pretrained("snowdere/trainer_relevance") trainer_trust = SetFitModel.from_pretrained("snowdere/trainer_trust") trainer_sent = SetFitModel.from_pretrained("snowdere/trainer_sent") trainer_topic = SetFitModel.from_pretrained("snowdere/trainer_topic") ``` ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 from setfit import SetFitModel 2 from huggingface_hub import login 4 access_token_read = "ccsddsds" File /opt/conda/lib/python3.10/site-packages/setfit/__init__.py:7 4 import os 5 import warnings ----> 7 from .data import get_templated_dataset, sample_dataset 8 from .model_card import SetFitModelCardData 9 from .modeling import SetFitHead, SetFitModel File /opt/conda/lib/python3.10/site-packages/setfit/data.py:5 3 import pandas as pd 4 import torch ----> 5 from datasets import Dataset, DatasetDict, load_dataset 6 from torch.utils.data import Dataset as TorchDataset 8 from . import logging File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18 1 # ruff: noqa 2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 3 # (...) 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 16 __version__ = "2.19.0" ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:76 73 from tqdm.contrib.concurrent import thread_map 75 from . import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns File /opt/conda/lib/python3.10/site-packages/datasets/arrow_reader.py:29 26 from typing import TYPE_CHECKING, List, Optional, Union 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 32 from .download.download_config import DownloadConfig File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information (...) 
17 18 # flake8: noqa ---> 20 from .core import * File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33 30 import pyarrow as pa 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( 36 "The pyarrow installation is not built with support " 37 f"for the Parquet file format ({str(exc)})" 38 ) from None File /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ``` setfit: 1.0.3 transformers: 4.41.2 lingua-language-detector: 2.0.2 polars: 0.20.31 lightning: None google-cloud-bigquery: 3.24.0 shapely: 2.0.4 pyarrow: 16.0.0 ### Steps to reproduce the bug I have tried all version combinations for Dataset and Pyarrow; they all have the same error since a few days ago. This is across multiple scripts I have. ### Expected behavior Just run normally. ### Environment info 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6985/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6985/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6984/comments
https://api.github.com/repos/huggingface/datasets/issues/6984/events
https://github.com/huggingface/datasets/issues/6984
2,362,143,554
I_kwDODunzps6My3NC
6,984
Convert polars DataFrame back to datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to `Sequence` IMO (maybe with a parameter `large=True` ?)" ]
2024-06-19T11:38:48
2024-08-12T14:43:46
2024-08-12T14:43:46
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request This returns an error. ```python from datasets import Dataset dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]}) Dataset.from_polars(dsdf.to_polars()) ``` ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent. ### Motivation When a dataset contains the Sequence data type, it is converted to the Arrow type large_list. However, the reverse (from large_list to Sequence) does not work. ### Your contribution No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6984/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6984/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6982/comments
https://api.github.com/repos/huggingface/datasets/issues/6982/events
https://github.com/huggingface/datasets/issues/6982
2,361,661,469
I_kwDODunzps6MxBgd
6,982
cannot split dataset when using load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17721894?v=4", "events_url": "https://api.github.com/users/cybest0608/events{/privacy}", "followers_url": "https://api.github.com/users/cybest0608/followers", "following_url": "https://api.github.com/users/cybest0608/following{/other_user}", "gists_url": "https://api.github.com/users/cybest0608/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cybest0608", "id": 17721894, "login": "cybest0608", "node_id": "MDQ6VXNlcjE3NzIxODk0", "organizations_url": "https://api.github.com/users/cybest0608/orgs", "received_events_url": "https://api.github.com/users/cybest0608/received_events", "repos_url": "https://api.github.com/users/cybest0608/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cybest0608/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cybest0608/subscriptions", "type": "User", "url": "https://api.github.com/users/cybest0608", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "it seems the bug will happened in all windows system, I tried it in windows8.1, 10, 11 and all of them failed. But it won't happened in the Linux(Ubuntu and Centos7) and Mac (both my virtual and physical machine). I still don't know what the problem is. May be related to the path? I cannot run the split file in my windows server which created in Linux (even I replace the path in the arrow document)....work for it for a week but still cannot fix it .....upset", "Have you properly logged in? Are you using the a valid token?\r\n\r\nNote that this dataset is gated and you must follow the right procedure to be able to access it. You can find more info in the docs: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user", "> Have you properly logged in? Are you using the a valid token?\r\n> \r\n> Note that this dataset is gated and you must follow the right procedure to be able to access it. You can find more info in the docs: https://huggingface.co/docs/hub/datasets-gated#access-gated-datasets-as-a-user\r\n\r\nI finally found it what happened. It is not about the logging. When I copy the dataset from its original path (C:/Users/cybes/.cache/huggingface/datasets/downloads/extracted/XXX/cv-corpus-7.0-2021-07-21) to the desktop and load each tsv in it one by one , when I load the test spilt, the following warning occurs:\r\n\"ArrowInvalid: Failed to parse string: 'Benchmark' as a scalar of type double\"\r\n\r\nThen I manually deleted them in the \"segment\", the error won't happen anymore, even I replace the original path with these revised tsv and use the previous loading method (common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\", trust_remote_code=True)). It can work properly." ]
2024-06-19T08:07:16
2024-07-08T06:20:16
2024-07-08T06:20:16
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug when I use load_dataset methods to load mozilla-foundation/common_voice_7_0, it can successfully download and extracted the dataset but It cannot generating the arrow document, This bug happened in my server, my laptop, so as #6906 , but it won't happen in the google colab. I work for it for days, even I load the datasets from local path, it can Generating train split and validation split but bug happen again in test split. ### Steps to reproduce the bug from datasets import load_dataset, load_metric, Audio common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True) ### Expected behavior ``` { "name": "ValueError", "message": "Instruction \"train\" corresponds to no data!", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 3 1 from datasets import load_dataset, load_metric, Audio ----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True) 4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2622 # Build dataset for splits 2623 keep_in_memory = ( 2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2625 ) -> 2626 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2627 # Rename and cast features to match task schema 2628 if task is not None: 2629 # To avoid issuing the same warning twice File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1265 # Create a dataset for each of the given splits -> 1266 datasets = map_nested( 1267 partial( 1268 self._build_single_dataset, 1269 run_post_process=run_post_process, 1270 verification_mode=verification_mode, 1271 in_memory=in_memory, 1272 ), 1273 split, 1274 map_tuple=True, 1275 disable_tqdm=True, 1276 ) 1277 if isinstance(datasets, dict): 1278 datasets = DatasetDict(datasets) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 482 if batched: 483 data_struct = [data_struct] --> 484 mapped = function(data_struct) 485 if batched: 486 mapped = mapped[0] File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1293 split = Split(split) 1295 # Build base dataset -> 1296 ds = self._as_dataset( 1297 split=split, 1298 in_memory=in_memory, 1299 ) 1300 
if run_post_process: 1301 for resource_file_name in self._post_processing_resources(split).values(): File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory) 1368 if self._check_legacy_cache(): 1369 dataset_name = self.name -> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1371 name=dataset_name, 1372 instructions=split, 1373 split_infos=self.info.splits.values(), 1374 in_memory=in_memory, 1375 ) 1376 fingerprint = self._get_dataset_fingerprint(split) 1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory) 254 msg = f'Instruction \"{instructions}\" corresponds to no data!' 255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!' --> 256 raise ValueError(msg) 257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ValueError: Instruction \"train\" corresponds to no data!" } ``` ### Environment info Environment: python 3.9 windows 11 pro VScode+jupyter
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6982/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6982/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6980/comments
https://api.github.com/repos/huggingface/datasets/issues/6980/events
https://github.com/huggingface/datasets/issues/6980
2,360,909,930
I_kwDODunzps6MuKBq
6,980
Support NumPy 2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4", "events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}", "followers_url": "https://api.github.com/users/NeilGirdhar/followers", "following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}", "gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NeilGirdhar", "id": 730137, "login": "NeilGirdhar", "node_id": "MDQ6VXNlcjczMDEzNw==", "organizations_url": "https://api.github.com/users/NeilGirdhar/orgs", "received_events_url": "https://api.github.com/users/NeilGirdhar/received_events", "repos_url": "https://api.github.com/users/NeilGirdhar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions", "type": "User", "url": "https://api.github.com/users/NeilGirdhar", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2024-06-18T23:30:22
2024-07-12T12:04:54
2024-07-12T12:04:53
CONTRIBUTOR
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Support NumPy 2.0. ### Motivation NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API. Besides that, NumPy 2 provides a cleaner interface than NumPy 1. ### Tasks NumPy 2.0 was released for testing so that libraries could ensure compatibility [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755). What needs to be done for HuggingFace to support Numpy 2? - [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976 - [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6980/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6979/comments
https://api.github.com/repos/huggingface/datasets/issues/6979/events
https://github.com/huggingface/datasets/issues/6979
2,360,175,363
I_kwDODunzps6MrWsD
6,979
How can I load partial parquet files only?
{ "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucasjinreal", "id": 21303438, "login": "lucasjinreal", "node_id": "MDQ6VXNlcjIxMzAzNDM4", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "type": "User", "url": "https://api.github.com/users/lucasjinreal", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hello,\r\n\r\nHave you tried loading the dataset in streaming mode? [Documentation](https://huggingface.co/docs/datasets/v2.20.0/stream)\r\n\r\nThis way you wouldn't have to load it all. Also, let's be nice to Parquet, it's a really nice technology and we don't need to be mean :)", "I have downloaded part of it, just want to know how to load part of it, stream mode is not work for me since my network (in china) not stable, I don't want do it all again and again.\r\n\r\nJust curious, doesn't there a way to load part of it?", "Could you convert the IterableDataset to a Dataset after taking the first 100 rows with `.take`? This way, you would have a local copy of the first 100 rows on your system and thus won't need to download. Would that work?\r\n\r\nHere is a [SO question](https://stackoverflow.com/questions/76227219/can-i-convert-an-iterabledataset-to-dataset) detailing how to do the conversion.", "I mean, the parquet is like:\r\n\r\n00000-0143554\r\n00001-0143554\r\n00002-0143554\r\n...\r\n00100-0143554\r\n...\r\n09100-0143554\r\n\r\nI just downloaded the first 9900 part of it. \r\n\r\nI can not load with load_dataset, it throw an error says my file is not same as parquet all amount.\r\n\r\nHow could I load the only I have? \r\n\r\n( I really don't want downlaod them all, cause, I don't need all, and pulus, its huge.... )\r\n\r\nAs I said, I have donwloaded about 9999... It's not about stream... I just wnat to konw how to load offline... part....", "Hi, @lucasjinreal.\r\n\r\nI am not sure of understanding your issue. What is the error message and stack trace you get? What version of `datasets` are you using? Could you provide a reproducible example?\r\n\r\nWithout knowing all those details, I would naively say that you can load whatever number of Parquet files by using the \"parquet\" loader: https://huggingface.co/docs/datasets/loading#parquet\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"data/train-001*-of-00314.parquet\", split=\"train\")\r\n```", "@albertvillanova Not sure you have tested with this or not, but I have tried,\r\n\r\nthe only error I got is it still laodding all parquet with a progress bar maxium to the whole number 014354, and it loads my 0 - 000999 part, then throws an error.\r\n\r\nSays Numinfo is not same.\r\n\r\nI am so confused,", "Yes, my code snippet works.\n\nCould you copy-paste your code and the output? 
Otherwise we are not able to know what the issue is.", "@albertvillanova Hi, thanks for the tracing of the issue.\r\n\r\nThis is the output:\r\n\r\n```\r\nython get_llava_recap_cc3m.py\r\nGenerating train split: 3%|███▋ | 101910/3199866 [00:16<08:30, 6065.67 examples/s]\r\nTraceback (most recent call last):\r\n File \"get_llava_recap_cc3m.py\", line 31, in <module>\r\n dataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1118, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/info_utils.py\", line 101, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=156885281898.75, num_examples=3199866, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=4994080770, num_examples=101910, shard_lengths=[10191, 10291, 10291, 10291, 10291, 10191, 10191, 10291, 10291, 9591], dataset_name='llava-recap-cc3m')}]\r\n```\r\n\r\nthis is my code:\r\n\r\n```\r\ndataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n```\r\n\r\nMy situation and requirements:\r\n\r\n00314 is all, but I downlaode about 150, half of it, as you can see, i used `0000*-of-00314.` which should be at most 99 file being loaded.\r\n\r\nBut it just fail.\r\n\r\nCan u understand my issue now?\r\n\r\nIf so, then **do not** suggest me with stream, Just want to know, is there a way to load part if it...... **and please don't say you can not replicate my issue when you have downloaded them all**, my english is not good, but I think all situations and all prerequists I have addressed already.\r\n\r\n", "I see you did not use the \"parquet\" loader as I suggested in my code snippet above: https://github.com/huggingface/datasets/issues/6979#issuecomment-2182031415\r\nPlease try passing \"parquet\" instead of \"llava-recap-cc3m/\" to `load_dataset`, and the complete path to data files in `data_files`:\r\n```python\r\nload_dataset(\"parquet\", data_files=\"llava-recap-cc3m/data/train-001*-of-00314.parquet\")\r\n```", "Let me explain that you get the error because of this content within the `dataset_info` YAML tag in the `llava-recap-cc3m/README.md`:\r\n```\r\n - name: train\r\n num_bytes: 156885281898.75\r\n num_examples: 3199866\r\n```\r\n\r\nBy default, if there is that content in the README file, `load_dataset` performs a basic check to verify it the generated number of examples matches the expected one and raises a `NonMatchingSplitsSizesError` if that is not the case. 
\r\n\r\nYou can avoid this basic check by passing `verification_mode=\"no_checks\"`:\r\n```python\r\nload_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\", verification_mode=\"no_checks\")\r\n```", "And please, next time you have an issue, please fill the Bug template issue with all the necessary information: https://github.com/huggingface/datasets/issues/new?assignees=&labels=&projects=&template=bug-report.yml\r\n\r\nOtherwise it is very difficult for us to understand the underlying problem and to propose a pertinent solution.", "thank u albert!\r\n\r\nIt solved my issue!" ]
2024-06-18T15:44:16
2024-06-21T17:09:32
2024-06-21T13:32:50
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
I have a HUGE dataset, about 14 TB, and I am unable to download all the parquet files. I just took about 100 of them. dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet") How can I load just parts 000 - 100 out of all 00314, i.e. only partially? I searched the whole net and didn't find a solution. **This is stupid if it isn't supported, and I swear I won't use parquet any more**
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6979/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6979/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6977/comments
https://api.github.com/repos/huggingface/datasets/issues/6977/events
https://github.com/huggingface/datasets/issues/6977
2,359,295,045
I_kwDODunzps6Mn_xF
6,977
load json file error with v2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4", "events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}", "followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers", "following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiaoyaolangzhi", "id": 15037766, "login": "xiaoyaolangzhi", "node_id": "MDQ6VXNlcjE1MDM3NzY2", "organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs", "received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events", "repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions", "type": "User", "url": "https://api.github.com/users/xiaoyaolangzhi", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting, @xiaoyaolangzhi.\r\n\r\nIndeed, we are currently requiring `pandas` >= 2.0.0.\r\n\r\nYou will need to update pandas in your local environment:\r\n```\r\npip install -U pandas\r\n``` ", "Thank you very much." ]
2024-06-18T08:41:01
2024-06-18T10:06:10
2024-06-18T10:06:09
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug ``` load_dataset(path="json", data_files="./test.json") ``` ``` Generating train split: 0 examples [00:00, ? examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables pa_table = paj.read_json( File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single for _, table in generator: File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/app/t1.py", line 11, in <module> load_dataset(path=data_path, data_files="./t2.json") File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ``` import pandas as pd with open("./test.json", "r") as f: df = pd.read_json(f, dtype_backend="pyarrow") ``` ``` Traceback (most recent call last): File "/app/t3.py", line 3, in <module> df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' ``` ### Steps to reproduce the bug . ### Expected behavior . ### Environment info ``` datasets 2.20.0 pandas 1.5.3 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4", "events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}", "followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers", "following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiaoyaolangzhi", "id": 15037766, "login": "xiaoyaolangzhi", "node_id": "MDQ6VXNlcjE1MDM3NzY2", "organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs", "received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events", "repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions", "type": "User", "url": "https://api.github.com/users/xiaoyaolangzhi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6977/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6977/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6973/comments
https://api.github.com/repos/huggingface/datasets/issues/6973/events
https://github.com/huggingface/datasets/issues/6973
2,355,517,362
I_kwDODunzps6MZley
6,973
IndexError during training with Squad dataset and T5-small model
{ "avatar_url": "https://avatars.githubusercontent.com/u/151521233?v=4", "events_url": "https://api.github.com/users/ramtunguturi36/events{/privacy}", "followers_url": "https://api.github.com/users/ramtunguturi36/followers", "following_url": "https://api.github.com/users/ramtunguturi36/following{/other_user}", "gists_url": "https://api.github.com/users/ramtunguturi36/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ramtunguturi36", "id": 151521233, "login": "ramtunguturi36", "node_id": "U_kgDOCQgH0Q", "organizations_url": "https://api.github.com/users/ramtunguturi36/orgs", "received_events_url": "https://api.github.com/users/ramtunguturi36/received_events", "repos_url": "https://api.github.com/users/ramtunguturi36/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ramtunguturi36/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ramtunguturi36/subscriptions", "type": "User", "url": "https://api.github.com/users/ramtunguturi36", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704", "Closing this issue because it was a reported and fixed in transformers." ]
2024-06-16T07:53:54
2024-07-01T11:25:40
2024-07-01T11:25:40
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Describe the bug I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility. ### Steps to reproduce the bug 1.Install the required libraries: !pip install transformers datasets 2.Run the following code: !pip install transformers datasets import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding # Load a small, publicly available dataset from datasets import load_dataset dataset = load_dataset("squad", split="train[:100]") # Use a small subset for testing # Load a pre-trained model and tokenizer model_name = "t5-small" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # Define a basic data collator data_collator = DataCollatorWithPadding(tokenizer=tokenizer) # Define training arguments training_args = TrainingArguments( output_dir="./results", per_device_train_batch_size=2, num_train_epochs=1, ) # Create a trainer trainer = Trainer( model=model, args=training_args, train_dataset=dataset, data_collator=data_collator, ) # Train the model trainer.train() ### Expected behavior --------------------------------------------------------------------------- IndexError Traceback (most recent call last) [<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>() 32 33 # Train the model ---> 34 trainer.train() 10 frames [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 427 if isinstance(key, int): 428 if (key < 0 and key + size < 0) or (key >= size): --> 429 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") 430 return 431 elif isinstance(key, slice): IndexError: Invalid key: 42 is out of bounds for size 0 ### Environment info transformers version:4.41.2 datasets version:1.18.4 Python version:3.10.12
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6973/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6973/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6967/comments
https://api.github.com/repos/huggingface/datasets/issues/6967/events
https://github.com/huggingface/datasets/issues/6967
2,349,146,398
I_kwDODunzps6MBSEe
6,967
Method to load Laion400m
{ "avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4", "events_url": "https://api.github.com/users/humanely/events{/privacy}", "followers_url": "https://api.github.com/users/humanely/followers", "following_url": "https://api.github.com/users/humanely/following{/other_user}", "gists_url": "https://api.github.com/users/humanely/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/humanely", "id": 6862868, "login": "humanely", "node_id": "MDQ6VXNlcjY4NjI4Njg=", "organizations_url": "https://api.github.com/users/humanely/orgs", "received_events_url": "https://api.github.com/users/humanely/received_events", "repos_url": "https://api.github.com/users/humanely/repos", "site_admin": false, "starred_url": "https://api.github.com/users/humanely/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/humanely/subscriptions", "type": "User", "url": "https://api.github.com/users/humanely", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-06-12T16:04:04
2024-06-12T16:04:04
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99 ### Motivation Trial and experimentation are the key pivot of HF. It would be great if HF could load embedding files seamlessly. ### Your contribution I can write the loader with some help.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6967/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6961/comments
https://api.github.com/repos/huggingface/datasets/issues/6961/events
https://github.com/huggingface/datasets/issues/6961
2,342,022,418
I_kwDODunzps6LmG0S
6,961
Manual downloads should count as downloads
{ "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/umarbutler", "id": 8473183, "login": "umarbutler", "node_id": "MDQ6VXNlcjg0NzMxODM=", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "repos_url": "https://api.github.com/users/umarbutler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "type": "User", "url": "https://api.github.com/users/umarbutler", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience" ]
2024-06-09T04:52:06
2024-06-13T16:05:00
null
NONE
{ "completed": 0, "percent_completed": 0, "total": 0 }
null
### Feature request I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats ### Motivation This would ensure that downloads are accurately reported to end users. ### Your contribution N/A
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6961/timeline
null
null
null
null
false