| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3770
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3770/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3770/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3770/events
|
https://github.com/huggingface/datasets/issues/3770
| 1,146,336,667
|
I_kwDODunzps5EU7Wb
| 3,770
|
DuplicatedKeysError on msr_sqa dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kolk",
"id": 9049591,
"login": "kolk",
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"repos_url": "https://api.github.com/users/kolk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kolk",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. "
] | 2022-02-22T00:43:33
| 2022-02-22T08:12:39
| 2022-02-22T08:12:39
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
### Describe the bug
Failure to generate dataset msr_sqa because of duplicate keys.
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("msr_sqa")
```
### Expected results
The example keys should be unique.
### Actual results
```
>>> load_dataset("msr_sqa")
Downloading:
6.72k/? [00:00<00:00, 148kB/s]
Downloading:
2.93k/? [00:00<00:00, 53.8kB/s]
Using custom data configuration default
Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1...
Downloading: 100%
4.80M/4.80M [00:00<00:00, 7.49MB/s]
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator)
1080 example = self.info.features.encode_example(record)
-> 1081 writer.write(example, key)
1082 finally:
8 frames
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self)
449 for hash, key in self.hkey_record:
450 if hash in tmp_record:
--> 451 raise DuplicatedKeysError(key)
452 else:
453 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
```
### Environment info
- `datasets` version: 1.18.3
- Platform: Google Colab notebook
- Python version: 3.7
- PyArrow version: 6.0.1
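For reference, a minimal sketch of how a loading script can guarantee unique, deterministic keys (illustrative only: this is not the actual `msr_sqa` script, and the field names are hypothetical):
```python
# Hypothetical _generate_examples sketch; not the msr_sqa loading script.
import csv

def _generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for row_idx, row in enumerate(csv.DictReader(f, delimiter="\t")):
            # An id like "nt-639" can repeat across rows; combining it with
            # the row index makes the key unique and deterministic.
            yield f"{row.get('id', 'row')}_{row_idx}", row
```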
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3770/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3770/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3769
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3769/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3769/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3769/events
|
https://github.com/huggingface/datasets/issues/3769
| 1,146,258,023
|
I_kwDODunzps5EUoJn
| 3,769
|
`dataset = dataset.map()` causes faiss index lost
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13076552?v=4",
"events_url": "https://api.github.com/users/Oaklight/events{/privacy}",
"followers_url": "https://api.github.com/users/Oaklight/followers",
"following_url": "https://api.github.com/users/Oaklight/following{/other_user}",
"gists_url": "https://api.github.com/users/Oaklight/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oaklight",
"id": 13076552,
"login": "Oaklight",
"node_id": "MDQ6VXNlcjEzMDc2NTUy",
"organizations_url": "https://api.github.com/users/Oaklight/orgs",
"received_events_url": "https://api.github.com/users/Oaklight/received_events",
"repos_url": "https://api.github.com/users/Oaklight/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oaklight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oaklight/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oaklight",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)\r\n\r\nI guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what do you think ?",
"doing `.add_column(\"x\",x_data)` also removes the index. the new column might be irrelevant to the index so I don't think it should drop. \r\n\r\nMinimal example\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\n\r\ndata=load_dataset(\"ceyda/cats_vs_dogs_sample\") #just a test dataset\r\ndata=data[\"train\"]\r\nembd_data=data.map(lambda x: {\"emb\":np.random.uniform(-1,0,50).astype(np.float32)})\r\nembd_data.add_faiss_index(column=\"emb\")\r\nprint(embd_data.list_indexes())\r\nembd_data=embd_data.add_column(\"x\",[0]*data.num_rows)\r\nprint(embd_data.list_indexes())\r\n```",
"I agree `add_column` shouldn't drop the index indeed ! Is it something you'd like to contribute ? I think it's just a matter of copying the `self._indexes` dictionary to the output dataset"
] | 2022-02-21T21:59:23
| 2022-06-27T14:56:29
| null |
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
Assigning the resulting dataset back to the original dataset causes loss of the faiss index.
## Steps to reproduce the bug
`my_dataset` is a regular loaded dataset. It's part of a custom dataset structure.
```python
self.dataset.add_faiss_index('embeddings')
self.dataset.list_indexes()
# ['embeddings']
dataset2 = my_dataset.map(
    lambda x: self._get_nearest_examples_batch(x['text']), batched=True
)
# the unexpected result:
dataset2.list_indexes()
# []
self.dataset.list_indexes()
# ['embeddings']
```
In case something is wrong with my `_get_nearest_examples_batch()`, it looks like this:
```python
def _get_nearest_examples_batch(self, examples, k=5):
queries = embed(examples)
scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(self.faiss_column, queries, k)
return {
'neighbors': [batch['text'] for batch in retrievals_batch],
'scores': scores_batch
}
```
## Expected results
`map` shouldn't drop the indexes; in other words, indexes should be carried over to the generated dataset.
## Actual results
`map` drops the indexes.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.12
- PyArrow version: 7.0.0
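Until `map` preserves indexes, a minimal workaround sketch is to rebuild the index afterwards (assumes `faiss-cpu` is installed; the column names here are illustrative):
```python
import numpy as np
from datasets import Dataset

# Toy dataset with an embedding column.
ds = Dataset.from_dict(
    {"text": ["a", "b"], "emb": [np.random.rand(8).astype("float32") for _ in range(2)]}
)
ds.add_faiss_index(column="emb")
mapped = ds.map(lambda x: x)           # map() silently drops the index
mapped.add_faiss_index(column="emb")   # workaround: rebuild it afterwards
print(mapped.list_indexes())           # ['emb']
```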
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3769/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3769/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3764
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3764/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3764/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3764/events
|
https://github.com/huggingface/datasets/issues/3764
| 1,145,107,050
|
I_kwDODunzps5EQPJq
| 3,764
|
!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77545307?v=4",
"events_url": "https://api.github.com/users/LesiaFedorenko/events{/privacy}",
"followers_url": "https://api.github.com/users/LesiaFedorenko/followers",
"following_url": "https://api.github.com/users/LesiaFedorenko/following{/other_user}",
"gists_url": "https://api.github.com/users/LesiaFedorenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LesiaFedorenko",
"id": 77545307,
"login": "LesiaFedorenko",
"node_id": "MDQ6VXNlcjc3NTQ1MzA3",
"organizations_url": "https://api.github.com/users/LesiaFedorenko/orgs",
"received_events_url": "https://api.github.com/users/LesiaFedorenko/received_events",
"repos_url": "https://api.github.com/users/LesiaFedorenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LesiaFedorenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LesiaFedorenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LesiaFedorenko",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-02-20T19:05:43
| 2022-02-21T08:55:58
| 2022-02-21T08:55:58
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3764/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3764/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3763
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3763/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3763/events
|
https://github.com/huggingface/datasets/issues/3763
| 1,145,099,878
|
I_kwDODunzps5EQNZm
| 3,763
|
It's not possible to download the `20200501.pt` dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jvanz",
"id": 1514798,
"login": "jvanz",
"node_id": "MDQ6VXNlcjE1MTQ3OTg=",
"organizations_url": "https://api.github.com/users/jvanz/orgs",
"received_events_url": "https://api.github.com/users/jvanz/received_events",
"repos_url": "https://api.github.com/users/jvanz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jvanz",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```",
"> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n"
] | 2022-02-20T18:34:58
| 2022-02-21T12:06:12
| 2022-02-21T09:25:06
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
The `20200501.pt` dataset cannot be downloaded: the dump for that date is no longer hosted.
The available dump dates: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
## Expected results
I expect to download the dataset locally.
## Actual results
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475...
/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features.
warnings.warn(
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare
super()._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested
mapped = [
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested
return function(data_struct)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json
```
## Environment info
```
- `datasets` version: 1.18.3
- Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
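As a quick sanity check, the missing dump can be confirmed outside `datasets` by requesting the `dumpstatus.json` URL from the traceback:
```python
import requests

# The URL from the traceback; a 404 confirms the 2020-05-01 dump is gone.
url = "https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json"
print(requests.get(url).status_code)
```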
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3763/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3762
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3762/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3762/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3762/events
|
https://github.com/huggingface/datasets/issues/3762
| 1,144,849,557
|
I_kwDODunzps5EPQSV
| 3,762
|
`Dataset.class_encode` should support custom class names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_encode_column` arguments).\r\n\r\nAnd the latter made me think of `Dataset.cast_column`...\r\n\r\nMaybe better to have some others' opinions @lhoestq @mariosasko ",
"Hi @Dref360! You can use [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset.align_labels_with_mapping) after `Dataset.class_encode_column` to assign a different mapping of labels to ids.\r\n\r\n@albertvillanova I'd like to avoid adding more complexity to the API where it's not (absolutely) needed, so I don't think introducing a new param in `Dataset.class_encode_column` is a good idea.\r\n\r\n",
"I wasn't aware that it existed thank you for the link.\n\nClosing then! "
] | 2022-02-19T21:21:45
| 2022-02-21T12:16:35
| 2022-02-21T12:16:35
|
CONTRIBUTOR
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235
**Describe the solution you'd like**
I would like to add an **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values.
**Describe alternatives you've considered**
One can use `map` instead, but I find it harder to read.
```python
CLASS_NAMES = ['apple', 'orange', 'potato']
ds = ds.map(lambda item: {label_column: CLASS_NAMES.index(item[label_column])})  # map functions must return a dict
# Proposition
ds = ds.class_encode_column(label_column, CLASS_NAMES)
```
**Additional context**
I can make the PR if this feature is accepted.
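For reference, a sketch of the `align_labels_with_mapping` approach suggested in the comments (toy labels, assuming a recent `datasets` version):
```python
from datasets import Dataset

ds = Dataset.from_dict({"label": ["apple", "orange", "potato"]})
ds = ds.class_encode_column("label")  # ids assigned in sorted order
# Re-map the ids to a custom order instead of the alphabetical default.
ds = ds.align_labels_with_mapping({"potato": 0, "orange": 1, "apple": 2}, "label")
print(ds.features["label"].names)  # names now follow the custom id order
```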
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3762/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3762/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3761
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3761/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3761/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3761/events
|
https://github.com/huggingface/datasets/issues/3761
| 1,144,830,702
|
I_kwDODunzps5EPLru
| 3,761
|
Know your data for HF hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4",
"events_url": "https://api.github.com/users/Muhtasham/events{/privacy}",
"followers_url": "https://api.github.com/users/Muhtasham/followers",
"following_url": "https://api.github.com/users/Muhtasham/following{/other_user}",
"gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muhtasham",
"id": 20128202,
"login": "Muhtasham",
"node_id": "MDQ6VXNlcjIwMTI4MjAy",
"organizations_url": "https://api.github.com/users/Muhtasham/orgs",
"received_events_url": "https://api.github.com/users/Muhtasham/received_events",
"repos_url": "https://api.github.com/users/Muhtasham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muhtasham",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @Muhtasham you should take a look at https://huggingface.co/blog/data-measurements-tool and accompanying demo app at https://huggingface.co/spaces/huggingface/data-measurements-tool\r\n\r\nWe would be interested in your feedback. cc @meg-huggingface @sashavor @yjernite "
] | 2022-02-19T19:48:47
| 2022-02-21T14:15:23
| 2022-02-21T14:15:23
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
**Is your feature request related to a problem? Please describe.**
Would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues.
**Describe the solution you'd like**
Something like https://knowyourdata.withgoogle.com/ for HF hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4",
"events_url": "https://api.github.com/users/Muhtasham/events{/privacy}",
"followers_url": "https://api.github.com/users/Muhtasham/followers",
"following_url": "https://api.github.com/users/Muhtasham/following{/other_user}",
"gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muhtasham",
"id": 20128202,
"login": "Muhtasham",
"node_id": "MDQ6VXNlcjIwMTI4MjAy",
"organizations_url": "https://api.github.com/users/Muhtasham/orgs",
"received_events_url": "https://api.github.com/users/Muhtasham/received_events",
"repos_url": "https://api.github.com/users/Muhtasham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muhtasham",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3761/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3761/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3760/events
|
https://github.com/huggingface/datasets/issues/3760
| 1,144,804,558
|
I_kwDODunzps5EPFTO
| 3,760
|
Unable to view the Gradio flagged callback dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4",
"events_url": "https://api.github.com/users/kingabzpro/events{/privacy}",
"followers_url": "https://api.github.com/users/kingabzpro/followers",
"following_url": "https://api.github.com/users/kingabzpro/following{/other_user}",
"gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kingabzpro",
"id": 36753484,
"login": "kingabzpro",
"node_id": "MDQ6VXNlcjM2NzUzNDg0",
"organizations_url": "https://api.github.com/users/kingabzpro/orgs",
"received_events_url": "https://api.github.com/users/kingabzpro/received_events",
"repos_url": "https://api.github.com/users/kingabzpro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kingabzpro",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @kingabzpro.\r\n\r\nI think you need to create a loading script that creates the dataset from the CSV file and the image paths.\r\n\r\nAs example, you could have a look at the Food-101 dataset: https://huggingface.co/datasets/food101\r\n- Loading script: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nOnce the loading script is created, the viewer will show a previsualization of your dataset. ",
"@albertvillanova I don't think this is the issue. I have created another dataset with similar files and format and it works. https://huggingface.co/datasets/kingabzpro/savtadepth-flags-V2",
"Yes, you are right, that was not the issue.\r\n\r\nJust take into account that sometimes the viewer can take some time until it shows the preview of the dataset.\r\nAfter some time, yours is finally properly shown: https://huggingface.co/datasets/kingabzpro/savtadepth-flags",
"The problem was resolved by deleted the dataset and creating new one with similar name and then clicking on flag button.",
"I think if you make manual changes to dataset the whole system breaks. "
] | 2022-02-19T17:45:08
| 2022-03-22T07:12:11
| 2022-03-22T07:12:11
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://huggingface.co/spaces/kingabzpro/savtadepth.*
Am I the one who added this dataset? Yes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3760/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3758/events
|
https://github.com/huggingface/datasets/issues/3758
| 1,143,366,393
|
I_kwDODunzps5EJmL5
| 3,758
|
head_qa file missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"https://user-images.githubusercontent.com/1676121/156000224-fd3f62c6-8b54-4df1-8911-bdcb0bac3f1a.png\">\r\n"
] | 2022-02-18T16:32:43
| 2022-02-28T14:29:18
| 2022-02-21T14:39:19
|
COLLABORATOR
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expected results
The dataset should be loaded
## Actual results
```
Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Downloading data: 2.21kB [00:00, 2.05MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
```
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
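The Google Drive virus-scan cause mentioned in the comments can be checked directly; at the time of the report, the URL served an HTML warning page instead of the JSON file:
```python
import requests

# The Google Drive URL from the loading script; an HTML prefix means we
# received the virus-scan warning page rather than the data file.
url = "https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t"
print(requests.get(url).content[:60])
```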
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3758/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3756/events
|
https://github.com/huggingface/datasets/issues/3756
| 1,143,273,825
|
I_kwDODunzps5EJPlh
| 3,756
|
Images get decoded when using `map()` with `input_columns` argument on a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kklemon",
"id": 1430243,
"login": "kklemon",
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"repos_url": "https://api.github.com/users/kklemon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kklemon",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"Hi! If I'm not mistaken, this behavior is intentional, but I agree it could be more intuitive.\r\n\r\n@albertvillanova Do you remember why you decided not to decode columns in the `Audio` feature PR when `input_columns` is not `None`? IMO we should decode those columns, and we don't even have to use lazy structures here because the user explicitly requires them in the map transform. \r\n\r\ncc @lhoestq for visibility",
"I think I excluded to decorate the function when `input_columns` were passed as a quick fix for some non-passing tests: \r\n- https://github.com/huggingface/datasets/pull/2324/commits/9d7c3e8fa53e23ec636859b4407eeec904b1b3f9\r\n\r\nThat PR was quite complex and I decided to focus on the main feature requests, leaving refinements for subsequent PRs.\r\n\r\nNote that when `input_columns` are passed, the signature of the function is effectively changed, while the decorated function expects an item (whether an example or a batch) as first arg (which is not the case when passing `input_columns`.\r\n\r\nI agree we should consider supporting the case when `input_columns` are passed."
] | 2022-02-18T15:35:38
| 2022-12-13T16:59:06
| 2022-12-13T16:59:06
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. As expected, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image data is passed as raw byte representation to the mapping function.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torchvision import transforms
from PIL.Image import Image
dataset = load_dataset('mnist', split='train')
def transform_all_columns(example):
    # example['image'] arrives decoded as a PIL Image
assert isinstance(example['image'], Image)
return example
def transform_image_column(image):
    # image should be decoded here, but arrives as raw bytes, so this assert fails
assert isinstance(image, Image)
return image
# single-sample dataset for debugging purposes
dev = dataset.select([0])
dev.map(transform_all_columns)
dev.map(transform_image_column, input_columns='image')
```
## Expected results
Image data should be passed in decoded form, i.e. as PIL Image objects to the mapping function unless the `decode` attribute on the image feature is set to `False`.
## Actual results
The mapping function receives images as raw byte data.
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.32
- Python version: 3.8.0b4
- PyArrow version: 7.0.0
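Until this behavior changes, a minimal workaround sketch is to skip `input_columns` and read the decoded image from the full example dict:
```python
from datasets import load_dataset

dataset = load_dataset("mnist", split="train").select([0])

def transform(example):
    image = example["image"]       # decoded PIL.Image.Image here
    return {"width": image.width}  # use the decoded image as needed

dataset = dataset.map(transform)
```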
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3756/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3755
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3755/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3755/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3755/events
|
https://github.com/huggingface/datasets/issues/3755
| 1,143,032,961
|
I_kwDODunzps5EIUyB
| 3,755
|
Cannot preview dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frascuchon",
"id": 2518789,
"login": "frascuchon",
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frascuchon",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting. The dataset viewer depends on some backend treatments, and for now, they might take some hours to get processed. We're working on improving it.",
"It has finally been processed. Thanks for the patience.",
"Thanks for the info @severo !"
] | 2022-02-18T13:06:45
| 2022-02-19T14:30:28
| 2022-02-18T15:41:33
|
NONE
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Dataset viewer issue for '*rubrix/news*'
**Link:** https://huggingface.co/datasets/rubrix/news *(link to the dataset viewer page)*
Cannot see the dataset preview:
```
Status code: 400
Exception: Status400Error
Message: Not found. Cache is waiting to be refreshed.
```
Am I the one who added this dataset? No
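For what it's worth, the viewer cache lives server-side, so the data itself should still be loadable directly in the meantime; a minimal sketch (the `train` split name below is an assumption):
```python
from datasets import load_dataset

# Bypasses the viewer entirely; assumes the dataset exposes a "train" split
ds = load_dataset("rubrix/news", split="train")
print(ds[0])
```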
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3755/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3755/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3754
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3754/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3754/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3754/events
|
https://github.com/huggingface/datasets/issues/3754
| 1,142,886,536
|
I_kwDODunzps5EHxCI
| 3,754
|
Overflowing indices in `select`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Fixed on master (see https://github.com/huggingface/datasets/pull/3719).",
"Awesome, I did not find that one! Thanks."
] | 2022-02-18T11:30:52
| 2022-02-18T11:38:23
| 2022-02-18T11:38:23
|
MEMBER
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
The `Dataset.select` function accepts indices that are larger than the dataset size and appears to effectively use `index % len(ds)`.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"test": [1,2,3]})
ds = ds.select(range(5))
print(ds)
print()
print(ds["test"])
```
Result:
```python
Dataset({
features: ['test'],
num_rows: 5
})
[1, 2, 3, 1, 2]
```
This behaviour is not documented and can lead to unexpected results, for example when taking a sample larger than the dataset and thus silently creating a lot of duplicates.
## Expected results
I think this should throw an error, or at least a very loud warning:
```python
IndexError: Invalid key: 5 is out of bounds for size 3
```
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.0.1-x86_64-i386-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
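## Workaround
Until this is fixed, a caller can guard against the wraparound by dropping out-of-range indices before calling `select`; a minimal sketch (the bounds check is the workaround, not library behaviour):
```python
from datasets import Dataset

ds = Dataset.from_dict({"test": [1, 2, 3]})

# Drop any index that falls outside the dataset before selecting
indices = [i for i in range(5) if i < len(ds)]
ds_safe = ds.select(indices)
print(ds_safe["test"])  # [1, 2, 3] -- no silent duplicates
```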
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3754/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3754/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3753/events
|
https://github.com/huggingface/datasets/issues/3753
| 1,142,821,144
|
I_kwDODunzps5EHhEY
| 3,753
|
Expanding streaming capabilities
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Related to: https://github.com/huggingface/datasets/issues/3444",
"Cool ! `filter` will be very useful. There can be a filter that you can apply on a streaming dataset:\r\n```python\r\nload_dataset(..., streaming=True).filter(lambda x: x[\"lang\"] == \"sw\")\r\n```\r\n\r\nOtherwise if you want to apply a filter on the source files that are going to be used for streaming, the logic has to be impIemented directly in the dataset script, or if there's no dataset script this can be done with pattern matching\r\n```python\r\nload_dataset(..., lang=\"sw\") # if the dataset script supports this parameter\r\nload_dataset(..., data_files=\"data/lang=sw/*\") # if there's no dataset script, but only data files\r\n```\r\n\r\n--------------\r\n\r\nHere are also some additional ideas of API to convert from iterable to map-style dataset:\r\n```python\r\non_disk_dataset = streaming_dataset.to_disk()\r\non_disk_dataset = streaming_dataset.to_disk(path=\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = streaming_dataset.take(100).to_memory() # to experiment without having to write files\r\n```\r\n--------------\r\n\r\nFinally regarding `push_to_hub`, we can replace `batch_size` by `shard_size` (same API as for on-disk datasets). The default is 500MB per file\r\n\r\nLet me know what you think !",
"Regarding conversion, I'd also ask for some kind of equivalent to `save_to_disk` for an `IterableDataset`.\r\n\r\nSimilarly to the streaming to hub idea, my use case would be to define a sequence of dataset transforms via `.map()`, using an `IterableDataset` as the input (so processing could start without doing whole download up-front), but streaming the resultant processed dataset just to disk.",
"That makes sense @athewsey , thanks for the suggestion :)\r\n\r\nMaybe instead of the `to_disk` we could simply have `save_to_disk` instead:\r\n```python\r\nstreaming_dataset.save_to_disk(\"path/to/my/dataset/dir\")\r\non_disk_dataset = load_from_disk(\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = Dataset.from_list(list(streaming_dataset.take(100))) # to experiment without having to write files\r\n```",
"Any updates on this?",
"So far are implemented: `IterableDataset.filter()` and `Dataset.to_iterable_dataset()`.\r\n\r\nStill missing: `IterableDataset.push_to_hub()` - though there is a hack to write on disk and then push to hub using\r\n\r\n```python\r\nds_on_disk = Dataset.from_generator(streaming_ds.__iter__) # stream to disk\r\nds_on_disk.push_to_hub(...)\r\n```"
] | 2022-02-18T10:45:41
| 2024-04-25T12:16:13
| null |
MEMBER
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific licenses
- other custom logic to get a subset
The only way to achieve this at the moment is, I think, to write a custom loading script and implement the filters there; a client-side stopgap is sketched below.
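Until a built-in `filter` exists, the same effect can be had with a plain generator wrapped around the stream; a minimal sketch (the dataset name and the `lang` field reuse the placeholders from this issue, and the `train` split is assumed):
```python
from datasets import load_dataset

ds = load_dataset("some_large_dataset", split="train", streaming=True)

def filtered(stream, predicate):
    # Plain-Python filtering over the streamed examples
    for example in stream:
        if predicate(example):
            yield example

french_only = filtered(ds, lambda x: x["lang"] == "fr")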
## `IterableDataset` to `Dataset` conversion
In combination with the above filter, a way to "play" the whole stream would be useful. Often one might filter the dataset down to a manageable size for experimentation; at that point streaming mode is no longer necessary, and it would be useful to play through the whole stream to create a normal `Dataset` with all its benefits.
```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(lambda x: x["lang"] == "fr")
ds_filter = ds_filter.stream() # here the `IterableDataset` is converted to a `Dataset`
```
Naturally, this could be expanded with `stream(n=1000)`, which creates a `Dataset` from the first `n` elements, similar to `take`.
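Until such an API exists, the stream can be materialized by hand; a small sketch (caps the stream at 1000 examples with `itertools.islice` and assumes every example has the same keys):
```python
from itertools import islice
from datasets import Dataset, load_dataset

streaming_ds = load_dataset("some_large_dataset", split="train", streaming=True)

# Pull the first 1000 examples out of the stream...
rows = list(islice(iter(streaming_ds), 1000))

# ...then pivot them into columns for a regular map-style Dataset
columns = {key: [row[key] for row in rows] for key in rows[0]}
ds = Dataset.from_dict(columns)
```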
## Stream to the Hub
While streaming allows using a dataset as is without saving the whole dataset on the local machine, it is currently not possible to process a dataset and add it to the hub. The only way to do this is by downloading the full dataset and saving the processed dataset again before pushing it to the hub. The API could look something like:
```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(some_filter_func)
ds_processed = ds_filter.map(some_processing_func)
ds_processed.push_to_hub("new_better_dataset", batch_size=100_000)
```
Under the hood this could be done by processing and aggregating `batch_size` elements and then pushing that batch as a single file to the hub. With this functionality one could process and create TB-scale datasets while only requiring `batch_size` worth of local disk space.
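For reference, once `IterableDataset.filter` and `Dataset.from_generator` landed (see the comments on this issue), a two-step version of this became possible: stream everything to disk first, then push from there. A sketch with trivial placeholder transforms standing in for real filter/processing logic:
```python
from datasets import Dataset, load_dataset

ds = load_dataset("some_large_dataset", split="train", streaming=True)

# Trivial placeholders standing in for real filter/processing logic
keep_example = lambda x: True
process_example = lambda x: x

ds_processed = ds.filter(keep_example).map(process_example)

# Stream everything to disk via a generator, then push from there
on_disk = Dataset.from_generator(ds_processed.__iter__)
on_disk.push_to_hub("new_better_dataset")
```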
cc @lhoestq @albertvillanova
| null |
{
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3753/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3753/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3750
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3750/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3750/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3750/events
|
https://github.com/huggingface/datasets/issues/3750
| 1,142,408,331
|
I_kwDODunzps5EF8SL
| 3,750
|
`NonMatchingSplitsSizesError` for cats_vs_dogs dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaketae",
"id": 25360440,
"login": "jaketae",
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"repos_url": "https://api.github.com/users/jaketae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaketae",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thnaks for reporting @jaketae. We are fixing it. "
] | 2022-02-18T05:46:39
| 2022-02-18T14:56:11
| 2022-02-18T14:56:11
|
CONTRIBUTOR
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
## Describe the bug
Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
Loading is successful.
## Actual results
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7262410, num_examples=23410, dataset_name='cats_vs_dogs')}]
```
## Environment info
Reproduced on a fresh [Colab notebook](https://colab.research.google.com/drive/13GTvrSJbBGvL2ybDdXCBZwATd6FOkMub?usp=sharing).
## Additional Context
Originally reported in https://github.com/huggingface/transformers/issues/15698.
cc @mariosasko
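As a stopgap while the split metadata is being fixed, verification can be skipped; `ignore_verifications` was the flag for this in the 1.18 line:
```python
from datasets import load_dataset

# Skips checksum and split-size verification against the stale metadata
dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)
```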
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3750/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3750/timeline
| null |
completed
| null | null | false
|