| html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 70-51.8k chars) | body (string, 0-29.8k chars) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (sequence) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2945 | Protect master branch | @lhoestq now the 2 are implemented.
Please note that for the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to... | After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propo... | 64 | Protect master branch
After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f... | [
-0.1553196907, -0.1002302244, -0.0703211576, -0.0798171014, -0.1042536274, -0.1879872978, 0.010403702, 0.2728639543, -0.0098490976, -0.0841396898, 0.2926149666, -0.0778749511, -0.1324568242, 0.2158906311, -0.0562291816, 0.1705456823, 0.2003229409, -0.0407976881, -0.0923327506, ... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.
To avoid other users from having this issue we could make the caching differentiate the two, what do you think ? | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 50 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... | [
-0.3049952984, 0.124387905, -0.0465586074, 0.2368963808, 0.1517290622, -0.0666599274, -0.0069408538, 0.3285394311, 0.173739031, 0.0293815993, -0.2313866317, 0.3581927419, -0.1503304243, 0.2527367473, -0.2977858186, -0.1378118992, 0.0444690213, 0.0321245417, -0.0614950396, 0.166... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 28 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... | [
-0.3049952984, 0.124387905, -0.0465586074, 0.2368963808, 0.1517290622, -0.0666599274, -0.0069408538, 0.3285394311, 0.173739031, 0.0293815993, -0.2313866317, 0.3581927419, -0.1503304243, 0.2527367473, -0.2977858186, -0.1378118992, 0.0444690213, 0.0321245417, -0.0614950396, 0.166... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 22 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... | [
-0.3049952984, 0.124387905, -0.0465586074, 0.2368963808, 0.1517290622, -0.0666599274, -0.0069408538, 0.3285394311, 0.173739031, 0.0293815993, -0.2313866317, 0.3581927419, -0.1503304243, 0.2527367473, -0.2977858186, -0.1378118992, 0.0444690213, 0.0321245417, -0.0614950396, 0.166... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | I just merged a fix, let me know if you're still having this kind of issues :)
We'll do a release soon to make this fix available | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 27 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... | [
-0.3049952984, 0.124387905, -0.0465586074, 0.2368963808, 0.1517290622, -0.0666599274, -0.0069408538, 0.3285394311, 0.173739031, 0.0293815993, -0.2313866317, 0.3581927419, -0.1503304243, 0.2527367473, -0.2977858186, -0.1378118992, 0.0444690213, 0.0321245417, -0.0614950396, 0.166... |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and prep... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 109 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... | [
-0.2289306819, 0.3834536374, 0.0403455123, 0.2550398707, -0.0121114748, 0.2622570992, 0.5036686659, 0.1329528391, 0.3442815542, 0.1357177198, -0.1089740172, 0.0772838667, 0.073150292, -0.0056993067, -0.1256227195, 0.0789284036, 0.0364450403, -0.0053883456, 0.2149499953, 0.10395... |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename an... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 194 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... | [
-0.2289306819, 0.3834536374, 0.0403455123, 0.2550398707, -0.0121114748, 0.2622570992, 0.5036686659, 0.1329528391, 0.3442815542, 0.1357177198, -0.1089740172, 0.0772838667, 0.073150292, -0.0056993067, -0.1256227195, 0.0789284036, 0.0364450403, -0.0053883456, 0.2149499953, 0.10395... |
https://github.com/huggingface/datasets/issues/2934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution... | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one refe... | 99 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="lab... | [
0.0456906855, 0.3345080316, 0.1148889512, 0.0804506093, 0.2304305434, 0.1188877076, 0.3988847733, 0.2548753023, -0.1099168733, 0.2674389482, -0.3353856802, 0.4040720761, -0.1644808352, -0.1074624807, -0.0342663452, -0.1066429764, 0.0511107743, 0.145164758, -0.1666639894, -0.122... |
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so not sure why it's considered too long for your filesystem.
(also note that the lock file... | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 39 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... | [
0.0493387729, 0.0920009762, -0.0714941099, 0.401848197, 0.4151671231, 0.2358948588, 0.6366623044, 0.2435027659, 0.2364588678, 0.2313295007, 0.0185687691, 0.0077437051, -0.1442687213, -0.3100363016, -0.2015361935, -0.244314611, -0.158109054, -0.0416219905, -0.1770785898, 0.21771... |
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system be... | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 67 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... | [
0.0493387729, 0.0920009762, -0.0714941099, 0.401848197, 0.4151671231, 0.2358948588, 0.6366623044, 0.2435027659, 0.2364588678, 0.2313295007, 0.0185687691, 0.0077437051, -0.1442687213, -0.3100363016, -0.2015361935, -0.244314611, -0.158109054, -0.0416219905, -0.1770785898, 0.21771... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_... | 26 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```... | [
-0.3864648342, -0.2204954028, 0.0894812346, 0.4387411475, 0.2157088816, 0.1229232103, -0.0369099304, 0.3093880415, 0.2422772944, 0.0955025926, -0.2264312804, 0.4261532128, -0.0641759783, 0.3035329282, -0.0094657149, -0.2584783137, -0.0045060329, 0.2337660491, -0.1049253345, 0.1... |
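The `embeddings` column in the rows above stores one float vector per issue comment, which is typically used for semantic retrieval over the issues. Below is a minimal sketch of cosine-similarity scoring between two such vectors, using only the first few truncated components visible in the preview; the full vectors are longer, so the resulting score is illustrative only, not a real similarity for these issues.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product divided by the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# First four components of two embeddings from the preview (truncated; the
# complete vectors have more dimensions, so this score is illustrative only).
protect_master = [-0.1553196907, -0.1002302244, -0.0703211576, -0.0798171014]
filter_issue = [-0.3049952984, 0.124387905, -0.0465586074, 0.2368963808]

score = cosine_similarity(protect_master, filter_issue)
print(f"similarity: {score:.4f}")
```

In practice one would compute this against every row's full embedding and sort descending to find the nearest issues to a query vector.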