Dataset broken by latest update? #27
by Rijgersberg - opened
I think the latest update to the dataset broke the generation of the test set of at least nld_Latn and fry_Latn.
```python
from datasets import load_dataset
from pprint import pprint

dataset = load_dataset('HuggingFaceFW/finepdfs', 'fry_Latn', split='train')
print(len(dataset))
for row in dataset.select(range(10)):
    pprint(row)
```
Gives me an error:
Generating train split: 100%|██████████| 5459/5459 [00:00<00:00, 56453.39 examples/s]
Generating test split: 0%| | 0/47 [00:00<?, ? examples/s]
Failed to read file '/path/to/my/.cache/huggingface/hub/datasets--HuggingFaceFW--finepdfs/snapshots/d388ccc2206e1de7e3daefeb11928e735d02ce56/data/fry_Latn/test/000_00000.parquet' with error CastError: Couldn't cast
text: string
id: string
dump: string
url: string
date: string
file_path: string
offset: int64
token_count: int64
language: string
page_average_lid: string
page_average_lid_score: double
full_doc_lid: string
full_doc_lid_score: double
per_page_languages: list<element: string>
child 0, element: string
is_truncated: bool
extractor: dictionary<values=string, indices=int8, ordered=0>
page_ends: list<element: int64>
child 0, element: int64
fw_edu_scores: list<element: double>
child 0, element: double
fw_edu_v2_score: list<element: double>
child 0, element: double
dclm_scores: list<element: double>
child 0, element: double
ocr_quality_scores: list<element: double>
child 0, element: double
minhash_cluster_size: int64
duplicate_count: int64
-- schema metadata --
extractor_categories: 'docling,rolmOCR'
to
{'text': Value('string'), 'id': Value('string'), 'dump': Value('string'), 'url': Value('string'), 'date': Value('string'), 'file_path': Value('string'), 'offset': Value('int64'), 'token_count': Value('int64'), 'language': Value('string'), 'page_average_lid': Value('string'), 'page_average_lid_score': Value('float64'), 'full_doc_lid': Value('string'), 'full_doc_lid_score': Value('float64'), 'per_page_languages': List(Value('string')), 'is_truncated': Value('bool'), 'extractor': Value('string'), 'page_ends': List(Value('int64'))}
because column names don't match
Generating test split: 0%| | 0/47 [00:00<?, ? examples/s]
Traceback (most recent call last):
[...]
Ironically, this is also triggered when only the train split is loaded. I would try the previous revision of the dataset, but it was super-squashed, so I can't.
In case it is relevant: I'm using datasets==4.4.1, but I saw the same behaviour on 4.1 too.
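In case it helps anyone confirm the mismatch more directly, here is a rough sketch (assuming pyarrow and huggingface_hub are installed; the shard name is copied from the error above) that downloads just the offending test shard and prints its raw parquet schema, which shows the extra columns the cast trips over:

```python
# Sketch: inspect the raw schema of the shard named in the error above.
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

# Download only the offending test shard, not the whole config.
path = hf_hub_download(
    repo_id="HuggingFaceFW/finepdfs",
    filename="data/fry_Latn/test/000_00000.parquet",
    repo_type="dataset",
)

# The columns beyond the expected features (fw_edu_scores, dclm_scores,
# ocr_quality_scores, minhash_cluster_size, ...) are what make the cast fail.
print(pq.read_schema(path))
```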
Taking a look.
Did you manage to find anything? :)
Yup, I kept the same parquet schema for all languages, despite some not containing all the required metadata. I am currently re-uploading to fix that!
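For anyone blocked until the re-upload lands, a possible stop-gap (just a sketch, assuming pyarrow, pandas, and huggingface_hub are available, with the column list taken from the expected features in the error above) is to read an affected shard directly and keep only the declared columns:

```python
# Sketch: read an affected shard with only the columns the config declares,
# which sidesteps the CastError caused by the extra metadata columns.
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

# Column list copied from the expected features in the error message.
expected_columns = [
    "text", "id", "dump", "url", "date", "file_path", "offset", "token_count",
    "language", "page_average_lid", "page_average_lid_score", "full_doc_lid",
    "full_doc_lid_score", "per_page_languages", "is_truncated", "extractor",
    "page_ends",
]

path = hf_hub_download(
    repo_id="HuggingFaceFW/finepdfs",
    filename="data/fry_Latn/test/000_00000.parquet",
    repo_type="dataset",
)

# Reading only these columns avoids the schema cast against the full file.
df = pq.read_table(path, columns=expected_columns).to_pandas()
print(len(df))
print(df.iloc[0]["id"])
```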
hynky changed discussion status to closed
Should be fixed now
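If anyone still sees the error on a machine that hit it before, it may just be the previously cached shards; forcing a fresh download along these lines should rule that out (download_mode is a standard load_dataset argument):

```python
# Sketch: reload while bypassing the local cache so previously downloaded
# (broken) shards are not reused.
from datasets import load_dataset

dataset = load_dataset(
    "HuggingFaceFW/finepdfs",
    "fry_Latn",
    split="train",
    download_mode="force_redownload",
)
print(len(dataset))
```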
hynky changed discussion status to open
hynky changed discussion status to closed
Thanks, it works now!