SQuTR: A Robustness Benchmark for Spoken Query to Text Retrieval

GitHub

SQuTR (Spoken Query-to-Text Retrieval) is a large-scale bilingual benchmark designed to evaluate the robustness of information retrieval systems under realistic acoustic perturbations.

While speech interaction is becoming a primary interface for IR systems, retrieval performance often degrades significantly in noisy environments. SQuTR provides a standardized evaluation framework: 37,317 complex queries across 6 domains, synthesized with 200 real speaker voices and evaluated under 4 graded noise levels.


🌟 Key Features

  • Bilingual & Multi-Domain: Includes 6 subsets from MTEB and C-MTEB covering Wikipedia, Finance, Medical, and Encyclopedia domains.
  • High-Fidelity Synthesis: Generated using CosyVoice-3 with diverse speaker profiles, totaling 190.4 hours of audio.
  • Robustness Evaluation: Explicitly models four acoustic conditions: Clean, Low Noise (20dB), Medium Noise (10dB), and High Noise (0dB).
  • MTEB Compatibility: Follows standard JSONL/BEIR formatting for seamless integration into modern retrieval pipelines (see the record sketch after this list).
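
As a rough sketch of what one record in each file looks like, the snippet below follows common BEIR conventions (e.g. _id/title/text fields); the field names and values are illustrative assumptions and should be verified against the released files.

# Illustrative BEIR-style records; field names are assumptions based on the
# standard BEIR layout -- check the actual SQuTR files before relying on them.

corpus_record = {    # one line of corpus.jsonl
    "_id": "doc_000001",
    "title": "Example document title",
    "text": "Full document body used for retrieval ...",
}

query_record = {     # one line of queries.jsonl
    "_id": "q_000001",
    "text": "what is the expense ratio of an index fund",
}

qrels_row = "q_000001\tdoc_000001\t1"   # qrels TSV: query-id, corpus-id, relevance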

📂 Dataset Structure

The dataset is organized by language and subset. Each subset (e.g., fiqa) contains the original text documents and the synthesized audio queries under different SNR conditions.

SQuTR/
└── source_data/
    ├── en/ (English Datasets: fiqa, hotpotqa, nq)
    │   └── [subset_name]/
    │       ├── audio_clean/              # Clean original audio files (.wav)
    │       ├── audio_noise_snr_0/        # Audio with 0dB Signal-to-Noise Ratio
    │       ├── audio_noise_snr_10/       # Audio with 10dB Signal-to-Noise Ratio
    │       ├── audio_noise_snr_20/       # Audio with 20dB Signal-to-Noise Ratio
    │       ├── qrels/                    # Query relevance judgments (TSV/JSONL)
    │       ├── corpus.jsonl              # Text corpus documents
    │       ├── queries.jsonl             # Original text queries
    │       ├── queries_with_audio_clean.jsonl         # Metadata mapping text to clean audio
    │       ├── queries_with_audio_noise_snr_0.jsonl   # Metadata for 0dB noise queries
    │       ├── queries_with_audio_noise_snr_10.jsonl  # Metadata for 10dB noise queries
    │       └── queries_with_audio_noise_snr_20.jsonl  # Metadata for 20dB noise queries
    └── zh/ (Chinese Datasets: DuRetrieval, MedicalRetrieval, T2Retrieval)
        └── [subset_name]/
            └── (Same structure as above)
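
To illustrate how these files fit together, the sketch below loads the corpus and the 10 dB noisy-query metadata for one subset and resolves the referenced audio paths. This is a minimal sketch only: the local root path and the "audio_path" field name are assumptions, so adjust them to match the files you actually download.

import json
from pathlib import Path

# Hypothetical local layout; point this at your downloaded copy.
root = Path("SQuTR/source_data/en/fiqa")

def read_jsonl(path):
    # One JSON object per line.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

corpus = {doc["_id"]: doc for doc in read_jsonl(root / "corpus.jsonl")}

# Metadata for queries rendered as 10 dB SNR audio. The "audio_path" key is
# an assumption -- inspect the JSONL to find the real field name.
noisy_queries = read_jsonl(root / "queries_with_audio_noise_snr_10.jsonl")

for q in noisy_queries[:3]:
    wav = root / "audio_noise_snr_10" / q.get("audio_path", f"{q['_id']}.wav")
    print(q["_id"], q.get("text", ""), wav)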

💾 How to Use the Dataset

You can download the dataset directly from this Hugging Face repository. To use the evaluation scripts, please refer to our GitHub Repository.
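
For example, a full or partial snapshot can be pulled with the huggingface_hub client; the repo_id below is a placeholder and should be replaced with this dataset's actual Hub path.

from huggingface_hub import snapshot_download

# Placeholder repo_id -- replace with this dataset's actual Hub path.
local_dir = snapshot_download(
    repo_id="<org>/SQuTR",
    repo_type="dataset",
    # Optionally restrict the download to a single subset, e.g.:
    # allow_patterns=["source_data/en/fiqa/*"],
)
print("Downloaded to:", local_dir)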
