Carrier-Agnostic Speaker Verification Evaluation
Benchmark for evaluating speaker verification robustness across codecs, microphones, noise, and playback chains.
Carrier-Agnostic Speaker Embedding (CASE) Benchmark: a benchmark for evaluating speaker verification systems under real-world acoustic carrier conditions.
CASE Benchmark tests speaker embedding models across 24 protocols covering:

- Clean baseline (1 protocol)
- Codecs (7 protocols)
- Microphones (7 protocols)
- Noise at several SNR levels (5 protocols)
- Reverberation (1 protocol)
- Playback chains (3 protocols)

To download the benchmark:
from case_benchmark.download import download_benchmark
# Download full benchmark (~3.1GB compressed)
download_benchmark("./benchmark")
# Download only specific conditions
download_benchmark("./benchmark", conditions=["clean", "codec"])
# Download only VoxCeleb1-O dataset
download_benchmark("./benchmark", datasets=["voxceleb1_o"])
Audio files are distributed as one compressed archive per dataset and condition:
audio/
├── voxceleb1_o_clean.tar.gz # 69MB - 400 files
├── voxceleb1_o_codec.tar.gz # 464MB - 2,800 files
├── voxceleb1_o_mic.tar.gz # 467MB - 2,800 files
├── voxceleb1_o_noise.tar.gz # 346MB - 2,000 files
├── voxceleb1_o_reverb.tar.gz # 71MB - 400 files
├── voxceleb1_o_playback.tar.gz # 205MB - 1,200 files
├── librispeech_clean.tar.gz # 63MB - 392 files
├── librispeech_codec.tar.gz # 426MB - 2,744 files
├── librispeech_mic.tar.gz # 430MB - 2,744 files
├── librispeech_noise.tar.gz # 331MB - 1,960 files
├── librispeech_reverb.tar.gz # 67MB - 392 files
└── librispeech_playback.tar.gz # 196MB - 1,176 files
trials/
├── clean_clean.txt
├── clean_codec_*.txt # 7 codec protocols
├── clean_mic_*.txt # 7 microphone protocols
├── clean_noise_*.txt # 5 noise protocols
├── clean_reverb.txt
└── clean_playback_*.txt # 3 playback chain protocols
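The archives are standard gzip tarballs, so any tar tool works. Below is a minimal extraction sketch, assuming the archives were downloaded to `./benchmark/audio/` (the `audio/` prefix is taken from the listing above; adjust to your actual download path) and that the archive members already carry the dataset/condition prefixes shown in the next listing:

```python
import tarfile
from pathlib import Path

archive_dir = Path("./benchmark/audio")  # download location assumed from the listing above
output_dir = Path("./benchmark")         # extraction target is an assumption

# Unpack every per-dataset/per-condition archive (e.g. voxceleb1_o_codec.tar.gz).
for archive in sorted(archive_dir.glob("*.tar.gz")):
    print(f"Extracting {archive.name} ...")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(output_dir)
```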
After extraction, audio files are organized as:
voxceleb1_o/
├── clean/{speaker_id}/utt_*.wav
├── codec/{codec_type}/{speaker_id}/utt_*.wav
├── mic/{mic_type}/{speaker_id}/utt_*.wav
├── noise/{snr_level}/{speaker_id}/utt_*.wav
├── reverb/{speaker_id}/utt_*.wav
└── playback/{chain_type}/{speaker_id}/utt_*.wav
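Following that layout, a small sketch for indexing the extracted files by condition and speaker. The dataset root and the `utt_*.wav` naming are taken from the listings above; treat the exact paths as assumptions:

```python
from collections import defaultdict
from pathlib import Path

dataset_root = Path("./benchmark/voxceleb1_o")  # assumed extraction root

# Map (condition, speaker_id) -> list of wav paths. Conditions with variants
# (codec, mic, noise, playback) add one directory level, but the speaker
# directory is always the immediate parent of the utterance file.
index = defaultdict(list)
for wav in dataset_root.rglob("utt_*.wav"):
    parts = wav.relative_to(dataset_root).parts
    condition, speaker_id = parts[0], parts[-2]
    index[(condition, speaker_id)].append(wav)

print(len({c for c, _ in index}), "conditions;", sum(map(len, index.values())), "files indexed")
```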
Each trial file contains lines in the format:
<label> <enrollment_path> <test_path>
Where:
- label: 1 for same speaker, 0 for different speaker
- enrollment_path: path to the enrollment audio (always clean)
- test_path: path to the test audio (condition-dependent)

A parsing and scoring sketch follows the dataset table below.

| Dataset | Speakers | Utterances | Source |
|---|---|---|---|
| VoxCeleb1-O | 40 | 400 clean | VoxCeleb1 test set |
| LibriSpeech | 40 | 392 clean | LibriSpeech test-clean |
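Given the trial format above, evaluation reduces to scoring each enrollment/test pair and computing an EER per protocol. The sketch below is not the official evaluation code: `embed()` is a placeholder for your model (it should return an L2-normalised embedding for a wav path), the trials directory is assumed from the listing above, and the EER uses a simple threshold-sweep approximation.

```python
import numpy as np
from pathlib import Path

def load_trials(path):
    """Parse '<label> <enrollment_path> <test_path>' lines."""
    labels, pairs = [], []
    for line in Path(path).read_text().splitlines():
        label, enroll, test = line.split()
        labels.append(int(label))
        pairs.append((enroll, test))
    return np.array(labels), pairs

def compute_eer(labels, scores):
    """Equal error rate: where false-reject and false-accept rates cross."""
    order = np.argsort(scores)
    labels = np.asarray(labels)[order]
    n_pos, n_neg = labels.sum(), len(labels) - labels.sum()
    fnr = np.cumsum(labels) / n_pos                 # targets rejected below threshold
    fpr = (n_neg - np.cumsum(1 - labels)) / n_neg   # non-targets accepted above it
    i = int(np.argmin(np.abs(fnr - fpr)))
    return float((fnr[i] + fpr[i]) / 2)

def embed(wav_path):
    """Placeholder embedding: swap in your model; return an L2-normalised vector."""
    rng = np.random.default_rng(abs(hash(wav_path)) % 2**32)
    v = rng.standard_normal(192)
    return v / np.linalg.norm(v)

trials_dir = Path("./benchmark/trials")  # location assumed from the listing above
for trial_file in sorted(trials_dir.glob("clean_*.txt")):
    labels, pairs = load_trials(trial_file)
    scores = [float(np.dot(embed(e), embed(t))) for e, t in pairs]  # cosine similarity
    print(f"{trial_file.stem}: EER = {compute_eer(labels, scores) * 100:.2f}%")
```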
Current leaderboard, ranked by Absolute EER:

| Rank | Model | Absolute EER | Degradation (vs. clean) | Clean EER |
|---|---|---|---|---|
| 1 | WeSpeaker ResNet34 | 3.01% | +2.43% | 0.58% |
| 2 | SpeechBrain ECAPA-TDNN | 3.05% | +2.49% | 0.56% |
| 3 | CASE HF v2-512 | 3.53% | +2.31% | 1.22% |
| 4 | NeMo TitaNet-L | 4.05% | +3.39% | 0.66% |
| 5 | pyannote Embedding | 4.47% | +2.79% | 1.68% |
| 6 | Resemblyzer | 10.49% | +5.65% | 4.84% |
See full results for detailed per-protocol breakdowns.
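For reading the table: the Degradation column is consistent with Absolute EER minus Clean EER in every row (e.g. 3.01% - 0.58% = 2.43% for WeSpeaker ResNet34); see the Metrics Guide below for the formal definition. A minimal check:

```python
# Degradation as reported in the leaderboard appears to be the gap between a
# model's Absolute EER and its Clean EER (all values in percent).
def degradation(absolute_eer: float, clean_eer: float) -> float:
    return absolute_eer - clean_eer

# Spot-checks against the rows above:
assert round(degradation(3.01, 0.58), 2) == 2.43   # WeSpeaker ResNet34
assert round(degradation(4.05, 0.66), 2) == 3.39   # NeMo TitaNet-L
assert round(degradation(10.49, 4.84), 2) == 5.65  # Resemblyzer
```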
| Resource | Description | Link |
|---|---|---|
| CASE HF v2-512 | Carrier-agnostic speaker embedding model | HuggingFace Model |
| Benchmark Code | Evaluation scripts and tools | GitHub |
| Metrics Guide | How to interpret Clean EER and the Degradation Factor | Metrics Documentation |
| Submission Guide | How to submit your model to the leaderboard | Submission Guide |
If you use CASE Benchmark, please cite:

@misc{case-benchmark-2026,
title={CASE Benchmark: Carrier-Agnostic Speaker Embedding Evaluation},
author={Gitter, Ben},
year={2026},
url={https://github.com/gittb/case-benchmark}
}