
QuadVoxBench: A Large-Scale Fine-Grained Benchmark for Robust Audio Deepfake Detection

Dataset Description

QuadVoxBench is a large-scale (392+ hours) audio benchmark dataset designed for the robust evaluation of deepfake detection systems. It is structured around four key aspects of audio variation: Speech Style, Emotional Prosody, Acoustic Environment, and Manipulation Type. The dataset features a diverse collection of real and synthetically generated audio in English and Chinese, created using a comprehensive toolkit of modern text-to-speech (TTS) and voice conversion (VC) models.

For more details, please refer to our main repository:

Dataset Structure

The dataset is organized into 11 subsets, each corresponding to a specific domain or speech characteristic. Each subset contains real and fake audio samples, along with metadata.

├── Audiobook/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Emotional/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Interview/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── Movie/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── News/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── NoisySpeech/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── PartialFake/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta.json
├── PhoneCall/
│   ├── en/
│   │   ├── audio/
│   │   │   ├── real/
│   │   │   └── fake/
│   │   └── meta_test.json
│   └── zh-cn/
│       ├── audio/
│       │   ├── real/
│       │   └── fake/
│       └── meta_test.json
├── Podcast/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
├── PublicFigure/
│   ├── audio/
│   │   ├── real/
│   │   └── fake/
│   └── meta_test.json
└── PublicSpeech/
    ├── audio/
    │   ├── real/
    │   └── fake/
    └── meta_test.json
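Given this layout, loading a subset amounts to reading its metadata file and resolving audio paths relative to the subset directory. The sketch below is a minimal example, assuming the metadata is a JSON list of entries; the field names (`file`, `label`) are illustrative assumptions, not the documented schema:

```python
import json
from pathlib import Path


def load_subset_metadata(root, subset, meta_name="meta_test.json"):
    """Load the metadata file for one QuadVoxBench subset.

    Assumes the directory layout shown above: <root>/<subset>/<meta_name>.
    The PartialFake subset uses meta.json instead of meta_test.json, so
    pass meta_name="meta.json" for it.
    """
    meta_path = Path(root) / subset / meta_name
    with open(meta_path, encoding="utf-8") as f:
        return json.load(f)


def resolve_audio_paths(root, subset, entries):
    """Map each metadata entry's relative audio path (hypothetical
    'file' field) to an absolute path under the subset directory."""
    base = Path(root) / subset
    return [base / entry["file"] for entry in entries]
```

Note that `PhoneCall` adds one extra level (`en/` and `zh-cn/`), so its subset name would be passed as e.g. `"PhoneCall/en"`.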

Citation

If you use QuadVoxBench in your research, please cite:

@inproceedings{quadvox2026,
  title={QuadVox: A Large-Scale Fine-Grained Benchmark with Relative Audio Proximity Test for Robust Audio Deepfake Detection},
  author={Wang, Ruiming and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}