---
task_categories:
  - automatic-speech-recognition
  - audio-classification
language:
  - hi
  - en
size_categories:
  - 1K<n<10K
license: cc-by-nc-4.0
---

# 🎙️ CoSHE-Eval: A Code-Switching ASR Benchmark for Hindi–English Speech

## 🧠 Overview

CoSHE-Eval is an evaluation dataset curated for testing Automatic Speech Recognition (ASR) systems on Hindi–English code-mixed speech. It focuses on bilingual conversational contexts commonly found in India, where Hindi (in Devanagari) and English (in Latin script) co-occur naturally within the same utterance.

Detailed Blog: CoSHE-Eval Blog


## Technical Specifications

| Attribute | Value |
| --- | --- |
| Total Samples | 1,985 |
| Total Duration | ~30 hours |
| Minimum Segment Length | 0.60 seconds |
| Maximum Segment Length | 59.8 seconds |
| Mean Segment Length | 53.3 seconds |
| Median Segment Length | 56.9 seconds |
| Timestamp Validation | Incremental and aligned with audio duration |
| Speaker Segmentation | Maintains full utterances; no mid-sentence cuts |

## 📂 Dataset Structure

| Column | Description |
| --- | --- |
| `audio_file_name` | Unique name or ID of the audio sample |
| `transcription` | Verified ground-truth transcription (Hindi–English code-mixed) |
| `audio` | The corresponding audio waveform |

All audio files are provided in `.wav` format and aligned with their corresponding transcriptions.


## ⚙️ Example: Computing Word Error Rate (WER)

Below is an example comparing a ground truth transcript with a test model transcript to compute the Word Error Rate (WER):

```python
import evaluate

# Ground truth vs. test model sentences
reference_text = "आज मैंने new laptop खरीदा और performance बहुत अच्छी है"
predicted_text = "आज मैंने new laptop लिया और performance अच्छी है"

print("Ground Truth Transcript:\n", reference_text)
print("\nTest Model Transcript:\n", predicted_text)

# Compute WER
wer_metric = evaluate.load("wer")
wer_score = wer_metric.compute(predictions=[predicted_text], references=[reference_text])
print(f"\nWord Error Rate (WER): {wer_score:.3f}")
```

Output:

```
Ground Truth Transcript:
 आज मैंने new laptop खरीदा और performance बहुत अच्छी है

Test Model Transcript:
 आज मैंने new laptop लिया और performance अच्छी है

Word Error Rate (WER): 0.200
```

This demonstrates how the CoSHE-Eval dataset can be used to quantitatively assess ASR model accuracy using standard evaluation metrics such as WER.
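For intuition about what the metric reports, WER is the word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words. The following self-contained sketch reproduces that computation without the `evaluate` dependency; the `wer` function here is our own illustration, not part of any library:

```python
def wer(reference: str, prediction: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), prediction.split()
    # Dynamic-programming table for Levenshtein distance over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("आज मैंने new laptop खरीदा और performance बहुत अच्छी है",
          "आज मैंने new laptop लिया और performance अच्छी है"))  # 0.2
```

In the example above the reference has 10 words and the hypothesis differs by one substitution (खरीदा → लिया) and one deletion (बहुत), giving 2/10 = 0.2, in agreement with the `evaluate` output.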

## 🚀 Usage Example

You can load the dataset directly using the Hugging Face `datasets` library:


```python
from datasets import load_dataset

dataset = load_dataset("soketlabs/CoSHE-Eval", split="eval")
print(dataset[0])
```
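Once loaded, the decoded `audio` column follows the standard `datasets` audio format: a dict holding a sample `array` and a `sampling_rate`. Assuming that layout, a minimal sketch for deriving per-segment duration (the helper name and the dummy example are ours, for illustration only):

```python
def segment_duration(example: dict) -> float:
    """Duration in seconds = number of audio samples / sampling rate."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Hypothetical decoded example: 1 second of silence at 16 kHz
example = {
    "audio": {"array": [0.0] * 16000, "sampling_rate": 16000},
    "transcription": "...",
}
print(segment_duration(example))  # 1.0
```

Durations computed this way should fall within the 0.60–59.8 second range stated in the technical specifications above.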

## 🧠 About Soket AI

Soket AI is a deep-tech AI research and innovation company committed to advancing sovereign, ethical, and inclusive artificial intelligence. Our mission is to build cutting-edge AI systems that empower industries, researchers, and citizens alike, spanning domains such as speech recognition, defense, healthcare, education, and Indic language intelligence.

At Soket AI, we believe in AI made for people, by people, fostering trust, transparency, and accessibility at every layer.

Learn more: https://soket.ai/

## 🏛️ About Project EKΛ

Project EKΛ (pronounced Eka, meaning “One” in Sanskrit) is India’s bold leap toward sovereign, inclusive intelligence: crafting foundational AI that speaks every language, reflects every culture, and empowers every citizen. Rooted in our diversity and driven by innovation, EKΛ is building the world’s most humane and multilingual AI, made in India for a wiser world. At its heart lies a 120-billion-parameter multilingual foundation model, a state-of-the-art large language model (LLM) engineered to understand and generate content across all major Indic languages, English, and their code-mixed variants.

Join the initiative: https://eka.soket.ai/

## 💬 Contact

For any queries, collaborations, or feedback related to this dataset, please reach out via:

📧 Email: [email protected]