QuanBench+: A Unified Multi-Framework Benchmark for LLM-Based Quantum Code Generation
Abstract
QuanBench+ evaluates large language models on quantum code generation across multiple frameworks using functional testing and repair-based feedback, revealing significant progress but persistent dependence on framework-specific knowledge.
Large Language Models (LLMs) are increasingly used for code generation, yet quantum code generation is still evaluated mostly within single frameworks, making it difficult to separate quantum reasoning from framework familiarity. We introduce QuanBench+, a unified benchmark spanning Qiskit, PennyLane, and Cirq, with 42 aligned tasks covering quantum algorithms, gate decomposition, and state preparation. We evaluate models with executable functional tests, report Pass@1 and Pass@5, and use KL-divergence-based acceptance for probabilistic outputs. We additionally study Pass@1 after feedback-based repair, where a model may revise code after a runtime error or wrong answer. Across frameworks, the strongest one-shot scores reach 59.5% in Qiskit, 54.8% in Cirq, and 42.9% in PennyLane; with feedback-based repair, the best scores rise to 83.3%, 76.2%, and 66.7%, respectively. These results show clear progress, but also that reliable multi-framework quantum code generation remains unsolved and still depends strongly on framework-specific knowledge.
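To make the KL-divergence-based acceptance concrete, here is a minimal sketch in Python. The function names, the smoothing constant `eps`, and the `threshold=0.05` cutoff are illustrative assumptions, not the benchmark's actual criterion; the idea is simply to compare the measurement distribution of a generated circuit against the canonical solution's distribution.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same outcome set."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    # Clip to avoid log(0); outcomes absent from q dominate the divergence.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def accept_probabilistic_output(counts, canonical, threshold=0.05):
    """Accept a sampled measurement distribution if it is close to the
    canonical distribution under KL divergence (threshold is illustrative)."""
    outcomes = sorted(set(counts) | set(canonical))
    p = [counts.get(o, 0) for o in outcomes]      # model-generated circuit
    q = [canonical.get(o, 0) for o in outcomes]   # reference solution
    return kl_divergence(p, q) < threshold

# Bell-state example: a correct circuit yields roughly 50/50 on "00"/"11".
measured = {"00": 492, "11": 508}
reference = {"00": 0.5, "11": 0.5}
assert accept_probabilistic_output(measured, reference)
```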
Community
QuanBench+ is a new benchmark for LLM-based quantum code generation across multiple frameworks.
Our goal is to make evaluation more unified, reproducible, and meaningful with canonical solutions, pass@k analysis, and KL-based output checking.
Would love feedback from people working on code generation, evaluation, and quantum ML.
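On the pass@k analysis mentioned above: the common way to compute Pass@1 and Pass@5 is the unbiased estimator from the HumanEval paper (Chen et al., 2021), 1 - C(n-c, k)/C(n, k) for n samples of which c pass. Whether QuanBench+ uses exactly this estimator is an assumption; the sketch below shows the standard form.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    draws (without replacement) from n samples with c correct ones passes."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per task, 3 correct -> Pass@5 ~= 0.917
print(pass_at_k(10, 3, 5))
```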
Interesting breakdown of this paper on arXivLens: https://arxivlens.com/PaperView/Details/quanbench-a-unified-multi-framework-benchmark-for-llm-based-quantum-code-generation-2511-50857487
Covers the executive summary, detailed methodology, and practical applications.
Would be interesting to see how well the newer Anthropic, OpenAI, and Qwen models would fare on this benchmark!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Revisiting Quantum Code Generation: Where Should Domain Knowledge Live? (2026)
- How Many Tries Does It Take? Iterative Self-Repair in LLM Code Generation Across Model Scales and Benchmarks (2026)
- Algorithm-Based Pipeline for Reliable and Intent-Preserving Code Translation with LLMs (2026)
- Does Teaming-Up LLMs Improve Secure Code Generation? A Comprehensive Evaluation with Multi-LLMSecCodeEval (2026)
- Generative AI for Quantum Circuits and Quantum Code: A Technical Review and Taxonomy (2026)
- VIBEPASS: Can Vibe Coders Really Pass the Vibe Check? (2026)
- Synthesis-in-the-Loop Evaluation of LLMs for RTL Generation: Quality, Reliability, and Failure Modes (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend