Abstract
Analysis of agentic system evaluation reveals significant variance in single-run performance estimates, necessitating multiple runs and advanced metrics for reliable assessment.
Agentic systems are evaluated on benchmarks where agents interact with environments to solve tasks. Most papers report a pass@1 score computed from a single run per task, assuming this gives a reliable performance estimate. We test this assumption by collecting 60,000 agentic trajectories on SWE-Bench-Verified, spanning three models and two scaffolds. We find substantial variance: single-run pass@1 estimates vary by 2.2 to 6.0 percentage points depending on which run is selected, with standard deviations exceeding 1.5 percentage points even at temperature 0. This variance has critical implications: reported improvements of 2–3 percentage points may reflect evaluation noise rather than genuine algorithmic progress. Through token-level analysis, we show that trajectories diverge early, often within the first few percent of tokens, and that these small differences cascade into different solution strategies. To enable reliable evaluation of agentic systems, we recommend three concrete practices: (1) estimate pass@1 from multiple independent runs per task, especially when measuring small improvements, (2) use statistical power analysis to determine the number of runs needed to detect expected effect sizes, and (3) consider metrics like pass@k (optimistic bound) and pass^k (pessimistic bound) with k > 1 to better characterize the full performance envelope. While these practices increase evaluation cost, they are essential for distinguishing genuine scientific progress from statistical noise.
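To make recommendations (1) and (3) concrete, here is a minimal Python sketch (not the authors' released code) of how the three metrics can be estimated once each task has been run several times. It assumes a hypothetical `results` mapping from task ID to a list of per-run booleans with k no larger than the number of runs, and uses the standard unbiased combinatorial estimators for pass@k and pass^k.

```python
# Minimal sketch (not the paper's released code): estimating pass@1, pass@k,
# and pass^k from multiple independent runs per task. Assumes `results` maps
# each task ID to a list of booleans (one entry per run) and that k <= n runs.
from math import comb
from statistics import mean


def pass_at_1(results: dict[str, list[bool]]) -> float:
    """pass@1: per-task success rate over all runs, averaged across tasks."""
    return mean(sum(runs) / len(runs) for runs in results.values())


def pass_at_k(results: dict[str, list[bool]], k: int) -> float:
    """pass@k (optimistic bound): chance that at least one of k runs succeeds,
    using the unbiased estimator 1 - C(n-c, k) / C(n, k) per task."""
    def per_task(runs: list[bool]) -> float:
        n, c = len(runs), sum(runs)
        return 1.0 - comb(n - c, k) / comb(n, k)
    return mean(per_task(runs) for runs in results.values())


def pass_hat_k(results: dict[str, list[bool]], k: int) -> float:
    """pass^k (pessimistic bound): chance that all k runs succeed,
    using the unbiased estimator C(c, k) / C(n, k) per task."""
    def per_task(runs: list[bool]) -> float:
        n, c = len(runs), sum(runs)
        return comb(c, k) / comb(n, k)
    return mean(per_task(runs) for runs in results.values())
```

With k = 1, both estimators reduce to the multi-run pass@1 average, which is a useful sanity check on an implementation.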
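For recommendation (2), a rough power-analysis sketch under assumed conditions: if run-level pass@1 estimates are roughly normal with a standard deviation around the 1.5 percentage points reported above, a two-sample z-test approximation gives the number of runs needed to detect a target improvement. The formula and the example numbers are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of recommendation (2): a rough power calculation for how many
# runs per system are needed to separate two agents by a target margin.
# Assumes run-level pass@1 estimates are approximately normal with standard
# deviation `sigma_pp` (in percentage points) and uses a two-sample z-test
# approximation; the values below are illustrative, not the paper's procedure.
from math import ceil
from statistics import NormalDist


def runs_needed(sigma_pp: float, delta_pp: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    """Runs per system to detect a true pass@1 gap of `delta_pp` points."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    return ceil(2 * ((z_alpha + z_power) * sigma_pp / delta_pp) ** 2)


# Example: with ~1.5 pp run-to-run standard deviation, detecting a 2 pp
# improvement at alpha = 0.05 with 80% power takes roughly 9 runs per system.
print(runs_needed(sigma_pp=1.5, delta_pp=2.0))
```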
Community
We just published a paper quantifying a problem the AI community has been quietly ignoring: single-run benchmark evaluations are far noisier than most people realize. And the decisions they inform — which model to deploy, which research direction to fund, which tool to ship — may not be supported by the evidence.
We found that SWE-Bench-Verified scores can vary by 2.2 to 6.0 percentage points, making small improvements hard to distinguish from noise.
Read more at: https://arxiv.org/abs/2602.07150
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SWE-RM: Execution-free Feedback For Software Engineering Agents (2025)
- AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts (2026)
- The Necessity of a Unified Framework for LLM-Based Agent Evaluation (2026)
- On the Uncertainty of Large Language Model-Based Multi-Agent Systems (2026)
- Training Versatile Coding Agents in Synthetic Environments (2025)
- ARTIS: Agentic Risk-Aware Test-Time Scaling via Iterative Simulation (2026)
- MAS-ProVe: Understanding the Process Verification of Multi-Agent Systems (2026)
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend