arxiv:2605.01018

WildTableBench: Benchmarking Multimodal Foundation Models on Table Understanding In the Wild

Published on May 1
· Submitted by HJZ on May 15
Abstract

WildTableBench is introduced as the first question-answering benchmark for real-world table images, revealing significant challenges in structural perception and numerical reasoning for existing multimodal models.

AI-generated summary

Using multimodal foundation models to analyze table images is a high-value yet challenging application in consumer and enterprise scenarios. Despite its importance, current evaluations rely largely on structured-text tables or clean rendered images, leaving the visual complexity of in-the-wild table images underexplored. Such images feature varied layouts and diverse domains that demand sophisticated structural perception and numerical reasoning. To bridge this gap, we introduce WildTableBench, the first question-answering benchmark for naturally occurring table images from real-world settings. WildTableBench comprises 402 high-information-density table images collected from online forums and websites across diverse domains, together with 928 manually annotated and verified questions spanning 17 subtypes across five categories. We evaluate 21 frontier proprietary and open-source multimodal foundation models on this benchmark. Only one model exceeds 50% accuracy, while all remaining models range from 4.1% to 49.9%. We further conduct diagnostic analyses to characterize model failures and reveal persistent weaknesses in structural perception and reasoning. These results and analyses provide useful insights into current model capabilities and establish WildTableBench as a valuable diagnostic benchmark for table image understanding.

Community

Paper author Paper submitter

We introduce WildTableBench, the first QA benchmark for evaluating multimodal foundation models on naturally occurring table images collected from real-world web sources (Reddit, Pinterest, etc.). Unlike prior benchmarks built on structured text or clean rendered images, WildTableBench features 402 real-world table images (screenshots, scans, and photos) with 928 manually annotated questions across 17 subtypes in five categories: numerical reasoning, fact verification, cell locating, hypothetical reasoning, and color-based reasoning.
We evaluate 21 frontier models (GPT-5.2, Gemini-3-Pro, Claude Sonnet 4.6, Qwen3-VL, Kimi K2.5, GLM-4.6V, etc.). The best model (Gemini-3-Pro) achieves only 67.9% accuracy; all others score below 50%. WildTableBench reveals persistent gaps in structural perception and reasoning that existing benchmarks miss, establishing it as a rigorous diagnostic tool for real-world table understanding.
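Accuracy numbers like the ones above are typically computed per question category as well as overall. A minimal sketch of that aggregation is below; the record schema (`category`, `prediction`, `answer` keys) and the exact-match scoring rule are assumptions for illustration, not the benchmark's actual evaluation protocol.

```python
from collections import defaultdict

def accuracy_by_category(records):
    """Aggregate exact-match accuracy per question category.

    `records` is a hypothetical list of dicts with keys
    'category', 'prediction', and 'answer'; the real WildTableBench
    schema and scoring rule may differ.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        # Case- and whitespace-insensitive exact match (an assumption).
        if r["prediction"].strip().lower() == r["answer"].strip().lower():
            correct[r["category"]] += 1
    return {c: correct[c] / total[c] for c in total}

# Toy records, not real benchmark data:
records = [
    {"category": "numerical reasoning", "prediction": "42", "answer": "42"},
    {"category": "numerical reasoning", "prediction": "7", "answer": "8"},
    {"category": "fact verification", "prediction": "True", "answer": "true"},
]
print(accuracy_by_category(records))
# {'numerical reasoning': 0.5, 'fact verification': 1.0}
```

Real evaluations of numerical-reasoning questions often need tolerance-based matching rather than string equality, so this exact-match rule is only the simplest baseline.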


