Files changed (6) hide show
  1. CLAUDE.md +79 -421
  2. README.md +3 -92
  3. dots-mocr.py → dots-ocr-1.5.py +60 -95
  4. firered-ocr.py +7 -10
  5. glm-ocr-bucket.py +0 -364
  6. qianfan-ocr.py +0 -628
CLAUDE.md CHANGED
@@ -3,17 +3,10 @@
3
  ## Active Scripts
4
 
5
  ### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
6
- ✅ **Production Ready** (Fixed 2026-02-12)
7
- - Uses official vLLM offline pattern: `llm.generate()` with PIL images
8
- - `NGramPerReqLogitsProcessor` prevents repetition on complex documents
9
- - Resolution modes removed (handled by vLLM's multimodal processor)
10
- - See: https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-OCR.html
11
-
12
- **Known issue (vLLM nightly, 2026-02-12):** Some images trigger a crop dimension validation error:
13
- ```
14
- ValueError: images_crop dim[2] expected 1024, got 640. Expected shape: ('bnp', 3, 1024, 1024), but got torch.Size([0, 3, 640, 640])
15
- ```
16
- This is a vLLM bug: the preprocessor defaults to gundam mode (image_size=640), but the tensor validator expects 1024x1024 even when the crop batch is empty (dim 0). Hit on 2/10 samples from `davanstrien/ufo-ColPali`, 0/10 on NLS Medical History — likely dependent on image aspect ratio. No upstream issue filed yet. Related feature request: [vllm#28160](https://github.com/vllm-project/vllm/issues/28160) (no way to control resolution mode via mm-processor-kwargs).
17
 
18
  ### LightOnOCR-2-1B (`lighton-ocr2.py`)
19
  ✅ **Production Ready** (Fixed 2026-01-29)
@@ -82,117 +75,90 @@ hf jobs uv run --flavor l4x1 \
82
  - Backend: Transformers (single image processing)
83
  - Requires: `transformers>=5.0.0`
84
 
85
- ### DoTS.ocr-1.5 (`dots-ocr-1.5.py`)
86
- ✅ **Production Ready** (Fixed 2026-03-14)
87
-
88
- **Status:** Working with vLLM 0.17.1 stable
89
-
90
- **Model availability:** The v1.5 model is NOT on HF from the original authors. We mirrored it from ModelScope to `davanstrien/dots.ocr-1.5`. Original: https://modelscope.cn/models/rednote-hilab/dots.ocr-1.5. License: MIT-based (with supplementary terms for responsible use).
91
-
92
- **Key fix (2026-03-14):** Must pass `chat_template_content_format="string"` to `llm.chat()`. The model's `tokenizer_config.json` chat template expects string content (not openai-format lists). Without this, the model generates empty output (~1 token then EOS). The separate `chat_template.json` file handles multimodal content but vLLM uses the tokenizer_config template by default.
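The content-format difference, sketched (the prompt text below is illustrative, not the model's actual template):

```python
# OpenAI-format content: a list of typed parts. This is what vLLM's chat
# parser targets by default, and what the tokenizer_config template
# mishandles (producing ~1 token then EOS).
openai_style = {
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Extract the text from this image."},
    ],
}

# String content: what the tokenizer_config.json template expects.
# Passing chat_template_content_format="string" to llm.chat() makes
# vLLM render message content in this shape.
string_style = {
    "role": "user",
    "content": "Extract the text from this image.",
}
```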
93
 
94
- **Bbox coordinate system (layout modes):**
95
- Bounding boxes from `layout-all` and `layout-only` modes are in the **resized image coordinate space**, not original image coordinates. The model uses `Qwen2VLImageProcessor` which resizes images via `smart_resize()`:
96
- - `max_pixels=11,289,600`, `factor=28` (patch_size=14 × merge_size=2)
97
- - Images are scaled down so `w×h ≤ max_pixels`, dims rounded to multiples of 28
98
- - To map bboxes back to original image coordinates:
99
- ```python
100
- import math
101
-
102
- def smart_resize(height, width, factor=28, min_pixels=3136, max_pixels=11289600):
103
-     h_bar = max(factor, round(height / factor) * factor)
104
-     w_bar = max(factor, round(width / factor) * factor)
105
-     if h_bar * w_bar > max_pixels:
106
-         beta = math.sqrt((height * width) / max_pixels)
107
-         h_bar = math.floor(height / beta / factor) * factor
108
-         w_bar = math.floor(width / beta / factor) * factor
109
-     elif h_bar * w_bar < min_pixels:
110
-         beta = math.sqrt(min_pixels / (height * width))
111
-         h_bar = math.ceil(height * beta / factor) * factor
112
-         w_bar = math.ceil(width * beta / factor) * factor
113
-     return h_bar, w_bar
114
-
115
- resized_h, resized_w = smart_resize(orig_h, orig_w)
116
- scale_x = orig_w / resized_w
117
- scale_y = orig_h / resized_h
118
- # Then: orig_x = bbox_x * scale_x, orig_y = bbox_y * scale_y
119
  ```
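Applying those scale factors, a minimal helper (hypothetical name, not part of the script):

```python
def bbox_to_original(bbox, orig_size, resized_size):
    """Map [x1, y1, x2, y2] from resized-image coords to original coords.

    orig_size / resized_size are (width, height) pairs; resized_size is
    what smart_resize() returns for the original dimensions.
    """
    scale_x = orig_size[0] / resized_size[0]
    scale_y = orig_size[1] / resized_size[1]
    x1, y1, x2, y2 = bbox
    return [x1 * scale_x, y1 * scale_y, x2 * scale_x, y2 * scale_y]

# A 2x downscale maps every coordinate back up by 2:
# bbox_to_original([100, 100, 200, 200], (2000, 1000), (1000, 500))
# -> [200.0, 200.0, 400.0, 400.0]
```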
120
-
121
- **Test results (2026-03-14):**
122
- - 3/3 samples on L4: OCR mode working, ~147 toks/s output
123
- - 3/3 samples on L4: layout-all mode working, structured JSON with bboxes
124
- - 10/10 samples on A100: layout-only mode on NLS Highland News, ~670 toks/s output
125
- - Output datasets: `davanstrien/dots-ocr-1.5-smoke-test-v3`, `davanstrien/dots-ocr-1.5-layout-test`, `davanstrien/dots-ocr-1.5-nls-layout-test`
126
-
127
- **Prompt modes:**
128
- - `ocr` — text extraction (default)
129
- - `layout-all` — layout + bboxes + categories + text (JSON)
130
- - `layout-only` — layout + bboxes + categories only (JSON)
131
- - `web-parsing` — webpage layout analysis (JSON) [new in v1.5]
132
- - `scene-spotting` — scene text detection [new in v1.5]
133
- - `grounding-ocr` — text from bounding box region [new in v1.5]
134
- - `general` — free-form (use with `--custom-prompt`) [new in v1.5]
135
-
136
- **Example usage:**
137
- ```bash
138
- hf jobs uv run --flavor l4x1 \
139
- -s HF_TOKEN \
140
- /path/to/dots-ocr-1.5.py \
141
- davanstrien/ufo-ColPali output-dataset \
142
- --model davanstrien/dots.ocr-1.5 \
143
- --max-samples 10 --shuffle --seed 42
144
  ```
145
 
146
- **Model Info:**
147
- - Original: `rednote-hilab/dots.ocr-1.5` (ModelScope only)
148
- - Mirror: `davanstrien/dots.ocr-1.5` (HF)
149
- - Parameters: 3B (1.2B vision encoder + 1.7B language model)
150
- - Architecture: DotsOCRForCausalLM (custom code, trust_remote_code required)
151
- - Precision: BF16
152
- - GitHub: https://github.com/rednote-hilab/dots.ocr
153
-
154
- ---
155
-
156
- ## Pending Development
157
-
158
- ### DeepSeek-OCR-2 (`deepseek-ocr2-vllm.py`)
159
- ✅ **Production Ready** (2026-02-12)
160
-
161
- **Status:** Working with vLLM nightly (requires nightly for `DeepseekOCR2ForCausalLM` support, not yet in stable 0.15.1)
162
-
163
- **What was done:**
164
- - Rewrote the broken draft script (which used base64/llm.chat/resolution modes)
165
- - Uses the same proven pattern as v1: `llm.generate()` with PIL images + `NGramPerReqLogitsProcessor`
166
- - Key v2 addition: `limit_mm_per_prompt={"image": 1}` in LLM init
167
- - Added `addict` and `matplotlib` as dependencies (required by model's HF custom code)
168
-
169
- **Test results (2026-02-12):**
170
- - 10/10 samples processed successfully on L4 GPU
171
- - Processing time: 6.4 min (includes model download + warmup)
172
- - Model: 6.33 GiB, ~475 toks/s input, ~246 toks/s output
173
- - Output dataset: `davanstrien/deepseek-ocr2-nls-test`
174
-
175
- **Example usage:**
176
- ```bash
177
- hf jobs uv run --flavor l4x1 \
178
- -s HF_TOKEN \
179
- https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr2-vllm.py \
180
- NationalLibraryOfScotland/medical-history-of-british-india output-dataset \
181
- --max-samples 10 --shuffle --seed 42
182
- ```
183
 
184
- **Important notes:**
185
- - Requires vLLM **nightly** (stable 0.15.1 does NOT include DeepSeek-OCR-2 support)
186
- - The nightly index (`https://wheels.vllm.ai/nightly`) occasionally has transient build issues (e.g., only ARM wheels). If this happens, wait and retry.
187
- - Uses same API pattern as v1: `NGramPerReqLogitsProcessor`, `SamplingParams(temperature=0, skip_special_tokens=False)`, `extra_args` for ngram settings
188
 
189
  **Model Information:**
190
  - Model ID: `deepseek-ai/DeepSeek-OCR-2`
191
  - Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
192
  - GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
193
  - Parameters: 3B
194
- - Architecture: Visual Causal Flow
195
- - Resolution: (0-6)x768x768 + 1x1024x1024 patches
196
 
197
  ## Other OCR Scripts
198
 
@@ -242,314 +208,6 @@ uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
242
 
243
  ---
244
 
245
- ## OCR Benchmark Coordinator (`ocr-bench-run.py`)
246
-
247
- **Status:** Working end-to-end (2026-02-14)
248
-
249
- Launches N OCR models on the same dataset via `run_uv_job()`, each pushing to a shared repo as a separate config via `--config/--create-pr`. Eval done separately with `ocr-elo-bench.py`.
250
-
251
- ### Model Registry (4 models)
252
-
253
- | Slug | Model ID | Size | Default GPU | Notes |
254
- |------|----------|------|-------------|-------|
255
- | `glm-ocr` | `zai-org/GLM-OCR` | 0.9B | l4x1 | |
256
- | `deepseek-ocr` | `deepseek-ai/DeepSeek-OCR` | 4B | l4x1 | Auto-passes `--prompt-mode free` (no grounding tags) |
257
- | `lighton-ocr-2` | `lightonai/LightOnOCR-2-1B` | 1B | a100-large | |
258
- | `dots-ocr` | `rednote-hilab/dots.ocr` | 1.7B | l4x1 | Stable vLLM (>=0.9.1) |
259
-
260
- Each model entry has a `default_args` list for model-specific flags (e.g., DeepSeek uses `["--prompt-mode", "free"]`).
261
-
262
- ### Workflow
263
- ```bash
264
- # Launch all 4 models on same data
265
- uv run ocr-bench-run.py source-dataset --output my-bench --max-samples 50
266
-
267
- # Evaluate directly from PRs (no merge needed)
268
- uv run ocr-elo-bench.py my-bench --from-prs --mode both
269
-
270
- # Or merge + evaluate
271
- uv run ocr-elo-bench.py my-bench --from-prs --merge-prs --mode both
272
-
273
- # Other useful flags
274
- uv run ocr-bench-run.py --list-models # Show registry table
275
- uv run ocr-bench-run.py ... --dry-run # Preview without launching
276
- uv run ocr-bench-run.py ... --wait # Poll until complete
277
- uv run ocr-bench-run.py ... --models glm-ocr dots-ocr # Subset of models
278
- ```
279
-
280
- ### Eval script features (`ocr-elo-bench.py`)
281
- - `--from-prs`: Auto-discovers open PRs on the dataset repo, extracts config names from PR title `[config-name]` suffix, loads data from `refs/pr/N` without merging
282
- - `--merge-prs`: Auto-merges discovered PRs via `api.merge_pull_request()` before loading
283
- - `--configs`: Manually specify which configs to load (for merged repos)
284
- - `--mode both`: Runs pairwise ELO + pointwise scoring
285
- - Flat mode (original behavior) still works when `--configs`/`--from-prs` not used
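The `[config-name]` extraction from PR titles amounts to something like this (the regex is an assumption about the title format, not the script's exact code):

```python
import re

def config_from_pr_title(title):
    """Extract a config name from a trailing "[config-name]" in a PR title."""
    m = re.search(r"\[([^\[\]]+)\]\s*$", title)
    return m.group(1) if m else None

# config_from_pr_title("Add OCR results [glm-ocr]") -> "glm-ocr"
```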
286
-
287
- ### Scripts pushed to Hub
288
- All 4 scripts have been pushed to `uv-scripts/ocr` on the Hub with `--config`/`--create-pr` support:
289
- - `glm-ocr.py` ✅
290
- - `deepseek-ocr-vllm.py` ✅
291
- - `lighton-ocr2.py` ✅
292
- - `dots-ocr.py` ✅
293
-
294
- ### Benchmark Results
295
-
296
- #### Run 1: NLS Medical History (2026-02-14) — Pilot
297
-
298
- **Dataset:** `NationalLibraryOfScotland/medical-history-of-british-india` (10 samples, shuffled, seed 42)
299
- **Output repo:** `davanstrien/ocr-bench-test` (4 open PRs)
300
- **Judge:** `Qwen/Qwen2.5-VL-72B-Instruct` via HF Inference Providers
301
- **Content:** Historical English, degraded scans of medical texts
302
-
303
- **ELO (pairwise, 5 samples evaluated):**
304
- 1. DoTS.ocr — 1540 (67% win rate)
305
- 2. DeepSeek-OCR — 1539 (57%)
306
- 3. LightOnOCR-2 — 1486 (50%)
307
- 4. GLM-OCR — 1436 (29%)
308
-
309
- **Pointwise (5 samples):**
310
- 1. DeepSeek-OCR — 5.0/5.0
311
- 2. GLM-OCR — 4.6
312
- 3. LightOnOCR-2 — 4.4
313
- 4. DoTS.ocr — 4.2
314
-
315
- **Key finding:** DeepSeek-OCR's `--prompt-mode document` produces grounding tags (`<|ref|>`, `<|det|>`) that the judge penalizes heavily. Switching to `--prompt-mode free` (now the default in the registry) made it jump from last place to top 2.
316
-
317
- **Caveat:** 5 samples is far too few for stable rankings. The judge VLM is called once per comparison (pairwise) or once per model-sample (pointwise) via HF Inference Providers API.
318
-
319
- #### Run 2: Rubenstein Manuscript Catalog (2026-02-15) — First Full Benchmark
320
-
321
- **Dataset:** `biglam/rubenstein-manuscript-catalog` (50 samples, shuffled, seed 42)
322
- **Output repo:** `davanstrien/ocr-bench-rubenstein` (4 PRs)
323
- **Judge:** Jury of 2 via `ocr-vllm-judge.py` — `Qwen/Qwen2.5-VL-7B-Instruct` + `Qwen/Qwen3-VL-8B-Instruct` on A100
324
- **Content:** ~48K typewritten + handwritten manuscript catalog cards from Duke University (CC0)
325
-
326
- **ELO (pairwise, 50 samples, 300 comparisons, 0 parse failures):**
327
-
328
- | Rank | Model | ELO | W | L | T | Win% |
329
- |------|-------|-----|---|---|---|------|
330
- | 1 | LightOnOCR-2-1B | 1595 | 100 | 50 | 0 | 67% |
331
- | 2 | DeepSeek-OCR | 1497 | 73 | 77 | 0 | 49% |
332
- | 3 | GLM-OCR | 1471 | 57 | 93 | 0 | 38% |
333
- | 4 | dots.ocr | 1437 | 70 | 80 | 0 | 47% |
334
-
335
- **OCR job times** (all 50 samples each):
336
- - dots-ocr: 5.3 min (L4)
337
- - deepseek-ocr: 5.6 min (L4)
338
- - glm-ocr: 5.7 min (L4)
339
- - lighton-ocr-2: 6.4 min (A100)
340
-
341
- **Key findings:**
342
- - **LightOnOCR-2-1B dominates** on manuscript catalog cards (67% win rate, 100-point ELO gap over 2nd place) — a very different result from the NLS pilot where it placed 3rd
343
- - **Rankings are dataset-dependent**: NLS historical medical texts favored DoTS.ocr and DeepSeek-OCR; Rubenstein typewritten/handwritten cards favor LightOnOCR-2
344
- - **Jury of small models works well**: 0 parse failures on 300 comparisons thanks to vLLM structured output (xgrammar). Majority voting between 2 judges provides robustness
345
- - **50 samples gives meaningful separation**: Clear ELO gaps (1595 → 1497 → 1471 → 1437) unlike the noisy 5-sample pilot
346
- - This validates the multi-dataset benchmark approach — no single dataset tells the whole story
347
-
348
- #### Run 3: UFO-ColPali (2026-02-15) — Cross-Dataset Validation
349
-
350
- **Dataset:** `davanstrien/ufo-ColPali` (50 samples, shuffled, seed 42)
351
- **Output repo:** `davanstrien/ocr-bench-ufo` (4 PRs)
352
- **Judge:** `Qwen/Qwen3-VL-30B-A3B-Instruct` via `ocr-vllm-judge.py` on A100 (updated prompt)
353
- **Content:** Mixed modern documents (invoices, reports, forms, etc.)
354
-
355
- **ELO (pairwise, 50 samples, 294 comparisons):**
356
-
357
- | Rank | Model | ELO | W | L | T | Win% |
358
- |------|-------|-----|---|---|---|------|
359
- | 1 | DeepSeek-OCR | 1827 | 130 | 17 | 0 | 88% |
360
- | 2 | dots.ocr | 1510 | 64 | 83 | 0 | 44% |
361
- | 3 | LightOnOCR-2-1B | 1368 | 77 | 70 | 0 | 52% |
362
- | 4 | GLM-OCR | 1294 | 23 | 124 | 0 | 16% |
363
-
364
- **Human validation (30 comparisons):** DeepSeek-OCR #1 (same as judge), LightOnOCR-2 #3 (same). Middle pack (GLM-OCR #2 human / #4 judge, dots.ocr #4 human / #2 judge) shuffled.
365
-
366
- #### Cross-Dataset Comparison (Human-Validated)
367
-
368
- | Model | Rubenstein Human | Rubenstein Kimi | UFO Human | UFO 30B |
369
- |-------|:---------------:|:---------------:|:---------:|:-------:|
370
- | DeepSeek-OCR | **#1** | **#1** | **#1** | **#1** |
371
- | GLM-OCR | #2 | #3 | #2 | #4 |
372
- | LightOnOCR-2 | #4 | #2 | #3 | #3 |
373
- | dots.ocr | #3 | #4 | #4 | #2 |
374
-
375
- **Conclusion:** DeepSeek-OCR is consistently #1 across datasets and evaluation methods. Middle-pack rankings are dataset-dependent. Updated prompt fixed the LightOnOCR-2 overrating seen with old prompt/small judges.
376
-
377
- *Note: NLS pilot results (5 samples, 72B API judge) omitted — not comparable with newer methodology.*
378
-
379
- ### Known Issues / Next Steps
380
-
381
- 1. ✅ **More samples needed** — Done. Rubenstein run (2026-02-15) used 50 samples and produced clear ELO separation across all 4 models.
382
- 2. ✅ **Smaller judge model** — Tested with Qwen VL 7B + Qwen3 VL 8B via `ocr-vllm-judge.py`. Works well with structured output (0 parse failures). Jury of small models compensates for individual model weakness. See "Offline vLLM Judge" section below.
383
- 3. **Auto-merge in coordinator** — `--wait` could auto-merge PRs after successful jobs. Not yet implemented.
384
- 4. **Adding more models** — `rolm-ocr.py` exists but needs `--config`/`--create-pr` added. `deepseek-ocr2-vllm.py`, `paddleocr-vl-1.5.py`, etc. could also be added to the registry.
385
- 5. **Leaderboard Space** — See future section below.
386
- 6. ✅ **Result persistence** — `ocr-vllm-judge.py` now has `--save-results REPO_ID` flag. First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
387
- 7. **More diverse datasets** — Rankings are dataset-dependent (LightOnOCR-2 wins on Rubenstein, DoTS.ocr won pilot on NLS). Need benchmarks on tables, formulas, multilingual, and modern documents for a complete picture.
388
- 8. ✅ **Human validation** — `ocr-human-eval.py` completed on Rubenstein (30/30). Tested 3 judge configs. **Kimi K2.5 (170B) via Novita + updated prompt = best human agreement** (only judge to match human's #1). Now default in `ocr-jury-bench.py`. See `OCR-BENCHMARK.md` for full comparison.
389
-
390
- ---
391
-
392
- ## Offline vLLM Judge (`ocr-vllm-judge.py`)
393
-
394
- **Status:** Working end-to-end (2026-02-15)
395
-
396
- Runs pairwise OCR quality comparisons using a local VLM judge via vLLM's offline `LLM()` pattern. Supports jury mode (multiple models vote sequentially on the same GPU) with majority voting.
397
-
398
- ### Why use this over the API judge (`ocr-jury-bench.py`)?
399
-
400
- | | API judge (`ocr-jury-bench.py`) | Offline judge (`ocr-vllm-judge.py`) |
401
- |---|---|---|
402
- | Parse failures | Needs retries for malformed JSON | 0 failures — vLLM structured output guarantees valid JSON |
403
- | Network | Rate limits, timeouts, transient errors | Zero network calls |
404
- | Cost | Per-token API pricing | Just GPU time |
405
- | Judge models | Limited to Inference Providers catalog | Any vLLM-supported VLM |
406
- | Jury mode | Sequential API calls per judge | Sequential model loading, batch inference per judge |
407
- | Best for | Quick spot-checks, access to 72B models | Batch evaluation (50+ samples), reproducibility |
408
-
409
- **Pushed to Hub:** `uv-scripts/ocr` as `ocr-vllm-judge.py` (2026-02-15)
410
-
411
- ### Test Results (2026-02-15)
412
-
413
- **Test 1 — Single judge, 1 sample, L4:**
414
- - Qwen2.5-VL-7B-Instruct, 6/6 comparisons, 0 parse failures
415
- - Total time: ~3 min (including model download + warmup)
416
-
417
- **Test 2 — Jury of 2, 3 samples, A100:**
418
- - Qwen2.5-VL-7B + Qwen3-VL-8B, 15/15 comparisons, 0 parse failures
419
- - GPU cleanup between models: successful (nanobind warnings are cosmetic)
420
- - Majority vote aggregation working (`[2/2]` unanimous, `[1/2]` split)
421
- - Total time: ~4 min (including both model downloads)
422
-
423
- **Test 3 — Full benchmark, 50 samples, A100 (Rubenstein Manuscript Catalog):**
424
- - Qwen2.5-VL-7B + Qwen3-VL-8B jury, 300/300 comparisons, 0 parse failures
425
- - Input: `davanstrien/ocr-bench-rubenstein` (4 PRs from `ocr-bench-run.py`)
426
- - Produced clear ELO rankings with meaningful separation
427
- - See "Benchmark Results → Run 2" in the OCR Benchmark Coordinator section above
428
-
429
- ### Usage
430
-
431
- ```bash
432
- # Single judge on L4
433
- hf jobs uv run --flavor l4x1 -s HF_TOKEN \
434
- ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
435
- --judge-model Qwen/Qwen2.5-VL-7B-Instruct --max-samples 10
436
-
437
- # Jury of 2 on A100 (recommended for jury mode)
438
- hf jobs uv run --flavor a100-large -s HF_TOKEN \
439
- ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
440
- --judge-model Qwen/Qwen2.5-VL-7B-Instruct \
441
- --judge-model Qwen/Qwen3-VL-8B-Instruct \
442
- --max-samples 50
443
- ```
444
-
445
- ### Implementation Notes
446
- - Comparisons built upfront on CPU as `NamedTuple`s, then batched to vLLM in a single `llm.chat()` call
447
- - Structured output via compatibility shim: `StructuredOutputsParams` (vLLM >= 0.12) → `GuidedDecodingParams` (older) → prompt-based fallback
448
- - GPU cleanup between jury models: `destroy_model_parallel()` + `gc.collect()` + `torch.cuda.empty_cache()`
449
- - Position bias mitigation: A/B order randomized per comparison
450
- - A100 recommended for jury mode; L4 works for single 7B judge
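The A/B-randomization and majority-vote logic sketched in plain Python (a sketch of the idea, not the script's code):

```python
import random
from collections import Counter

def randomize_order(output_a, output_b, rng):
    """Randomly swap presentation order; returns (first, second, swapped)."""
    if rng.random() < 0.5:
        return output_b, output_a, True
    return output_a, output_b, False

def majority_vote(votes):
    """Aggregate per-judge verdicts ("A", "B", or "tie") by majority.

    Returns (winner, "n/total"), e.g. ("A", "2/2") for a unanimous pair.
    """
    counts = Counter(votes)
    winner, n = counts.most_common(1)[0]
    return winner, f"{n}/{len(votes)}"
```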
451
-
452
- ### Next Steps
453
- 1. ✅ **Scale test** — Completed on Rubenstein Manuscript Catalog (50 samples, 300 comparisons, 0 parse failures). Rankings differ from API-based pilot (different dataset + judge), validating multi-dataset approach.
454
- 2. ✅ **Result persistence** — Added `--save-results REPO_ID` flag. Pushes 3 configs to HF Hub: `comparisons` (one row per pairwise comparison), `leaderboard` (ELO + win/loss/tie per model), `metadata` (source dataset, judge models, seed, timestamp). First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
455
- 3. **Integrate into `ocr-bench-run.py`** — Add `--eval` flag that auto-runs vLLM judge after OCR jobs complete
456
-
457
- ---
458
-
459
- ## Blind Human Eval (`ocr-human-eval.py`)
460
-
461
- **Status:** Working (2026-02-15)
462
-
463
- Gradio app for blind A/B comparison of OCR outputs. Shows document image + two anonymized OCR outputs, human picks winner or tie. Computes ELO rankings from human annotations and optionally compares against automated judge results.
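The ELO computation is the standard update rule (K=32 and a 1500 start are assumptions here; the scripts' actual constants may differ):

```python
def elo_update(r_winner, r_loser, k=32, tie=False):
    """One ELO update: score is 1 for a win, 0.5 each for a tie."""
    expected_w = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    score_w = 0.5 if tie else 1.0
    delta = k * (score_w - expected_w)
    return r_winner + delta, r_loser - delta

def elo_rankings(comparisons, start=1500):
    """comparisons: iterable of (winner, loser, tie); ties in either order."""
    ratings = {}
    for winner, loser, tie in comparisons:
        rw = ratings.setdefault(winner, start)
        rl = ratings.setdefault(loser, start)
        ratings[winner], ratings[loser] = elo_update(rw, rl, tie=tie)
    return dict(sorted(ratings.items(), key=lambda kv: -kv[1]))
```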
464
-
465
- ### Usage
466
-
467
- ```bash
468
- # Basic — blind human eval only
469
- uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs --max-samples 5
470
-
471
- # With judge comparison — loads automated judge results for agreement analysis
472
- uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs \
473
- --judge-results davanstrien/ocr-bench-rubenstein-judge --max-samples 5
474
- ```
475
-
476
- ### Features
477
- - **Blind evaluation**: Two-tab design — Evaluate tab never shows model names, Results tab reveals rankings
478
- - **Position bias mitigation**: A/B order randomly swapped per comparison
479
- - **Resume support**: JSON annotations saved atomically after each vote; restart app to resume where you left off
480
- - **Live agreement tracking**: Per-vote feedback shows running agreement with automated judge (when `--judge-results` provided)
481
- - **Split-jury prioritization**: Comparisons where automated judges disagreed ("1/2" agreement) shown first — highest annotation value per vote
482
- - **Image variety**: Round-robin interleaving by sample so you don't see the same document image repeatedly
483
- - **Soft/hard disagreement analysis**: Distinguishes between harmless ties-vs-winner disagreements and genuine opposite-winner errors
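The atomic-save behavior behind resume support is the classic write-temp-then-rename pattern (a sketch, not the app's exact code):

```python
import json
import os
import tempfile

def save_annotations(path, annotations):
    """Write annotations as JSON atomically: readers never see a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".json.tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(annotations, f, indent=2)
        os.replace(tmp_path, path)  # atomic on POSIX (same filesystem)
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Because `os.replace` is atomic, a crash mid-vote leaves either the old file or the new one, never a truncated JSON — which is what makes restart-to-resume safe.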
484
-
485
- ### First Validation Results (Rubenstein, 30 annotations)
486
-
487
- Tested 3 judge configs against 30 human annotations. **Kimi K2.5 (170B) via Novita** is the only judge to match human's #1 pick (DeepSeek-OCR). Small models (7B/8B/30B) all overrate LightOnOCR-2 due to bias toward its commentary style. Updated prompt (prioritized faithfulness > completeness > accuracy) helps but model size is the bigger factor.
488
-
489
- Full results and analysis in `OCR-BENCHMARK.md` → "Human Validation" section.
490
-
491
- ### Next Steps
492
- 1. **Second dataset** — Run on NLS Medical History for cross-dataset human validation
493
- 2. **Multiple annotators** — Currently single-user; could support annotator ID for inter-annotator agreement
494
- 3. **Remaining LightOnOCR-2 gap** — Still #2 (Kimi) vs #4 (human). May need to investigate on more samples or strip commentary in preprocessing
495
-
496
- ---
497
-
498
- ## Future: Leaderboard HF Space
499
-
500
- **Status:** Idea (noted 2026-02-14)
501
-
502
- Build a Hugging Face Space with a persistent leaderboard that gets updated after each benchmark run. This would give a public-facing view of OCR model quality.
503
-
504
- **Design ideas:**
505
- - Gradio or static Space displaying ELO ratings + pointwise scores
506
- - `ocr-elo-bench.py` could push results to a dataset that the Space reads
507
- - Or the Space itself could run evaluation on demand
508
- - Show per-document comparisons (image + side-by-side OCR outputs)
509
- - Historical tracking — how scores change across model versions
510
- - Filter by document type (historical, modern, tables, formulas, multilingual)
511
-
512
- **Open questions:**
513
- - Should the eval script push structured results to a dataset (e.g., `uv-scripts/ocr-leaderboard-data`)?
514
- - Static leaderboard (updated by CI/scheduled job) vs interactive (evaluate on demand)?
515
- - Include sample outputs for qualitative comparison?
516
- - How to handle different eval datasets (NLS medical history vs UFO vs others)?
517
-
518
- ---
519
-
520
- ## Incremental Uploads / Checkpoint Strategy — ON HOLD
521
-
522
- **Status:** Waiting on HF Hub Buckets (noted 2026-02-20)
523
-
524
- **Current state:**
525
- - `glm-ocr.py` (v1): Simple batch-then-push. Works fine for most jobs.
526
- - `glm-ocr-v2.py`: Adds CommitScheduler-based incremental uploads + checkpoint/resume. ~400 extra lines. Works but has tradeoffs (commit noise, `--create-pr` incompatible, complex resume metadata).
527
-
528
- **Decision: Do NOT port v2 pattern to other scripts.** Wait for HF Hub Buckets instead.
529
-
530
- **Why:** Two open PRs will likely make the v2 CommitScheduler approach obsolete:
531
- - [huggingface_hub#3673](https://github.com/huggingface/huggingface_hub/pull/3673) — Buckets API: S3-like mutable object storage on HF, no git versioning overhead
532
- - [huggingface_hub#3807](https://github.com/huggingface/huggingface_hub/pull/3807) — HfFileSystem support for buckets: fsspec-compatible, so pyarrow/pandas/datasets can read/write `hf://buckets/` paths directly
533
-
534
- **What Buckets would replace:** Once landed, incremental saves become one line per batch:
535
- ```python
536
- batch_ds.to_parquet(f"hf://buckets/{user}/ocr-scratch/shard-{batch_num:05d}.parquet")
537
- ```
538
- No CommitScheduler, no CleanupScheduler, no resume metadata, no completed batch scanning. Just write to the bucket path via fsspec. Final step: read back from bucket, `push_to_hub` to a clean dataset repo (compatible with `--create-pr`).
539
-
540
- **Action items when Buckets ships:**
541
- 1. Test `hf://buckets/` fsspec writes on one script (glm-ocr is the guinea pig)
542
- 2. Verify: write performance, atomicity (partial writes visible?), auth propagation in HF Jobs
543
- 3. If it works, adopt as the standard pattern for all scripts — simple enough to inline (~20 lines)
544
- 4. Retire `glm-ocr-v2.py` CommitScheduler approach
545
-
546
- **Until then:** v1 scripts stay as-is. `glm-ocr-v2.py` exists if someone needs resume on a very large job today.
547
-
548
- ---
549
-
550
- **Last Updated:** 2026-02-20
551
  **Watch PRs:**
552
- - **HF Hub Buckets API** ([#3673](https://github.com/huggingface/huggingface_hub/pull/3673)): Core buckets support. Will enable simpler incremental upload pattern for all scripts.
553
- - **HfFileSystem Buckets** ([#3807](https://github.com/huggingface/huggingface_hub/pull/3807)): fsspec support for `hf://buckets/` paths. Key for zero-boilerplate writes from scripts.
554
- - DeepSeek-OCR-2 stable vLLM release: Currently only in nightly. Watch for vLLM 0.16.0 stable release on PyPI to remove nightly dependency.
555
- - nanobind leak warnings in vLLM structured output (xgrammar): Cosmetic only, does not affect results. May be fixed in future xgrammar release.
 
3
  ## Active Scripts
4
 
5
  ### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
6
+ ✅ **Production Ready**
7
+ - Fully supported by vLLM (official offline `llm.generate()` pattern)
8
+ - Fast batched inference
9
+ - Tested and working on HF Jobs
10
 
11
  ### LightOnOCR-2-1B (`lighton-ocr2.py`)
12
  ✅ **Production Ready** (Fixed 2026-01-29)
 
75
  - Backend: Transformers (single image processing)
76
  - Requires: `transformers>=5.0.0`
77
 
78
+ ## Pending Development
79
 
80
+ ### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
81
+
82
+ **Status:** Waiting for vLLM upstream support
83
+
84
+ **Context:**
85
+ DeepSeek-OCR-2 is the next-generation OCR model (3B parameters), with a Visual Causal Flow architecture offering improved quality. We attempted to create a UV script (`deepseek-ocr2-vllm.py`) but hit a blocker.
86
+
87
+ **Blocker:**
88
+ vLLM does not yet support `DeepseekOCR2ForCausalLM` architecture in the official release.
89
+
90
+ **PR to Watch:**
91
+ 🔗 https://github.com/vllm-project/vllm/pull/33165
92
+
93
+ This PR adds DeepSeek-OCR-2 support but is currently:
94
+ - ⚠️ **Open** (not merged)
95
+ - Carrying unresolved review comments
96
+ - Failing pre-commit checks
97
+ - Known issues: hardcoded parameters, device-mismatch bugs, missing error handling
98
+
99
+ **What's Needed:**
100
+ 1. PR #33165 needs to be reviewed, fixed, and merged
101
+ 2. vLLM needs to release a version including the merge
102
+ 3. Then we can add these dependencies to our script:
103
+ ```python
104
+ # dependencies = [
105
+ #     "datasets>=4.0.0",
106
+ #     "huggingface-hub",
107
+ #     "pillow",
108
+ #     "vllm",
109
+ #     "tqdm",
110
+ #     "toolz",
111
+ #     "torch",
112
+ #     "addict",
113
+ #     "matplotlib",
114
+ # ]
115
+ ```
116
+
117
+ **Implementation Progress:**
118
+ - ✅ Created `deepseek-ocr2-vllm.py` script
119
+ - ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
120
+ - ✅ Tested script structure on HF Jobs
121
+ - ❌ Blocked: vLLM doesn't recognize architecture
122
+
123
+ **Partial Implementation:**
124
+ The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.
125
+
126
+ **Testing Evidence:**
127
+ When we ran on HF Jobs, we got:
128
  ```
129
+ ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
130
+ Supported architectures: [...'DeepseekOCRForCausalLM'...]
131
  ```
132
 
133
+ **Next Steps (when PR merges):**
134
+ 1. Update `deepseek-ocr2-vllm.py` dependencies to include `addict` and `matplotlib`
135
+ 2. Test on HF Jobs with small dataset (10 samples)
136
+ 3. Verify output quality
137
+ 4. Update README.md with DeepSeek-OCR-2 section
138
+ 5. Document v1 vs v2 differences
139
 
140
+ **Alternative Approaches (if urgent):**
141
+ - Create transformers-based script (slower, no vLLM batching)
142
+ - Use DeepSeek's official repo setup (complex, not UV-script compatible)
 
143
 
144
  **Model Information:**
145
  - Model ID: `deepseek-ai/DeepSeek-OCR-2`
146
  - Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
147
  - GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
148
  - Parameters: 3B
149
+ - Resolution: (0-6)×768×768 + 1×1024×1024 patches
150
+ - Key improvement: Visual Causal Flow architecture
151
+
152
+ **Resolution Modes (for v2):**
153
+ ```python
154
+ RESOLUTION_MODES = {
155
+     "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
156
+     "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
157
+     "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
158
+     "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
159
+     "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
160
+ }
161
+ ```
162
 
163
  ## Other OCR Scripts
164
 
 
208
 
209
  ---
210
 
211
+ **Last Updated:** 2026-02-12
212
  **Watch PRs:**
213
+ - DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165
README.md CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
7
 
8
  > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
9
 
10
- 19 OCR scripts covering models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
11
 
12
  ## 🚀 Quick Start
13
 
@@ -43,12 +43,11 @@ That's it! The script will:
43
  | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/rednote-hilab/dots.ocr) | 1.7B | vLLM | 100+ languages |
44
  | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
45
  | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
46
- | `dots-mocr.py` | [dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) | 3B | vLLM | 8 prompt modes incl. SVG generation, layout + bbox, 100+ languages |
47
  | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
48
  | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
49
  | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
50
  | `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
51
- | `qianfan-ocr.py` | [Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) | 4.7B | vLLM | #1 OmniDocBench v1.5 (93.12), Layout-as-Thought, 192 languages |
52
  | `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
53
  | `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
54
  | `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |
@@ -392,46 +391,7 @@ Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggi
  - 🔍 **Multi-column Layouts** - Handles complex document structures
  - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`

- ### dots.mocr (`dots-mocr.py`) — SVG generation + SOTA OCR
-
- Advanced multilingual OCR and SVG generation using [rednote-hilab/dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) with 3B parameters:
-
- - 🌍 **100+ Languages** - Extensive multilingual support
- - 📝 **Document OCR** - Clean text extraction (default mode)
- - 📊 **Layout Analysis** - Structured output with bboxes and categories
- - 📐 **Formula recognition** - LaTeX format support
- - 🖼️ **SVG generation** - Convert charts, UI layouts, figures to editable SVG code
- - 🔀 **8 prompt modes** - OCR, layout-all, layout-only, web-parsing, scene-spotting, grounding-ocr, svg, general
- - 📄 **[Paper](https://arxiv.org/abs/2603.13032)** - 83.9% on olmOCR-Bench
-
- **SVG variant:** Use `--model rednote-hilab/dots.mocr-svg` with `--prompt-mode svg` for best SVG results.
-
- **Quick start:**
-
- ```bash
- # Basic OCR
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-   your-input-dataset your-output-dataset \
-   --max-samples 100
-
- # SVG generation from charts/figures
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-   your-charts svg-output \
-   --prompt-mode svg --model rednote-hilab/dots.mocr-svg
-
- # Layout analysis with bounding boxes
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-   your-documents layout-output \
-   --prompt-mode layout-all
- ```
-
- ### DoTS.ocr v1 (`dots-ocr.py`)

  Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:

@@ -462,55 +422,6 @@ hf jobs uv run --flavor l4x1 \
  --max-samples 100
  ```

- ### Qianfan-OCR (`qianfan-ocr.py`) — #1 on OmniDocBench v1.5
-
- End-to-end document intelligence using [baidu/Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) with 4.7B parameters:
-
- - **93.12 on OmniDocBench v1.5** — #1 end-to-end model
- - **79.8 on OlmOCR Bench** — #1 end-to-end model
- - 🧠 **Layout-as-Thought** — Optional reasoning phase for complex layouts (`--think`)
- - 🌍 **192 languages** — Latin, CJK, Arabic, Cyrillic, and more
- - 📝 **OCR mode** — Document parsing to markdown (default)
- - 📊 **Table mode** — HTML table extraction
- - 📐 **Formula mode** — LaTeX recognition
- - 📈 **Chart mode** — Chart understanding and analysis
- - 🔍 **Scene mode** — Scene text extraction
- - 🔑 **KIE mode** — Key information extraction with custom prompts
-
- **Prompt Modes:**
-
- - `ocr`: Document parsing to markdown (default)
- - `table`: Table extraction to HTML
- - `formula`: Formula recognition to LaTeX
- - `chart`: Chart understanding
- - `scene`: Scene text extraction
- - `kie`: Key information extraction (requires `--custom-prompt`)
-
- **Quick start:**
-
- ```bash
- # Basic OCR
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-   your-input-dataset your-output-dataset \
-   --max-samples 100
-
- # Layout-as-Thought for complex documents
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-   your-input-dataset your-output-dataset \
-   --think --max-samples 50
-
- # Key information extraction
- hf jobs uv run --flavor l4x1 \
-   -s HF_TOKEN \
-   https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-   invoices extracted-fields \
-   --prompt-mode kie --custom-prompt "Extract: name, date, total. Output as JSON."
- ```
-
  ### olmOCR2 (`olmocr2-vllm.py`)

  High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
 

  > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.

+ 14 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.

  ## 🚀 Quick Start

  | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/rednote-hilab/dots.ocr) | 1.7B | vLLM | 100+ languages |
  | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
  | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
+ | `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/davanstrien/dots.ocr-1.5) | 3B | vLLM | Updated multilingual model |
  | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
  | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
  | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
  | `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
  | `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
  | `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
  | `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |

  - 🔍 **Multi-column Layouts** - Handles complex document structures
  - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`

+ ### DoTS.ocr (`dots-ocr.py`)

  Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:

  --max-samples 100
  ```

  ### olmOCR2 (`olmocr2-vllm.py`)

  High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
dots-mocr.py → dots-ocr-1.5.py RENAMED
@@ -13,27 +13,23 @@
  # ///

  """
- Convert document images to markdown using dots.mocr with vLLM.

- dots.mocr is a 3B multilingual document parsing model with SOTA performance
- on 100+ languages. It excels at converting structured graphics (charts, UI
- layouts, scientific figures) directly into SVG code. Core capabilities include
- grounding, recognition, semantic understanding, and interactive dialogue.

  Features:
  - Multilingual support (100+ languages)
  - Table extraction and formatting
  - Formula recognition
  - Layout-aware text extraction
- - Web screen parsing
- - Scene text spotting
- - SVG code generation (use --prompt-mode svg, or --model rednote-hilab/dots.mocr-svg for best results)
-
- Model: rednote-hilab/dots.mocr
- SVG variant: rednote-hilab/dots.mocr-svg
- vLLM: Officially integrated since v0.11.0
- GitHub: https://github.com/rednote-hilab/dots.mocr
- Paper: https://arxiv.org/abs/2603.13032
  """

  import argparse
@@ -60,8 +56,8 @@ logger = logging.getLogger(__name__)


  # ────────────────────────────────────────────────────────────────
- # dots.mocr Prompt Templates
- # Source: https://github.com/rednote-hilab/dots.mocr/blob/master/dots_mocr/utils/prompts.py
  # ────────────────────────────────────────────────────────────────

  PROMPT_TEMPLATES = {
@@ -84,19 +80,11 @@ PROMPT_TEMPLATES = {

  5. Final Output: The entire output must be a single JSON object.
  """,
- # NOTE: Bboxes from layout-all/layout-only are in the resized image coordinate
- # space (Qwen2VLImageProcessor smart_resize: max_pixels=11289600, factor=28),
- # NOT original image coordinates. To map back, compute:
- #   resized_h, resized_w = smart_resize(orig_h, orig_w)
- #   scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
  "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
  "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
  "scene-spotting": """Detect and recognize the text in the image.""",
  "grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
- # SVG code generation — {width} and {height} are replaced with actual image dimensions.
- # For best results, use --model rednote-hilab/dots.mocr-svg
- # Uses higher temperature (0.9) and top_p (1.0) per official recommendation.
- "svg": """Please generate the SVG code based on the image. viewBox="0 0 {width} {height}" """,
  "general": """ """,
  }

@@ -129,12 +117,6 @@ def make_ocr_message(
  # Convert to RGB
  pil_img = pil_img.convert("RGB")

- # For SVG mode, inject actual image dimensions into the prompt
- if "{width}" in prompt and "{height}" in prompt:
-     prompt = prompt.replace("{width}", str(pil_img.width)).replace(
-         "{height}", str(pil_img.height)
-     )
-
  # Convert to base64 data URI
  buf = io.BytesIO()
  pil_img.save(buf, format="PNG")
@@ -172,7 +154,7 @@ def create_dataset_card(
  tags:
  - ocr
  - document-processing
- - dots-mocr
  - multilingual
  - markdown
  - uv-script
@@ -181,7 +163,7 @@ tags:

  # Document OCR using {model_name}

- This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using dots.mocr, a 3B multilingual model with SOTA document parsing and SVG generation.

  ## Processing Details

@@ -204,14 +186,13 @@ This dataset contains OCR results from images in [{source_dataset}](https://hugg

  ## Model Information

- dots.mocr is a 3B multilingual document parsing model that excels at:
  - 100+ Languages — Multilingual document support
  - Table extraction — Structured data recognition
  - Formulas — Mathematical notation preservation
  - Layout-aware — Reading order and structure preservation
  - Web screen parsing — Webpage layout analysis
  - Scene text spotting — Text detection in natural scenes
- - SVG code generation — Charts, UI layouts, scientific figures to SVG

  ## Dataset Structure

@@ -241,10 +222,10 @@ for info in inference_info:

  ## Reproduction

- This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) dots.mocr script:

  ```bash
- uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \\
  {source_dataset} \\
  <output-dataset> \\
  --image-column {image_column} \\
@@ -264,7 +245,7 @@ def main(
  output_dataset: str,
  image_column: str = "image",
  batch_size: int = 16,
- model: str = "rednote-hilab/dots.mocr",
  max_model_len: int = 24000,
  max_tokens: int = 24000,
  gpu_memory_utilization: float = 0.9,
@@ -283,7 +264,7 @@ def main(
  top_p: float = 0.9,
  verbose: bool = False,
  ):
- """Process images from HF dataset through dots.mocr model."""

  # Check CUDA availability first
  check_cuda_availability()
@@ -334,12 +315,6 @@ def main(
  gpu_memory_utilization=gpu_memory_utilization,
  )

- # SVG mode uses higher temperature/top_p per official recommendation
- if prompt_mode == "svg" and temperature == 0.1 and top_p == 0.9:
-     logger.info("SVG mode: using recommended temperature=0.9, top_p=1.0")
-     temperature = 0.9
-     top_p = 1.0
-
  sampling_params = SamplingParams(
  temperature=temperature,
  top_p=top_p,
@@ -355,7 +330,7 @@ def main(
  for batch_indices in tqdm(
      partition_all(batch_size, range(len(dataset))),
      total=(len(dataset) + batch_size - 1) // batch_size,
-     desc="dots.mocr processing",
  ):
  batch_indices = list(batch_indices)
  batch_images = [dataset[i][image_column] for i in batch_indices]
@@ -364,12 +339,8 @@ def main(
  # Create messages for batch
  batch_messages = [make_ocr_message(img, prompt) for img in batch_images]

- # Process with vLLM (dots.mocr needs "string" content format)
- outputs = llm.chat(
-     batch_messages,
-     sampling_params,
-     chat_template_content_format="string",
- )

  # Extract outputs
  for output in outputs:
@@ -392,7 +363,7 @@ def main(
  # Handle inference_info tracking (for multi-model comparisons)
  inference_entry = {
  "model_id": model,
- "model_name": "dots.mocr",
  "column_name": output_column,
  "timestamp": datetime.now().isoformat(),
  "prompt_mode": prompt_mode if not custom_prompt else "custom",
@@ -473,7 +444,7 @@ def main(
  card = DatasetCard(card_content)
  card.push_to_hub(output_dataset, token=HF_TOKEN)

- logger.info("dots.mocr processing complete!")
  logger.info(
  f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
  )
@@ -495,83 +466,77 @@ if __name__ == "__main__":
  # Show example usage if no arguments
  if len(sys.argv) == 1:
  print("=" * 80)
- print("dots.mocr Document Processing")
  print("=" * 80)
- print("\n3B multilingual OCR model with SVG generation")
  print("\nFeatures:")
  print("- Multilingual support (100+ languages)")
  print("- Fast processing with vLLM")
  print("- Table extraction and formatting")
  print("- Formula recognition")
  print("- Layout-aware text extraction")
- print("- Web screen parsing")
- print("- Scene text spotting")
- print("- SVG code generation (charts, UI, figures)")
  print("\nPrompt modes:")
- print("  ocr            - Text extraction (default)")
- print("  layout-all     - Layout + bboxes + text (JSON)")
- print("  layout-only    - Layout + bboxes only (JSON)")
- print("  web-parsing    - Webpage layout analysis (JSON)")
  print("  scene-spotting - Scene text detection")
- print("  grounding-ocr  - Text from bounding box region")
- print("  svg            - SVG code generation")
- print("  general        - Free-form (use with --custom-prompt)")
  print("\nExample usage:")
  print("\n1. Basic OCR:")
- print("   uv run dots-mocr.py input-dataset output-dataset")
- print("\n2. SVG generation:")
- print(
-     "   uv run dots-mocr.py charts svg-output --prompt-mode svg --model rednote-hilab/dots.mocr-svg"
- )
- print("\n3. Web screen parsing:")
- print("   uv run dots-mocr.py screenshots parsed --prompt-mode web-parsing")
  print("\n4. Layout analysis with structure:")
- print("   uv run dots-mocr.py papers analyzed --prompt-mode layout-all")
  print("\n5. Running on HF Jobs:")
  print("   hf jobs uv run --flavor l4x1 \\")
  print("     -s HF_TOKEN \\")
  print(
-     "     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \\"
  )
  print("     input-dataset output-dataset")
  print("\n" + "=" * 80)
- print("\nFor full help, run: uv run dots-mocr.py --help")
  sys.exit(0)

  parser = argparse.ArgumentParser(
- description="Document OCR using dots.mocr (3B multilingual model with SVG generation)",
  formatter_class=argparse.RawDescriptionHelpFormatter,
  epilog="""
- Prompt Modes (official dots.mocr prompts):
  ocr            - Simple text extraction (default)
  layout-all     - Layout analysis with bboxes, categories, and text (JSON output)
  layout-only    - Layout detection with bboxes and categories only (JSON output)
- web-parsing    - Webpage layout analysis (JSON output)
- scene-spotting - Scene text detection and recognition
- grounding-ocr  - Extract text from bounding box region
- svg            - SVG code generation (auto-injects image dimensions into viewBox)
- general        - Free-form QA (use with --custom-prompt)

  SVG Code Generation:
- Use --prompt-mode svg for SVG output. For best results, combine with
- --model rednote-hilab/dots.mocr-svg (the SVG-optimized variant).
- SVG mode automatically uses temperature=0.9, top_p=1.0 unless overridden.

  Examples:
  # Basic text OCR (default)
- uv run dots-mocr.py my-docs analyzed-docs
-
- # SVG generation with optimized variant
- uv run dots-mocr.py charts svg-out --prompt-mode svg --model rednote-hilab/dots.mocr-svg

  # Web screen parsing
- uv run dots-mocr.py screenshots parsed --prompt-mode web-parsing

  # Full layout analysis with structure
- uv run dots-mocr.py papers structured --prompt-mode layout-all

  # Random sampling for testing
- uv run dots-mocr.py large-dataset test --max-samples 50 --shuffle
  """,
  )

@@ -590,8 +555,8 @@ Examples:
  )
  parser.add_argument(
  "--model",
- default="rednote-hilab/dots.mocr",
- help="Model to use (default: rednote-hilab/dots.mocr, or rednote-hilab/dots.mocr-svg for SVG)",
  )
  parser.add_argument(
  "--max-model-len",
 
  # ///

  """
+ Convert document images to markdown using DoTS.ocr-1.5 with vLLM.

+ DoTS.ocr-1.5 is a 3B multilingual document parsing model with SOTA performance
+ on 100+ languages. Compared to v1 (1.7B), it adds web screen parsing, scene text
+ spotting, SVG code generation, and stronger multilingual document parsing.

  Features:
  - Multilingual support (100+ languages)
  - Table extraction and formatting
  - Formula recognition
  - Layout-aware text extraction
+ - Web screen parsing (NEW in v1.5)
+ - Scene text spotting (NEW in v1.5)
+ - SVG code generation (requires dots.ocr-1.5-svg variant)
+
+ Model: rednote-hilab/dots.ocr-1.5
+ vLLM: Officially supported (same DotsOCRForCausalLM architecture as v1)
  """

  import argparse


  # ────────────────────────────────────────────────────────────────
+ # DoTS OCR 1.5 Prompt Templates (from official dots.ocr repo)
+ # Source: https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py
  # ────────────────────────────────────────────────────────────────

  PROMPT_TEMPLATES = {

  5. Final Output: The entire output must be a single JSON object.
  """,
  "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
+ # NEW in v1.5:
  "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
  "scene-spotting": """Detect and recognize the text in the image.""",
  "grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
  "general": """ """,
  }

  # Convert to RGB
  pil_img = pil_img.convert("RGB")

  # Convert to base64 data URI
  buf = io.BytesIO()
  pil_img.save(buf, format="PNG")

  tags:
  - ocr
  - document-processing
+ - dots-ocr-1.5
  - multilingual
  - markdown
  - uv-script

  # Document OCR using {model_name}

+ This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DoTS.ocr-1.5, a 3B multilingual model with SOTA document parsing.

  ## Processing Details

  ## Model Information

+ DoTS.ocr-1.5 is a 3B multilingual document parsing model that excels at:
  - 100+ Languages — Multilingual document support
  - Table extraction — Structured data recognition
  - Formulas — Mathematical notation preservation
  - Layout-aware — Reading order and structure preservation
  - Web screen parsing — Webpage layout analysis
  - Scene text spotting — Text detection in natural scenes

  ## Dataset Structure

  ## Reproduction

+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR 1.5 script:

  ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\
  {source_dataset} \\
  <output-dataset> \\
  --image-column {image_column} \\

  output_dataset: str,
  image_column: str = "image",
  batch_size: int = 16,
+ model: str = "rednote-hilab/dots.ocr-1.5",
  max_model_len: int = 24000,
  max_tokens: int = 24000,
  gpu_memory_utilization: float = 0.9,

  top_p: float = 0.9,
  verbose: bool = False,
  ):
+ """Process images from HF dataset through DoTS.ocr-1.5 model."""

  # Check CUDA availability first
  check_cuda_availability()

  gpu_memory_utilization=gpu_memory_utilization,
  )

  sampling_params = SamplingParams(
  temperature=temperature,
  top_p=top_p,

  for batch_indices in tqdm(
      partition_all(batch_size, range(len(dataset))),
      total=(len(dataset) + batch_size - 1) // batch_size,
+     desc="DoTS.ocr-1.5 processing",
  ):
  batch_indices = list(batch_indices)
  batch_images = [dataset[i][image_column] for i in batch_indices]

  # Create messages for batch
  batch_messages = [make_ocr_message(img, prompt) for img in batch_images]

+ # Process with vLLM
+ outputs = llm.chat(batch_messages, sampling_params)

  # Extract outputs
  for output in outputs:

  # Handle inference_info tracking (for multi-model comparisons)
  inference_entry = {
  "model_id": model,
+ "model_name": "DoTS.ocr-1.5",
  "column_name": output_column,
  "timestamp": datetime.now().isoformat(),
  "prompt_mode": prompt_mode if not custom_prompt else "custom",

  card = DatasetCard(card_content)
  card.push_to_hub(output_dataset, token=HF_TOKEN)

+ logger.info("DoTS.ocr-1.5 processing complete!")
  logger.info(
  f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
  )

  # Show example usage if no arguments
  if len(sys.argv) == 1:
  print("=" * 80)
+ print("DoTS.ocr-1.5 Document Processing")
  print("=" * 80)
+ print("\n3B multilingual OCR model supporting 100+ languages")
  print("\nFeatures:")
  print("- Multilingual support (100+ languages)")
  print("- Fast processing with vLLM")
  print("- Table extraction and formatting")
  print("- Formula recognition")
  print("- Layout-aware text extraction")
+ print("- Web screen parsing (NEW in v1.5)")
+ print("- Scene text spotting (NEW in v1.5)")
  print("\nPrompt modes:")
+ print("  ocr            - Text extraction (default)")
+ print("  layout-all     - Layout + bboxes + text (JSON)")
+ print("  layout-only    - Layout + bboxes only (JSON)")
+ print("  web-parsing    - Webpage layout analysis (JSON)")
  print("  scene-spotting - Scene text detection")
+ print("  grounding-ocr  - Text from bounding box region")
+ print("  general        - Free-form (use with --custom-prompt)")
  print("\nExample usage:")
  print("\n1. Basic OCR:")
+ print("   uv run dots-ocr-1.5.py input-dataset output-dataset")
+ print("\n2. Web screen parsing:")
+ print("   uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing")
+ print("\n3. Scene text spotting:")
+ print("   uv run dots-ocr-1.5.py photos detected --prompt-mode scene-spotting")
  print("\n4. Layout analysis with structure:")
+ print("   uv run dots-ocr-1.5.py papers analyzed --prompt-mode layout-all")
  print("\n5. Running on HF Jobs:")
  print("   hf jobs uv run --flavor l4x1 \\")
  print("     -s HF_TOKEN \\")
  print(
+     "     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\"
  )
  print("     input-dataset output-dataset")
  print("\n" + "=" * 80)
+ print("\nFor full help, run: uv run dots-ocr-1.5.py --help")
  sys.exit(0)

  parser = argparse.ArgumentParser(
+ description="Document OCR using DoTS.ocr-1.5 (3B multilingual model)",
  formatter_class=argparse.RawDescriptionHelpFormatter,
  epilog="""
+ Prompt Modes (official DoTS.ocr-1.5 prompts):
  ocr            - Simple text extraction (default)
  layout-all     - Layout analysis with bboxes, categories, and text (JSON output)
  layout-only    - Layout detection with bboxes and categories only (JSON output)
+ web-parsing    - Webpage layout analysis (JSON output) [NEW in v1.5]
+ scene-spotting - Scene text detection and recognition [NEW in v1.5]
+ grounding-ocr  - Extract text from bounding box region [NEW in v1.5]
+ general        - Free-form QA (use with --custom-prompt) [NEW in v1.5]

  SVG Code Generation:
+ For SVG output, use --model rednote-hilab/dots.ocr-1.5-svg with:
+ --custom-prompt 'Please generate the SVG code based on the image.'

  Examples:
  # Basic text OCR (default)
+ uv run dots-ocr-1.5.py my-docs analyzed-docs

  # Web screen parsing
+ uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing
+
+ # Scene text spotting
+ uv run dots-ocr-1.5.py photos spotted --prompt-mode scene-spotting

  # Full layout analysis with structure
+ uv run dots-ocr-1.5.py papers structured --prompt-mode layout-all

  # Random sampling for testing
+ uv run dots-ocr-1.5.py large-dataset test --max-samples 50 --shuffle
  """,
  )

  )
  parser.add_argument(
  "--model",
+ default="rednote-hilab/dots.ocr-1.5",
+ help="Model to use (default: rednote-hilab/dots.ocr-1.5)",
  )
  parser.add_argument(
  "--max-model-len",
firered-ocr.py CHANGED
@@ -104,10 +104,10 @@ def make_ocr_message(
  # Convert to RGB
  pil_img = pil_img.convert("RGB")

- # Convert to base64 data URI (JPEG is faster than PNG for encoding)
  buf = io.BytesIO()
- pil_img.save(buf, format="JPEG", quality=95)
- data_uri = f"data:image/jpeg;base64,{base64.b64encode(buf.getvalue()).decode()}"

  # Return message in vLLM format
  return [
@@ -228,7 +228,7 @@ def main(
  image_column: str = "image",
  batch_size: int = 16,
  model: str = "FireRedTeam/FireRed-OCR",
- max_model_len: int = 32768,
  max_tokens: int = 8192,
  gpu_memory_utilization: float = 0.8,
  hf_token: str = None,
@@ -335,10 +335,7 @@ def main(
  processing_duration = datetime.now() - start_time
  processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"

- # Add output column to dataset (remove existing column if present)
- if output_column in dataset.column_names:
-     logger.info(f"Removing existing '{output_column}' column before adding new results")
-     dataset = dataset.remove_columns([output_column])
  logger.info(f"Adding '{output_column}' column to dataset")
  dataset = dataset.add_column(output_column, all_outputs)

@@ -483,8 +480,8 @@ Examples:
  parser.add_argument(
  "--max-model-len",
  type=int,
- default=32768,
- help="Maximum model context length (default: 32768)",
  )
  parser.add_argument(
  "--max-tokens",
 
  # Convert to RGB
  pil_img = pil_img.convert("RGB")

+ # Convert to base64 data URI
  buf = io.BytesIO()
+ pil_img.save(buf, format="PNG")
+ data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"

  # Return message in vLLM format
  return [

  image_column: str = "image",
  batch_size: int = 16,
  model: str = "FireRedTeam/FireRed-OCR",
+ max_model_len: int = 8192,
  max_tokens: int = 8192,
  gpu_memory_utilization: float = 0.8,
  hf_token: str = None,

  processing_duration = datetime.now() - start_time
  processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"

+ # Add output column to dataset
  logger.info(f"Adding '{output_column}' column to dataset")
  dataset = dataset.add_column(output_column, all_outputs)

  parser.add_argument(
  "--max-model-len",
  type=int,
+ default=8192,
+ help="Maximum model context length (default: 8192)",
  )
  parser.add_argument(
  "--max-tokens",
glm-ocr-bucket.py DELETED
@@ -1,364 +0,0 @@
-# /// script
-# requires-python = ">=3.11"
-# dependencies = [
-#     "pillow",
-#     "pymupdf",
-#     "vllm",
-#     "torch",
-# ]
-#
-# [[tool.uv.index]]
-# url = "https://wheels.vllm.ai/nightly/cu129"
-#
-# [tool.uv]
-# prerelease = "allow"
-# override-dependencies = ["transformers>=5.1.0"]
-# ///
-
-"""
-OCR images and PDFs from a directory using GLM-OCR, writing markdown files.
-
-Designed to work with HF Buckets mounted as volumes via `hf jobs uv run -v ...`
-(requires huggingface_hub with PR #3936 volume mounting support).
-
-The script reads images/PDFs from INPUT_DIR, runs GLM-OCR via vLLM, and writes
-one .md file per image (or per PDF page) to OUTPUT_DIR, preserving directory structure.
-
-Input:                      Output:
-    /input/page1.png     →  /output/page1.md
-    /input/report.pdf    →  /output/report/page_001.md
-      (3 pages)              /output/report/page_002.md
-                             /output/report/page_003.md
-    /input/sub/photo.jpg →  /output/sub/photo.md
-
-Examples:
-
-    # Local test
-    uv run glm-ocr-bucket.py ./test-images ./test-output
-
-    # HF Jobs with bucket volumes (PR #3936)
-    hf jobs uv run --flavor l4x1 \\
-        -s HF_TOKEN \\
-        -v bucket/user/ocr-input:/input:ro \\
-        -v bucket/user/ocr-output:/output \\
-        glm-ocr-bucket.py /input /output
-
-Model: zai-org/GLM-OCR (0.9B, 94.62% OmniDocBench V1.5, MIT licensed)
-"""
-
-import argparse
-import base64
-import io
-import logging
-import sys
-import time
-from pathlib import Path
-
-import torch
-from PIL import Image
-from vllm import LLM, SamplingParams
-
-logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
-logger = logging.getLogger(__name__)
-
-MODEL = "zai-org/GLM-OCR"
-
-TASK_PROMPTS = {
-    "ocr": "Text Recognition:",
-    "formula": "Formula Recognition:",
-    "table": "Table Recognition:",
-}
-
-IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tiff", ".tif", ".bmp", ".webp"}
-
-
-def check_cuda_availability():
-    if not torch.cuda.is_available():
-        logger.error("CUDA is not available. This script requires a GPU.")
-        sys.exit(1)
-    logger.info(f"CUDA available. GPU: {torch.cuda.get_device_name(0)}")
-
-
-def make_ocr_message(image: Image.Image, task: str = "ocr") -> list[dict]:
-    """Create chat message for GLM-OCR from a PIL Image."""
-    image = image.convert("RGB")
-    buf = io.BytesIO()
-    image.save(buf, format="PNG")
-    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
-
-    return [
-        {
-            "role": "user",
-            "content": [
-                {"type": "image_url", "image_url": {"url": data_uri}},
-                {"type": "text", "text": TASK_PROMPTS.get(task, TASK_PROMPTS["ocr"])},
-            ],
-        }
-    ]
-
-
-def discover_files(input_dir: Path) -> list[Path]:
-    """Walk input_dir recursively, returning sorted list of image and PDF files."""
-    files = []
-    for path in sorted(input_dir.rglob("*")):
-        if not path.is_file():
-            continue
-        ext = path.suffix.lower()
-        if ext in IMAGE_EXTENSIONS or ext == ".pdf":
-            files.append(path)
-    return files
-
-
-def prepare_images(
-    files: list[Path], input_dir: Path, output_dir: Path, pdf_dpi: int
-) -> list[tuple[Image.Image, Path]]:
-    """
-    Convert discovered files into (PIL.Image, output_md_path) pairs.
-
-    Images map 1:1. PDFs expand to one image per page in a subdirectory.
-    """
-    import fitz  # pymupdf
-
-    items: list[tuple[Image.Image, Path]] = []
-
-    for file_path in files:
-        rel = file_path.relative_to(input_dir)
-        ext = file_path.suffix.lower()
-
-        if ext == ".pdf":
-            # PDF → one .md per page in a subdirectory named after the PDF
-            pdf_output_dir = output_dir / rel.with_suffix("")
-            try:
-                doc = fitz.open(file_path)
-                num_pages = len(doc)
-                logger.info(f"PDF: {rel} ({num_pages} pages)")
-                for page_num in range(num_pages):
-                    page = doc[page_num]
-                    # Render at specified DPI
-                    zoom = pdf_dpi / 72.0
-                    mat = fitz.Matrix(zoom, zoom)
-                    pix = page.get_pixmap(matrix=mat)
-                    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
-                    md_path = pdf_output_dir / f"page_{page_num + 1:03d}.md"
-                    items.append((img, md_path))
-                doc.close()
-            except Exception as e:
-                logger.error(f"Failed to open PDF {rel}: {e}")
-        else:
-            # Image → single .md
-            try:
-                img = Image.open(file_path).convert("RGB")
-                md_path = output_dir / rel.with_suffix(".md")
-                items.append((img, md_path))
-            except Exception as e:
-                logger.error(f"Failed to open image {rel}: {e}")
-
-    return items
-
-
-def main():
-    parser = argparse.ArgumentParser(
-        description="OCR images/PDFs from a directory using GLM-OCR, output markdown files.",
-        formatter_class=argparse.RawDescriptionHelpFormatter,
-        epilog="""
-Task modes:
-  ocr      Text recognition to markdown (default)
-  formula  LaTeX formula recognition
-  table    Table extraction (HTML)
-
-Examples:
-  uv run glm-ocr-bucket.py ./images ./output
-  uv run glm-ocr-bucket.py /input /output --task table --pdf-dpi 200
-
-HF Jobs with bucket volumes (requires huggingface_hub PR #3936):
-  hf jobs uv run --flavor l4x1 -s HF_TOKEN \\
-    -v bucket/user/input-bucket:/input:ro \\
-    -v bucket/user/output-bucket:/output \\
-    glm-ocr-bucket.py /input /output
-""",
-    )
-    parser.add_argument("input_dir", help="Directory containing images and/or PDFs")
-    parser.add_argument("output_dir", help="Directory to write markdown output files")
-    parser.add_argument(
-        "--task",
-        choices=["ocr", "formula", "table"],
-        default="ocr",
-        help="OCR task mode (default: ocr)",
-    )
-    parser.add_argument(
-        "--batch-size", type=int, default=16, help="Batch size for vLLM (default: 16)"
-    )
-    parser.add_argument(
-        "--max-model-len",
-        type=int,
-        default=8192,
-        help="Max model context length (default: 8192)",
-    )
-    parser.add_argument(
-        "--max-tokens",
-        type=int,
-        default=8192,
-        help="Max output tokens (default: 8192)",
-    )
-    parser.add_argument(
-        "--gpu-memory-utilization",
-        type=float,
-        default=0.8,
-        help="GPU memory utilization (default: 0.8)",
-    )
-    parser.add_argument(
-        "--pdf-dpi",
-        type=int,
-        default=300,
-        help="DPI for PDF page rendering (default: 300)",
-    )
-    parser.add_argument(
-        "--temperature",
-        type=float,
-        default=0.01,
-        help="Sampling temperature (default: 0.01)",
-    )
-    parser.add_argument(
-        "--top-p", type=float, default=0.00001, help="Top-p sampling (default: 0.00001)"
-    )
-    parser.add_argument(
-        "--repetition-penalty",
-        type=float,
-        default=1.1,
-        help="Repetition penalty (default: 1.1)",
-    )
-    parser.add_argument(
-        "--verbose",
-        action="store_true",
-        help="Print resolved package versions",
-    )
-
-    args = parser.parse_args()
-
-    check_cuda_availability()
-
-    input_dir = Path(args.input_dir)
-    output_dir = Path(args.output_dir)
-
-    if not input_dir.is_dir():
-        logger.error(f"Input directory does not exist: {input_dir}")
-        sys.exit(1)
-
-    output_dir.mkdir(parents=True, exist_ok=True)
-
-    # Discover and prepare
-    start_time = time.time()
-
-    logger.info(f"Scanning {input_dir} for images and PDFs...")
-    files = discover_files(input_dir)
-    if not files:
-        logger.error(f"No image or PDF files found in {input_dir}")
-        sys.exit(1)
-
-    pdf_count = sum(1 for f in files if f.suffix.lower() == ".pdf")
-    img_count = len(files) - pdf_count
-    logger.info(f"Found {img_count} image(s) and {pdf_count} PDF(s)")
-
-    logger.info("Preparing images (rendering PDFs)...")
-    items = prepare_images(files, input_dir, output_dir, args.pdf_dpi)
-    if not items:
-        logger.error("No processable images after preparation")
-        sys.exit(1)
-
-    logger.info(f"Total images to OCR: {len(items)}")
-
-    # Init vLLM
-    logger.info(f"Initializing vLLM with {MODEL}...")
-    llm = LLM(
-        model=MODEL,
-        trust_remote_code=True,
-        max_model_len=args.max_model_len,
-        gpu_memory_utilization=args.gpu_memory_utilization,
-        limit_mm_per_prompt={"image": 1},
-    )
-
-    sampling_params = SamplingParams(
-        temperature=args.temperature,
-        top_p=args.top_p,
-        max_tokens=args.max_tokens,
-        repetition_penalty=args.repetition_penalty,
-    )
-
-    # Process in batches
-    errors = 0
-    processed = 0
-    total = len(items)
-
-    for batch_start in range(0, total, args.batch_size):
-        batch_end = min(batch_start + args.batch_size, total)
-        batch = items[batch_start:batch_end]
-        batch_num = batch_start // args.batch_size + 1
-        total_batches = (total + args.batch_size - 1) // args.batch_size
-
-        logger.info(f"Batch {batch_num}/{total_batches} ({processed}/{total} done)")
-
-        try:
-            messages = [make_ocr_message(img, task=args.task) for img, _ in batch]
-            outputs = llm.chat(messages, sampling_params)
-
-            for (_, md_path), output in zip(batch, outputs):
-                text = output.outputs[0].text.strip()
-                md_path.parent.mkdir(parents=True, exist_ok=True)
-                md_path.write_text(text, encoding="utf-8")
-                processed += 1
-
-        except Exception as e:
-            logger.error(f"Batch {batch_num} failed: {e}")
-            # Write error markers for failed batch
-            for _, md_path in batch:
-                md_path.parent.mkdir(parents=True, exist_ok=True)
-                md_path.write_text(f"[OCR ERROR: {e}]", encoding="utf-8")
-            errors += len(batch)
-            processed += len(batch)
-
-    elapsed = time.time() - start_time
-    elapsed_str = f"{elapsed / 60:.1f} min" if elapsed > 60 else f"{elapsed:.1f}s"
-
-    logger.info("=" * 50)
-    logger.info(f"Done! Processed {total} images in {elapsed_str}")
-    logger.info(f"  Output: {output_dir}")
-    logger.info(f"  Errors: {errors}")
-    if total > 0:
-        logger.info(f"  Speed: {total / elapsed:.2f} images/sec")
-
-    if args.verbose:
-        import importlib.metadata
-
-        logger.info("--- Package versions ---")
-        for pkg in ["vllm", "transformers", "torch", "pillow", "pymupdf"]:
-            try:
-                logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
-            except importlib.metadata.PackageNotFoundError:
-                logger.info(f"  {pkg}: not installed")
-
-
-if __name__ == "__main__":
-    if len(sys.argv) == 1:
-        print("=" * 60)
-        print("GLM-OCR Bucket Script")
-        print("=" * 60)
-        print("\nOCR images/PDFs from a directory → markdown files.")
-        print("Designed for HF Buckets mounted as volumes (PR #3936).")
-        print()
-        print("Usage:")
-        print("  uv run glm-ocr-bucket.py INPUT_DIR OUTPUT_DIR")
-        print()
-        print("Examples:")
-        print("  uv run glm-ocr-bucket.py ./images ./output")
-        print("  uv run glm-ocr-bucket.py /input /output --task table")
-        print()
-        print("HF Jobs with bucket volumes:")
-        print("  hf jobs uv run --flavor l4x1 -s HF_TOKEN \\")
-        print("    -v bucket/user/ocr-input:/input:ro \\")
-        print("    -v bucket/user/ocr-output:/output \\")
-        print("    glm-ocr-bucket.py /input /output")
-        print()
-        print("For full help: uv run glm-ocr-bucket.py --help")
-        sys.exit(0)

-    main()
qianfan-ocr.py DELETED
@@ -1,628 +0,0 @@
-# /// script
-# requires-python = ">=3.11"
-# dependencies = [
-#     "datasets>=4.0.0",
-#     "huggingface-hub",
-#     "pillow",
-#     "vllm>=0.15.1",
-#     "tqdm",
-#     "toolz",
-#     "torch",
-# ]
-# ///
-
-"""
-Convert document images to markdown using Qianfan-OCR with vLLM.
-
-Qianfan-OCR is a 4.7B end-to-end document intelligence model from Baidu,
-built on InternVL architecture with Qianfan-ViT encoder + Qwen3-4B LLM.
-
-Features:
-- #1 end-to-end model on OmniDocBench v1.5 (93.12) and OlmOCR Bench (79.8)
-- Layout-as-Thought: optional reasoning phase for complex layouts via --think
-- 192 language support (Latin, CJK, Arabic, Cyrillic, and more)
-- Multiple task modes: OCR, table (HTML), formula (LaTeX), chart, scene text
-- Key information extraction with custom prompts
-- 1.024 PPS on A100 with W8A8 quantization
-
-Model: baidu/Qianfan-OCR
-License: Apache 2.0
-Paper: https://arxiv.org/abs/2603.13398
-"""
-
-import argparse
-import base64
-import io
-import json
-import logging
-import os
-import sys
-import time
-from datetime import datetime
-from typing import Any, Dict, List, Union
-
-import torch
-from datasets import load_dataset
-from huggingface_hub import DatasetCard, login
-from PIL import Image
-from toolz import partition_all
-from tqdm.auto import tqdm
-from vllm import LLM, SamplingParams
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-MODEL = "baidu/Qianfan-OCR"
-
-PROMPT_TEMPLATES = {
-    "ocr": "Parse this document to Markdown.",
-    "table": "Extract tables to HTML format.",
-    "formula": "Extract formulas to LaTeX.",
-    "chart": "What trends are shown in this chart?",
-    "scene": "Extract all visible text from the image.",
-    "kie": None,  # requires --custom-prompt
-}
-
-
-def check_cuda_availability():
-    """Check if CUDA is available and exit if not."""
-    if not torch.cuda.is_available():
-        logger.error("CUDA is not available. This script requires a GPU.")
-        sys.exit(1)
-    else:
-        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
-
-
-def extract_content_from_thinking(text: str, include_thinking: bool = False) -> str:
-    """
-    Extract final content from Qianfan-OCR's Layout-as-Thought output.
-
-    When --think is enabled, the model generates layout analysis inside
-    <think>...</think> tags before the final markdown output.
-    """
-    if include_thinking:
-        return text.strip()
-
-    # If no thinking tags, return as-is
-    if "<think>" not in text:
-        return text.strip()
-
-    # Extract everything after </think>
-    think_end = text.find("</think>")
-    if think_end != -1:
-        return text[think_end + 8 :].strip()
-
-    # Thinking started but never closed — return full text
-    logger.warning("Found <think> but no </think>, returning full text")
-    return text.strip()
-
-
-def make_ocr_message(
-    image: Union[Image.Image, Dict[str, Any], str],
-    prompt: str,
-) -> List[Dict]:
-    """Create vLLM chat message with image and prompt."""
-    if isinstance(image, Image.Image):
-        pil_img = image
-    elif isinstance(image, dict) and "bytes" in image:
-        pil_img = Image.open(io.BytesIO(image["bytes"]))
-    elif isinstance(image, str):
-        pil_img = Image.open(image)
-    else:
-        raise ValueError(f"Unsupported image type: {type(image)}")
-
-    pil_img = pil_img.convert("RGB")
-
-    buf = io.BytesIO()
-    pil_img.save(buf, format="PNG")
-    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
-
-    return [
-        {
-            "role": "user",
-            "content": [
-                {"type": "image_url", "image_url": {"url": data_uri}},
-                {"type": "text", "text": prompt},
-            ],
-        }
-    ]
-
-
-def create_dataset_card(
-    source_dataset: str,
-    model: str,
-    num_samples: int,
-    processing_time: str,
-    batch_size: int,
-    max_model_len: int,
-    max_tokens: int,
-    gpu_memory_utilization: float,
-    prompt_mode: str,
-    think: bool,
-    include_thinking: bool,
-    image_column: str = "image",
-    split: str = "train",
-) -> str:
-    """Create a dataset card documenting the OCR process."""
-    model_name = model.split("/")[-1]
-
-    return f"""---
-tags:
-- ocr
-- document-processing
-- qianfan-ocr
-- markdown
-- uv-script
-- generated
----
-
-# Document OCR using {model_name}
-
-This dataset contains OCR results from [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using Qianfan-OCR, Baidu's 4.7B end-to-end document intelligence model.
-
-## Processing Details
-
-- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
-- **Model**: [{model}](https://huggingface.co/{model})
-- **Number of Samples**: {num_samples:,}
-- **Processing Time**: {processing_time}
-- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
-
-### Configuration
-
-- **Image Column**: `{image_column}`
-- **Output Column**: `markdown`
-- **Dataset Split**: `{split}`
-- **Batch Size**: {batch_size}
-- **Prompt Mode**: {prompt_mode}
-- **Layout-as-Thought**: {"Enabled" if think else "Disabled"}
-- **Thinking Traces**: {"Included" if include_thinking else "Excluded"}
-- **Max Model Length**: {max_model_len:,} tokens
-- **Max Output Tokens**: {max_tokens:,}
-- **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
-
-## Model Information
-
-Qianfan-OCR key capabilities:
-- #1 end-to-end model on OmniDocBench v1.5 (93.12)
-- #1 on OlmOCR Bench (79.8)
-- 192 language support
-- Layout-as-Thought reasoning for complex documents
-- Document parsing, table extraction, formula recognition, chart understanding
-- Key information extraction
-
-## Dataset Structure
-
-The dataset contains all original columns plus:
-- `markdown`: The extracted text in markdown format
-- `inference_info`: JSON list tracking all OCR models applied
-
-## Reproduction
-
-```bash
-uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \\
-    {source_dataset} \\
-    <output-dataset> \\
-    --image-column {image_column} \\
-    --prompt-mode {prompt_mode} \\
-    --batch-size {batch_size}{" --think" if think else ""}
-```
-
-Generated with [UV Scripts](https://huggingface.co/uv-scripts)
-"""
-
-
-def main(
-    input_dataset: str,
-    output_dataset: str,
-    image_column: str = "image",
-    batch_size: int = 8,
-    max_model_len: int = 16384,
-    max_tokens: int = 8192,
-    temperature: float = 0.0,
-    top_p: float = 1.0,
-    gpu_memory_utilization: float = 0.85,
-    hf_token: str = None,
-    split: str = "train",
-    max_samples: int = None,
-    private: bool = False,
-    shuffle: bool = False,
-    seed: int = 42,
-    prompt_mode: str = "ocr",
-    think: bool = False,
-    include_thinking: bool = False,
-    custom_prompt: str = None,
-    output_column: str = "markdown",
-    config: str = None,
-    create_pr: bool = False,
-    verbose: bool = False,
-):
-    """Process images from HF dataset through Qianfan-OCR model."""
-
-    check_cuda_availability()
-    start_time = datetime.now()
-
-    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
-    if HF_TOKEN:
-        login(token=HF_TOKEN)
-
-    # Build prompt
-    if custom_prompt:
-        prompt = custom_prompt
-        logger.info(f"Using custom prompt: {prompt[:80]}...")
-    else:
-        if prompt_mode == "kie":
-            logger.error("--prompt-mode kie requires --custom-prompt")
-            sys.exit(1)
-        prompt = PROMPT_TEMPLATES[prompt_mode]
-        logger.info(f"Using prompt mode: {prompt_mode}")
-
-    if think:
-        prompt = prompt + "<think>"
-        logger.info("Layout-as-Thought enabled (appending <think> to prompt)")
-
-    logger.info(f"Using model: {MODEL}")
-
-    # Load dataset
-    logger.info(f"Loading dataset: {input_dataset}")
-    dataset = load_dataset(input_dataset, split=split)
-
-    if image_column not in dataset.column_names:
-        raise ValueError(
-            f"Column '{image_column}' not found. Available: {dataset.column_names}"
-        )
-
-    if shuffle:
-        logger.info(f"Shuffling dataset with seed {seed}")
-        dataset = dataset.shuffle(seed=seed)
-
-    if max_samples:
-        dataset = dataset.select(range(min(max_samples, len(dataset))))
-        logger.info(f"Limited to {len(dataset)} samples")
-
-    # Initialize vLLM
-    logger.info("Initializing vLLM with Qianfan-OCR")
-    logger.info("This may take a few minutes on first run...")
-    llm = LLM(
-        model=MODEL,
-        trust_remote_code=True,
-        max_model_len=max_model_len,
-        gpu_memory_utilization=gpu_memory_utilization,
-        limit_mm_per_prompt={"image": 1},
-        enforce_eager=False,
-    )
-
-    sampling_params = SamplingParams(
-        temperature=temperature,
-        top_p=top_p,
-        max_tokens=max_tokens,
-    )
-
-    logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
-    logger.info(f"Output will be written to column: {output_column}")
-
-    # Process images in batches
-    all_outputs = []
-
-    for batch_indices in tqdm(
-        partition_all(batch_size, range(len(dataset))),
-        total=(len(dataset) + batch_size - 1) // batch_size,
-        desc="Qianfan-OCR processing",
-    ):
-        batch_indices = list(batch_indices)
-        batch_images = [dataset[i][image_column] for i in batch_indices]
-
-        try:
-            batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
-            outputs = llm.chat(batch_messages, sampling_params)
-
-            for output in outputs:
-                text = output.outputs[0].text.strip()
-                if think:
-                    text = extract_content_from_thinking(text, include_thinking)
-                all_outputs.append(text)
-
-        except Exception as e:
-            logger.error(f"Error processing batch: {e}")
-            all_outputs.extend(["[OCR ERROR]"] * len(batch_images))
-
-    # Calculate processing time
-    processing_duration = datetime.now() - start_time
-    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
-
-    # Add output column
-    logger.info(f"Adding '{output_column}' column to dataset")
-    dataset = dataset.add_column(output_column, all_outputs)
-
-    # Handle inference_info tracking
-    inference_entry = {
-        "model_id": MODEL,
-        "model_name": "Qianfan-OCR",
-        "column_name": output_column,
-        "timestamp": datetime.now().isoformat(),
-        "prompt_mode": prompt_mode if not custom_prompt else "custom",
-        "think": think,
-        "temperature": temperature,
-        "max_tokens": max_tokens,
-    }
-
-    if "inference_info" in dataset.column_names:
-        logger.info("Updating existing inference_info column")
-
-        def update_inference_info(example):
-            try:
-                existing_info = (
-                    json.loads(example["inference_info"])
-                    if example["inference_info"]
-                    else []
-                )
-            except (json.JSONDecodeError, TypeError):
-                existing_info = []
-            existing_info.append(inference_entry)
-            return {"inference_info": json.dumps(existing_info)}
-
-        dataset = dataset.map(update_inference_info)
-    else:
-        logger.info("Creating new inference_info column")
-        inference_list = [json.dumps([inference_entry])] * len(dataset)
-        dataset = dataset.add_column("inference_info", inference_list)
-
-    # Push to hub with retry and XET fallback
-    logger.info(f"Pushing to {output_dataset}")
-    commit_msg = f"Add Qianfan-OCR results ({len(dataset)} samples)" + (
-        f" [{config}]" if config else ""
-    )
-    max_retries = 3
-    for attempt in range(1, max_retries + 1):
-        try:
-            if attempt > 1:
-                logger.warning("Disabling XET (fallback to HTTP upload)")
-                os.environ["HF_HUB_DISABLE_XET"] = "1"
-            dataset.push_to_hub(
-                output_dataset,
-                private=private,
-                token=HF_TOKEN,
-                max_shard_size="500MB",
-                **({"config_name": config} if config else {}),
-                create_pr=create_pr,
-                commit_message=commit_msg,
-            )
-            break
-        except Exception as e:
-            logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
-            if attempt < max_retries:
-                delay = 30 * (2 ** (attempt - 1))
-                logger.info(f"Retrying in {delay}s...")
-                time.sleep(delay)
-            else:
-                logger.error("All upload attempts failed. OCR results are lost.")
-                sys.exit(1)
-
-    # Create and push dataset card (skip when creating PR to avoid conflicts)
-    if not create_pr:
-        logger.info("Creating dataset card")
-        card_content = create_dataset_card(
-            source_dataset=input_dataset,
-            model=MODEL,
-            num_samples=len(dataset),
-            processing_time=processing_time_str,
-            batch_size=batch_size,
-            max_model_len=max_model_len,
-            max_tokens=max_tokens,
-            gpu_memory_utilization=gpu_memory_utilization,
-            prompt_mode=prompt_mode if not custom_prompt else "custom",
-            think=think,
-            include_thinking=include_thinking,
-            image_column=image_column,
-            split=split,
-        )
-        card = DatasetCard(card_content)
-        card.push_to_hub(output_dataset, token=HF_TOKEN)
-
-    logger.info("Qianfan-OCR processing complete!")
-    logger.info(
-        f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
-    )
-    logger.info(f"Processing time: {processing_time_str}")
-    logger.info(
-        f"Processing speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec"
-    )
-
-    if verbose:
-        import importlib.metadata
-
-        logger.info("--- Resolved package versions ---")
-        for pkg in ["vllm", "transformers", "torch", "datasets", "pyarrow", "pillow"]:
-            try:
-                logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
-            except importlib.metadata.PackageNotFoundError:
-                logger.info(f"  {pkg}: not installed")
-        logger.info("--- End versions ---")
-
-
-if __name__ == "__main__":
-    if len(sys.argv) == 1:
-        print("=" * 80)
-        print("Qianfan-OCR - End-to-End Document Intelligence")
-        print("=" * 80)
-        print("\n4.7B model from Baidu, #1 on OmniDocBench v1.5 (93.12)")
-        print("\nFeatures:")
-        print("- #1 end-to-end model on OmniDocBench v1.5 and OlmOCR Bench")
-        print("- Layout-as-Thought reasoning for complex documents (--think)")
-        print("- 192 language support")
-        print("- Multiple modes: OCR, table (HTML), formula (LaTeX), chart, scene text")
-        print("- Key information extraction with custom prompts")
-        print("\nExample usage:")
-        print("\n1. Basic OCR:")
-        print("   uv run qianfan-ocr.py input-dataset output-dataset")
-        print("\n2. With Layout-as-Thought (complex documents):")
-        print("   uv run qianfan-ocr.py docs output --think")
-        print("\n3. Table extraction:")
-        print("   uv run qianfan-ocr.py docs output --prompt-mode table")
-        print("\n4. Formula extraction:")
-        print("   uv run qianfan-ocr.py docs output --prompt-mode formula")
-        print("\n5. Key information extraction:")
-        print(
-            '   uv run qianfan-ocr.py invoices output --prompt-mode kie --custom-prompt "Extract: name, date, total. Output JSON."'
-        )
-        print("\n6. Running on HF Jobs:")
-        print("   hf jobs uv run --flavor l4x1 \\")
-        print("       -s HF_TOKEN \\")
-        print(
-            "       https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \\"
-        )
-        print("       input-dataset output-dataset --max-samples 10")
-        print("\nFor full help, run: uv run qianfan-ocr.py --help")
-        sys.exit(0)
-
-    parser = argparse.ArgumentParser(
-        description="Document OCR using Qianfan-OCR (4.7B, #1 on OmniDocBench v1.5)",
-        formatter_class=argparse.RawDescriptionHelpFormatter,
-        epilog="""
-Prompt modes:
-  ocr      Document parsing to Markdown (default)
-  table    Table extraction to HTML format
-  formula  Formula recognition to LaTeX
-  chart    Chart understanding and analysis
-  scene    Scene text extraction
-  kie      Key information extraction (requires --custom-prompt)
-
-Examples:
-  uv run qianfan-ocr.py my-docs analyzed-docs
-  uv run qianfan-ocr.py docs output --think --max-samples 50
-  uv run qianfan-ocr.py docs output --prompt-mode table
-  uv run qianfan-ocr.py invoices data --prompt-mode kie --custom-prompt "Extract: name, date, total."
-""",
-    )
-
-    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
-    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
-    parser.add_argument(
-        "--image-column",
-        default="image",
-        help="Column containing images (default: image)",
-    )
-    parser.add_argument(
-        "--batch-size",
-        type=int,
-        default=8,
-        help="Batch size for processing (default: 8)",
-    )
-    parser.add_argument(
-        "--max-model-len",
-        type=int,
-        default=16384,
-        help="Maximum model context length (default: 16384, reduce to 8192 if OOM on L4)",
-    )
-    parser.add_argument(
-        "--max-tokens",
-        type=int,
-        default=8192,
-        help="Maximum tokens to generate (default: 8192)",
-    )
-    parser.add_argument(
-        "--temperature",
-        type=float,
-        default=0.0,
-        help="Sampling temperature (default: 0.0, deterministic)",
-    )
-    parser.add_argument(
-        "--top-p",
-        type=float,
-        default=1.0,
-        help="Top-p sampling parameter (default: 1.0)",
-    )
-    parser.add_argument(
-        "--gpu-memory-utilization",
-        type=float,
-        default=0.85,
-        help="GPU memory utilization (default: 0.85)",
-    )
-    parser.add_argument("--hf-token", help="Hugging Face API token")
-    parser.add_argument(
-        "--split", default="train", help="Dataset split to use (default: train)"
-    )
-    parser.add_argument(
-        "--max-samples",
-        type=int,
-        help="Maximum number of samples to process (for testing)",
-    )
-    parser.add_argument(
-        "--private", action="store_true", help="Make output dataset private"
-    )
-    parser.add_argument(
-        "--shuffle", action="store_true", help="Shuffle dataset before processing"
-    )
-    parser.add_argument(
-        "--seed",
-        type=int,
-        default=42,
-        help="Random seed for shuffling (default: 42)",
-    )
-    parser.add_argument(
-        "--prompt-mode",
-        choices=list(PROMPT_TEMPLATES.keys()),
-        default="ocr",
-        help="Prompt mode (default: ocr)",
-    )
-    parser.add_argument(
-        "--think",
-        action="store_true",
-        help="Enable Layout-as-Thought reasoning (appends <think> to prompt)",
-    )
-    parser.add_argument(
-        "--include-thinking",
-        action="store_true",
-        help="Include thinking traces in output (default: only final content)",
-    )
-    parser.add_argument(
-        "--custom-prompt",
-        help="Custom prompt text (overrides --prompt-mode)",
-    )
-    parser.add_argument(
-        "--output-column",
-        default="markdown",
-        help="Column name for output text (default: markdown)",
-    )
-    parser.add_argument(
-        "--config",
-        help="Config/subset name when pushing to Hub (for benchmarking multiple models)",
-    )
-    parser.add_argument(
-        "--create-pr",
-        action="store_true",
-        help="Create a pull request instead of pushing directly",
-    )
-    parser.add_argument(
-        "--verbose",
-        action="store_true",
-        help="Log resolved package versions after processing",
-    )
-
-    args = parser.parse_args()
-
-    main(
-        input_dataset=args.input_dataset,
-        output_dataset=args.output_dataset,
-        image_column=args.image_column,
-        batch_size=args.batch_size,
-        max_model_len=args.max_model_len,
-        max_tokens=args.max_tokens,
-        temperature=args.temperature,
-        top_p=args.top_p,
-        gpu_memory_utilization=args.gpu_memory_utilization,
-        hf_token=args.hf_token,
-        split=args.split,
-        max_samples=args.max_samples,
-        private=args.private,
-        shuffle=args.shuffle,
-        seed=args.seed,
-        prompt_mode=args.prompt_mode,
-        think=args.think,
-        include_thinking=args.include_thinking,
-        custom_prompt=args.custom_prompt,
-        output_column=args.output_column,
-        config=args.config,
-        create_pr=args.create_pr,
-        verbose=args.verbose,
-    )