GlotOCR Bench: OCR Models Still Struggle Beyond a Handful of Unicode Scripts
Abstract
Vision-language models show limited generalization in OCR across diverse scripts, with performance closely tied to pretraining coverage and struggling with unfamiliar writing systems.
Optical character recognition (OCR) has advanced rapidly with the rise of vision-language models, yet evaluation has remained concentrated on a small cluster of high- and mid-resource scripts. We introduce GlotOCR Bench, a comprehensive benchmark evaluating OCR generalization across 100+ Unicode scripts. Our benchmark comprises clean and degraded image variants rendered from real multilingual texts. Images are rendered using fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both left-to-right (LTR) and right-to-left (RTL) scripts. Samples of the rendered images were manually reviewed to verify correct rendering across all scripts. We evaluate a broad suite of open-weight and proprietary vision-language models and find that most perform well on fewer than ten scripts; even the strongest frontier models fail to generalize beyond thirty. Performance broadly tracks script-level pretraining coverage, suggesting that current OCR systems rely on language model pretraining as much as on visual recognition. Models confronted with unfamiliar scripts either produce random noise or hallucinate characters from similar scripts they already know. We release the benchmark and pipeline for reproducibility. Pipeline Code: https://github.com/cisnlp/glotocr-bench, Benchmark: https://hf.co/datasets/cis-lmu/glotocr-bench.
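The abstract notes that models facing unfamiliar scripts often hallucinate characters from similar scripts they already know. One lightweight way to surface this failure mode is to compare the per-script character distribution of an OCR hypothesis against the reference. The sketch below is a heuristic (not part of the released pipeline): it infers a character's script from the first word of its Unicode name via the standard library, which usually but not always matches the formal Unicode Script property.

```python
import unicodedata
from collections import Counter

def script_histogram(text: str) -> Counter:
    """Rough per-script character counts, inferred from Unicode names.

    The first word of a character's Unicode name usually matches its
    script (e.g. 'TIFINAGH LETTER YA' -> 'TIFINAGH'). This is only a
    heuristic approximation of the Unicode Script property.
    """
    counts: Counter = Counter()
    for ch in text:
        if ch.isspace():
            continue
        try:
            counts[unicodedata.name(ch).split()[0]] += 1
        except ValueError:  # unassigned / unnamed code point
            counts["UNKNOWN"] += 1
    return counts
```

Comparing `script_histogram(reference)` with `script_histogram(hypothesis)` makes script-confusion hallucinations visible: a Tifinagh reference transcribed mostly with Latin or Greek characters shows up immediately as mass shifted between histogram keys.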
Community
GlotOCR Bench is a benchmark for evaluating OCR across different Unicode scripts.
Benchmark: https://huggingface.co/datasets/cis-lmu/GlotOCR-bench
Results: https://huggingface.co/datasets/cis-lmu/GlotOCR-bench-v1.0-results
For Tifinagh, all results are zeros except one 1% SA by Gemini 😂
Great work 👏
I’ve uploaded the full results with CER here:
https://github.com/cisnlp/GlotOCR-bench/releases/download/v1-arxiv/res_v1.0.zip
I’m also gradually adding them per model to Hugging Face:
https://huggingface.co/datasets/cis-lmu/GlotOCR-bench-v1.0-results
The zip file includes both per-sample CER results and aggregated metrics.
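For readers unfamiliar with the metric, character error rate (CER) is the Levenshtein (edit) distance between the OCR hypothesis and the reference transcription, normalized by the reference length. A minimal stdlib sketch of the standard definition (not necessarily the exact script used to produce the released results):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance / len(reference).

    Standard Wagner-Fischer dynamic program over characters,
    O(len(reference) * len(hypothesis)) time, O(len(hypothesis)) space.
    """
    m, n = len(reference), len(hypothesis)
    if m == 0:
        return float(n > 0)
    prev = list(range(n + 1))  # edit distances for the empty reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            substitution = reference[i - 1] != hypothesis[j - 1]
            curr[j] = min(prev[j] + 1,              # deletion
                          curr[j - 1] + 1,          # insertion
                          prev[j - 1] + substitution)
        prev = curr
    return prev[n] / m
```

Note that CER can exceed 1.0 when the hypothesis is much longer than the reference, which is why hallucinated output on unseen scripts can score arbitrarily badly.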
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- KazakhOCR: A Synthetic Benchmark for Evaluating Multimodal Models in Low-Resource Kazakh Script OCR (2026)
- Efficient Domain Adaptation for Text Line Recognition via Decoupled Language Models (2026)
- JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding (2026)
- AtlasOCR: Building the First Open-Source Darija OCR Model with Vision Language Models (2026)
- Designing Production-Scale OCR for India: Multilingual and Domain-Specific Systems (2026)
- ZeroSense: How Vision Matters in Long Context Compression (2026)
- MDPBench: A Benchmark for Multilingual Document Parsing in Real-World Scenarios (2026)