SSL-HWD (A Large Scale Handwritten Image Dataset)
Dataset Description
Dataset Summary
SSL-HWD is a large-scale handwritten text dataset introduced in the paper "Learning Beyond Labels: Self-Supervised Handwritten Text Recognition" (WACV 2026). The dataset comprises 10 million word-level handwritten images from 852 writers across diverse domains including Physics, Computer Science, Biology, Mathematics, and more.
The dataset is specifically designed to support self-supervised learning approaches for Handwritten Text Recognition (HTR), addressing the critical challenge of reducing dependence on large volumes of labeled data.
Dataset Composition
- Total Words: 10 million word-level images
- Writers: 852 unique contributors
- Pages: 81,280 scanned document pages
- Domains: 20+ domains including sciences, literature, and mathematics
- Labeled Subset: 2.08M words (20.8%) with ground truth transcriptions
- Unlabeled Subset: 7.92M words (79.2%) for self-supervised pretraining
- Unique Vocabulary: 107,813 unique words
Key Features
Diversity and Complexity: The dataset includes challenging real-world scenarios:
- Texts with different font colors (varying ink and pen usage)
- Texts with difficult backgrounds (lines, noise interference)
- Texts with distorted characters (irregular strokes, structural inconsistencies)
- Texts with blurring effects (motion or focus issues)
- Texts with highlighted backgrounds (color markings obscuring content)
Quality Assurance:
- All samples automatically annotated using Amazon Textract with confidence ≥ 99%
- Labeled subset manually verified by language experts
- High-quality word segmentation with precise bounding boxes
Comparison with Existing Datasets
| Dataset | Pages | Writers | Words | Unique Words |
|---|---|---|---|---|
| IAM | 1.5K | 657 | 115K | 10.5K |
| GNHK | 687 | - | 39K | 12.3K |
| IIIT-HW-English-Word | 20.8K | 1,215 | 757K | 174K |
| SSL-HWD (Ours) | 81.2K | 852 | 10M | 107K |
Vocabulary Distribution
| Category | SSL-HWD |
|---|---|
| Alphabetic Words | 61,088 |
| Numeric Words | 4,981 |
| Stop-words | 457 |
| Other Words | 41,287 |
| Total Unique | 107,813 |
Dataset Structure
Data Organization
SSL-HWD/
├── labeled/                  # 2.08M labeled samples
│   ├── writer1/
│   │   ├── writer1.csv       # Ground truth transcriptions
│   │   ├── writer1_1.png
│   │   ├── writer1_2.png
│   │   └── ...
│   ├── writer2/
│   └── ...
└── unlabeled/                # 7.92M unlabeled samples
    ├── writer1/
    │   ├── writer1_1.png
    │   ├── writer1_2.png
    │   └── ...
    └── ...
Data Fields
CSV Format (for labeled data):
- image_filename (string): Name of the word image file
- transcription (string): Ground truth text transcription
Example:
image_filename,transcription
writer1_1.png,handwritten
writer1_2.png,recognition
writer1_3.png,dataset
Data Splits
The labeled subset (2.08M samples) is divided as follows:
- Training: 60% (1.25M samples)
- Testing: 40% (0.83M samples)
The unlabeled subset (7.92M samples) is used for self-supervised pretraining.
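No explicit split files are described in this card, so the sketch below only illustrates one way to materialize a random 60/40 image-level split over the labeled folders; the official split used in the paper may be assigned differently (for example, per writer).
import random
from pathlib import Path

# Illustrative 60/40 split over labeled word images; not the paper's official split.
random.seed(0)
labeled_root = Path("labeled")

train_images, test_images = [], []
for writer_dir in sorted(p for p in labeled_root.iterdir() if p.is_dir()):
    images = sorted(writer_dir.glob("*.png"))
    random.shuffle(images)
    cut = int(0.6 * len(images))
    train_images.extend(images[:cut])
    test_images.extend(images[cut:])

print(f"train: {len(train_images)} images, test: {len(test_images)} images")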
Supported Tasks
1. Handwriting Text Recognition (HTR)
Train models to recognize handwritten text from word images.
2. Self-Supervised Learning
Use the large unlabeled subset for pretraining with methods like contrastive learning.
3. Writer Identification
Identify writers based on handwriting characteristics (852 unique writers); a sketch of deriving writer labels from the folder layout follows this list.
4. Domain Adaptation
Transfer learning across different handwriting styles and domains.
5. Semi-Supervised Learning
Combine small labeled and large unlabeled subsets for improved performance.
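For writer identification (task 3 above), labels can be derived directly from the per-writer folder layout described earlier. A minimal sketch; the helper name below is illustrative, not part of the dataset tooling.
from pathlib import Path

# Derive (image_path, writer_id) pairs for writer identification from the
# per-writer folders; `writer_id_samples` is an illustrative helper.
def writer_id_samples(root="labeled"):
    root = Path(root)
    writers = sorted(d.name for d in root.iterdir() if d.is_dir())
    writer_to_id = {name: idx for idx, name in enumerate(writers)}
    samples = []
    for name in writers:
        for img_path in (root / name).glob("*.png"):
            samples.append((img_path, writer_to_id[name]))
    return samples, writer_to_id

samples, writer_to_id = writer_id_samples("labeled")
print(f"{len(writer_to_id)} writers, {len(samples)} images")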
Usage
Loading the Dataset
from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("your-username/ssl-hwd")
# Access labeled data
labeled = dataset['labeled']
# Access unlabeled data for self-supervised learning
unlabeled = dataset['unlabeled']
Basic Example
from PIL import Image
import pandas as pd
from pathlib import Path
# Load a writer's data
writer_folder = Path("labeled/writer1")
df = pd.read_csv(writer_folder / "writer1.csv")
# Load first image and transcription
img_name = df.iloc[0]['image_filename']
transcription = df.iloc[0]['transcription']
image = Image.open(writer_folder / img_name)
print(f"Transcription: {transcription}")
PyTorch DataLoader
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import pandas as pd
from pathlib import Path

class SSLHWDDataset(Dataset):
    """Word images and transcriptions for a single writer folder."""

    def __init__(self, writer_folder, transform=None):
        self.folder = Path(writer_folder)
        csv_file = list(self.folder.glob("*.csv"))[0]  # e.g. writer1.csv
        self.data = pd.read_csv(csv_file)
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img_name = self.data.iloc[idx]['image_filename']
        transcription = self.data.iloc[idx]['transcription']
        img_path = self.folder / img_name
        image = Image.open(img_path).convert('RGB')
        if self.transform:
            image = self.transform(image)
        return image, transcription

# Usage
dataset = SSLHWDDataset('labeled/writer1')
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
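Word crops vary in size, so the default collate function cannot batch raw PIL images; passing a transform that resizes and tensorizes each crop makes batching work. The 32×128 target resolution below is an arbitrary illustrative choice, not one prescribed by the dataset.
from torchvision import transforms

# Resize crops to a fixed resolution and convert to tensors so the default
# collate function can stack them; 32x128 is an arbitrary example size.
transform = transforms.Compose([
    transforms.Resize((32, 128)),
    transforms.ToTensor(),
])

dataset = SSLHWDDataset('labeled/writer1', transform=transform)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

images, transcriptions = next(iter(dataloader))
print(images.shape)  # e.g. torch.Size([32, 3, 32, 128])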
Benchmark Results
State-of-the-Art Performance (from LoGo-HTR paper)
IAM Dataset:
- With 80% labeled data: WER 11.93%, CER 2.31%
- With 100% labeled data: WER 10.27%, CER 2.01%
GNHK Dataset:
- With 80% labeled data: WER 19.69%, CER 9.05%
- With 100% labeled data: WER 12.07%, CER 7.20%
RIMES Dataset:
- With 80% labeled data: WER 6.15%, CER 1.89%
- With 100% labeled data: WER 5.50%, CER 1.78%
LAM Dataset (line-level):
- With 80% labeled data: WER 7.2%, CER 3.2%
- With 100% labeled data: WER 6.3%, CER 2.39%
Cross-Dataset Generalization
The dataset demonstrates strong cross-dataset transfer capabilities:
- SSL-HWD → IAM: WER 13.2%, CER 2.9%
- SSL-HWD → GNHK: WER 10.1%, CER 6.8%
- SSL-HWD → RIMES: WER 11.2%, CER 3.5%
- SSL-HWD → LAM: WER 16.4%, CER 7.2%
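For reference, WER and CER are typically reported as the word-level and character-level Levenshtein edit distance between a prediction and its ground truth, normalized by the reference length. A minimal, dependency-free sketch:
# WER/CER as reported above: edit distance normalized by reference length,
# computed over words (WER) or characters (CER).
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    return levenshtein(list(reference), list(hypothesis)) / max(len(reference), 1)

def wer(reference, hypothesis):
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / max(len(ref_words), 1)

print(cer("handwritten", "handwriten"))                        # 1 edit / 11 chars ≈ 0.091
print(wer("learning beyond labels", "learning beyond label"))  # 1 edit / 3 words ≈ 0.333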
Dataset Creation
Source Data
The dataset was curated from publicly available digitized manuscripts from web sources, selected for being fully or substantially handwritten. Documents span:
- Personal diaries
- Academic notes
- Historical correspondence
- Scientific manuscripts
- Mathematical writings
- Literature and more
Data Quality
- Diverse Sources: 852 unique writers across 20+ domains
- Real-world Challenges: Includes blur, noise, distortions, and background interference
Applications
Self-Supervised Learning (Primary Use)
Use the 7.92M unlabeled samples for pretraining with methods such as the following (a contrastive sketch appears after this list):
- Contrastive learning (SimCLR, MoCo)
- Masked image modeling
- Local-global objectives (as in LoGo-HTR)
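As an illustration of the contrastive route, the sketch below produces two augmented views of each unlabeled word crop and optimizes a SimCLR-style NT-Xent loss. The ResNet-18 encoder, augmentations, and temperature are placeholder assumptions, not the configuration used in the paper.
import torch
import torch.nn.functional as F
from torch import nn
from torchvision import transforms
from torchvision.models import resnet18

# Placeholder augmentations producing two "views" of the same word crop.
augment = transforms.Compose([
    transforms.RandomResizedCrop((32, 128), scale=(0.6, 1.0)),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.GaussianBlur(3),
    transforms.ToTensor(),
])

# Placeholder encoder with a small projection head (not the paper's architecture).
encoder = resnet18(weights=None)
encoder.fc = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))    # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def train_step(batch, optimizer):
    # `batch` is a list of PIL word images drawn from the unlabeled/ split.
    v1 = torch.stack([augment(img) for img in batch])
    v2 = torch.stack([augment(img) for img in batch])
    loss = nt_xent(encoder(v1), encoder(v2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.AdamW(encoder.parameters(), lr=3e-4)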
Semi-Supervised Learning
Combine labeled and unlabeled subsets for improved performance with limited annotations.
Few-Shot Learning
Train models with minimal labeled data by leveraging pretrained representations.
Transfer Learning
Pretrain on SSL-HWD and fine-tune on domain-specific datasets.
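A minimal transfer-learning sketch under assumed names: it loads a hypothetical encoder checkpoint pretrained on SSL-HWD, freezes most of the backbone, and attaches a new task head. Real HTR fine-tuning would typically use a sequence decoder (e.g. CTC or attention) rather than a single linear layer.
import torch
from torch import nn
from torchvision.models import resnet18

# Hypothetical setup: the checkpoint path, ResNet-18 backbone, and linear head
# are illustrative assumptions, not part of the release.
encoder = resnet18(weights=None)
encoder.fc = nn.Identity()  # keep the 512-d feature extractor

# state = torch.load("ssl_hwd_pretrained_encoder.pt")  # assumed checkpoint name
# encoder.load_state_dict(state)

# Freeze everything except the last residual stage.
for name, param in encoder.named_parameters():
    param.requires_grad = name.startswith("layer4")

num_classes = 100  # e.g. target-dataset vocabulary size (placeholder)
model = nn.Sequential(encoder, nn.Linear(512, num_classes))
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)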
Limitations and Considerations
Known Limitations
- Language: Primarily English handwritten text
- Geographic Bias: Predominantly Western handwriting styles
- Historical Period: Concentrated in specific time periods
- Domain Coverage: Although diverse, the dataset may not represent all handwriting variations
Ethical Considerations
- Dataset contains historical documents and handwritten materials
- Personal information may be present in some samples
- Users should be aware of privacy considerations when using this data
Citation
If you use the SSL-HWD dataset in your research, please cite:
@inproceedings{mitra2026learning,
title={Learning Beyond Labels: Self-Supervised Handwritten Text Recognition},
author={Mitra, Shree and Mondal, Ajoy and Jawahar, C. V.},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2026}
}
Additional Resources
- Project Website: https://logo-ssl.github.io/
License
This dataset is released under the Apache License 2.0.
Acknowledgments
This work is supported by MeitY, Government of India, through the NLTM-Bhashini project.
Contact
For questions or issues regarding the dataset:
- Authors: Shree Mitra, Ajoy Mondal, C.V. Jawahar
- Institution: IIIT Hyderabad
- Email: [email protected]
Dataset Version: 1.0
Last Updated: January 2026
Status: Active