RSL-SHRUTI Sangraha — Cleaned Classical Indian Texts

SHRUTI (श्रुति — That Which Is Heard) Sangraha is a quality-verified multilingual corpus of classical Indian texts, cleaned and prepared for NLP and educational applications.

High-quality, verified classical text data across 6 Indian languages — ready for tokenizer training, language modeling, and IKS research.

Dataset Summary

| Property | Value |
|----------|-------|
| Languages | Sanskrit, Hindi, Tamil, Telugu, Kannada, Malayalam |
| Total Files | 6 (one per language, verified versions) |
| Total Size | ~718 MB |
| Total Lines | ~1.19 million |
| Format | Plain text (.txt), UTF-8, one passage per line |
| Source | Public-domain classical texts |
| License | CC BY-NC 4.0 |
| Creator | Prof. Santhosh Sivasubramani, INTRINSIC Lab, RSL Foundation, IIT Delhi |

Files

| File | Language | Lines | Size |
|------|----------|------:|-----:|
| sangraha_sanskrit_verified.txt | Sanskrit (sa) | 76,066 | 56 MB |
| sangraha_hindi_verified.txt | Hindi (hi) | 212,842 | 127 MB |
| sangraha_tamil_verified.txt | Tamil (ta) | 237,990 | 135 MB |
| sangraha_telugu_verified.txt | Telugu (te) | 209,850 | 133 MB |
| sangraha_kannada_verified.txt | Kannada (kn) | 240,901 | 132 MB |
| sangraha_malayalam_verified.txt | Malayalam (ml) | 217,144 | 136 MB |

Note: We ship the "verified" variants. Each file has been quality-checked beyond initial cleaning to verify script correctness, remove OCR artifacts, and validate passage boundaries.

Processing Pipeline

  1. Source collection — Classical texts gathered from public-domain digital libraries
  2. Cleaning — Script normalization, encoding fixes, whitespace standardization
  3. Verification — Manual spot-check + automated validation for script correctness, duplicate removal, and passage boundary integrity
  4. Output — One line per passage, UTF-8 encoded
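The verification step (3) can be approximated with a simple Unicode-block check. The project's actual validation tooling is not published, so the ranges and the pass threshold below are illustrative assumptions, not the pipeline's real implementation:

```python
# Illustrative script-validation sketch. The Unicode block ranges are
# standard, but their use here (and the 90% threshold) is an assumption.
SCRIPT_RANGES = {
    "sa": (0x0900, 0x097F),  # Devanagari (Sanskrit)
    "hi": (0x0900, 0x097F),  # Devanagari (Hindi)
    "ta": (0x0B80, 0x0BFF),  # Tamil
    "te": (0x0C00, 0x0C7F),  # Telugu
    "kn": (0x0C80, 0x0CFF),  # Kannada
    "ml": (0x0D00, 0x0D7F),  # Malayalam
}

def script_ratio(text: str, lang: str) -> float:
    """Fraction of letters in `text` that fall in the expected Unicode block."""
    lo, hi = SCRIPT_RANGES[lang]
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(lo <= ord(c) <= hi for c in letters) / len(letters)

# A passage might pass validation if, say, 90% of its letters match.
print(script_ratio("தமிழ்", "ta"))  # → 1.0
```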

How to Use

```python
from datasets import load_dataset

# Load the full corpus
ds = load_dataset("RSL-INTRINSICLab-IIT/RSL-SHRUTI-Sangraha")

# Or load individual language files directly
with open("sangraha_tamil_verified.txt", encoding="utf-8") as f:
    passages = f.readlines()
print(f"Tamil passages: {len(passages)}")
```
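`readlines()` pulls an entire file (over 130 MB for the larger languages) into memory at once. For streaming workloads, a small generator keeps memory flat; `iter_passages` below is a hypothetical helper, not part of the dataset:

```python
def iter_passages(path):
    """Yield non-empty passages from a corpus file, one line at a time."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line
```

This suits one-pass jobs such as tokenizer training or corpus statistics, where the full list of passages never needs to be materialised.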

Use Cases

  • Tokenizer training — Used to train RSL-BHARATI-v3 (32K vocab, 7-language IKS tokenizer)
  • Language modeling — Pre-training or fine-tuning on classical Indian language data
  • IKS NLP research — Named entity recognition, topic modeling, or text classification on classical texts
  • Educational — Reference corpus for Indian language pedagogy and curriculum development
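The exact training recipe for RSL-BHARATI-v3 is not documented in this card; as a minimal sketch of the tokenizer-training use case, a 32K BPE vocabulary could be trained with the Hugging Face `tokenizers` library. The inline sample here stands in for the six corpus files:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Toy in-line sample; in practice pass the six *_verified.txt files to
# tokenizer.train(files, trainer) instead. Hyperparameters are assumptions.
sample = [
    "धर्मो रक्षति रक्षितः",
    "அறத்தால் வருவதே இன்பம்",
] * 50

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=300, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train_from_iterator(sample, trainer)

print(tokenizer.encode("धर्मो रक्षति").tokens)
```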

Data Provenance

The six text files distributed in this dataset are sourced exclusively from the AI4Bharat Sangraha project's "verified" split (Apache 2.0 license).

| File | Language | Source | License |
|------|----------|--------|---------|
| sangraha_sanskrit_verified.txt | Sanskrit (sa) | AI4Bharat Sangraha — verified | Apache 2.0 |
| sangraha_hindi_verified.txt | Hindi (hi) | AI4Bharat Sangraha — verified | Apache 2.0 |
| sangraha_tamil_verified.txt | Tamil (ta) | AI4Bharat Sangraha — verified | Apache 2.0 |
| sangraha_telugu_verified.txt | Telugu (te) | AI4Bharat Sangraha — verified | Apache 2.0 |
| sangraha_kannada_verified.txt | Kannada (kn) | AI4Bharat Sangraha — verified | Apache 2.0 |
| sangraha_malayalam_verified.txt | Malayalam (ml) | AI4Bharat Sangraha — verified | Apache 2.0 |

The Sangraha verified split contains web-crawled text from human-verified websites, OCR-extracted content from high-quality PDFs, and transcribed material from audio/video — all in classical Indian languages.

Note: Other text sources used elsewhere in the BODHAN AI project (Thirukkural, Bhagavad Gita, Sangam literature, Wikipedia, Gutenberg) are not part of this dataset. They are used in NanoGPT pre-training and are documented in their respective repositories.

Cleaning Pipeline

All raw Sangraha text passes through a standardised cleaning pipeline:

  1. HTML/XML tag removal
  2. Unicode normalisation (NFC) and encoding fixes
  3. Duplicate passage removal (exact-match deduplication)
  4. Short-line filtering (< 20 characters removed)
  5. Script validation — each language file verified to contain only the expected script
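The actual cleaning tooling is not published with this card; stages 1 through 4 could be sketched as follows (stage 5, script validation, is a separate per-language check). `clean` is a hypothetical helper and the 20-character threshold comes from the list above:

```python
import html
import re
import unicodedata

TAG_RE = re.compile(r"<[^>]+>")

def clean(lines, min_len=20):
    """Illustrative sketch of cleaning stages 1-4 from the list above."""
    seen, out = set(), []
    for line in lines:
        line = html.unescape(TAG_RE.sub(" ", line))  # 1. strip HTML/XML tags
        line = unicodedata.normalize("NFC", line)    # 2. Unicode NFC
        line = re.sub(r"\s+", " ", line).strip()     #    whitespace cleanup
        if len(line) < min_len:                      # 4. short-line filter
            continue
        if line in seen:                             # 3. exact-match dedup
            continue
        seen.add(line)
        out.append(line)
    return out
```

Exact-match deduplication after NFC normalisation means two passages that differ only in Unicode composition form are treated as duplicates, which is usually the desired behaviour for Indic scripts.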

The corpus files distributed here are the "verified" variants, produced after all cleaning stages.

Related Resources

  • RSL-BHARATI-v3 — Multilingual tokenizer trained on this corpus
  • RSL-SHRUTI-Thirukkural — Thirukkural-CBSE curriculum mapping dataset
  • RSL-PRAJNA-v2 — IKS teaching quality evaluation benchmark

Citation

```bibtex
@dataset{rsl_shruti_sangraha,
  title={RSL-SHRUTI Sangraha: Cleaned Classical Indian Text Corpus in 6 Languages},
  author={Sivasubramani, Santhosh},
  year={2026},
  institution={INTRINSIC Lab, RSL Foundation, IIT Delhi},
  url={https://huggingface.co/datasets/RSL-INTRINSICLab-IIT/RSL-SHRUTI-Sangraha}
}
```

License

CC BY-NC 4.0 — Free for research and educational use. Commercial use requires a license from IIT Delhi.

Acknowledgment

Demonstrated at the Bharat Bodhan AI Conclave, New Delhi, anchored and driven by the Ministry of Education and IIT Madras.

Contact

Prof. Santhosh Sivasubramani
Director, INTRINSIC Laboratory
RSL Foundation, Centre for SeNSE, IIT Delhi
ssivasub@iitd.ac.in
https://intrinsic.iitd.ac.in
