---
dataset_info:
  features:
    - name: en
      dtype: string
    - name: th
      dtype: string
  splits:
    - name: train
      num_bytes: 15352952
      num_examples: 31102
  download_size: 6074585
  dataset_size: 15352952
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: unlicense
task_categories:
  - translation
  - text-generation
  - text-classification
language:
  - en
  - th
tags:
  - bilingual
  - parallel-corpus
  - machine-translation
  - text-generation
  - text-classification
  - religious-text
  - English
  - Thai
  - Bible
  - King-James-Version
  - nlp
  - sequence-to-sequence
  - translation-en-th
pretty_name: Bible Thai English Translation
size_categories:
  - 10K<n<100K
---

# bible-en-th

## Overview

The bible-en-th dataset is a bilingual corpus containing English and Thai translations of the Bible, specifically the King James Version (KJV) translated into Thai. This dataset is designed for various natural language processing tasks, including translation, language modeling, and text analysis.

- **Languages:** English (en) and Thai (th)
- **Total Rows:** 31,102

## Dataset Structure

The dataset consists of two main features:

- `en`: English text of the King James Version.
- `th`: The corresponding Thai translation.
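
Each row is therefore a flat pair of strings. The shape of a single record looks like this (the English string is KJV Genesis 1:1; the Thai value below is a placeholder, not the dataset's actual text):

```python
# Shape of a single row: two parallel string fields.
# The Thai value here is a placeholder, not the dataset's actual text.
record = {
    "en": "In the beginning God created the heaven and the earth.",
    "th": "<corresponding Thai verse text>",
}
```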

## Sample Data

The dataset includes the following sections of the Bible:

<details>
<summary>Click to show/hide Bible sections</summary>

- Genesis
- Exodus
- Leviticus
- Numbers
- Deuteronomy
- Joshua
- Judges
- Ruth
- 1 Samuel
- 2 Samuel
- 1 Kings
- 2 Kings
- 1 Chronicles
- 2 Chronicles
- Ezra
- Nehemiah
- Esther
- Job
- Psalms
- Proverbs
- Ecclesiastes
- Song of Solomon
- Isaiah
- Jeremiah
- Lamentations
- Ezekiel
- Daniel
- Hosea
- Joel
- Amos
- Obadiah
- Jonah
- Micah
- Nahum
- Habakkuk
- Zephaniah
- Haggai
- Zechariah
- Malachi

</details>
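
Note that the rows carry only the two text fields, with no book, chapter, or verse labels, so locating a passage means searching the text itself. A plain-Python sketch over stand-in rows (the Thai strings are placeholders):

```python
# Stand-in rows mirroring the dataset's two-column schema;
# the Thai values are placeholders, not the dataset's actual text.
rows = [
    {"en": "In the beginning God created the heaven and the earth.",
     "th": "<th 1>"},
    {"en": "And the earth was without form, and void; and darkness was upon the face of the deep.",
     "th": "<th 2>"},
]

# Find verses whose English text mentions a given word.
hits = [r for r in rows if "void" in r["en"]]
```

With the real dataset loaded via the `datasets` library, the same predicate can be passed to `Dataset.filter`.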

## Usage

To access this dataset, load it with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("hf_datasets/bible-en-th")
```
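
Since the card ships only a train split, you may want to carve off a held-out set for evaluation. The library's `Dataset.train_test_split` does this directly; the idea, sketched in plain Python over stand-in rows (the 10% ratio and the seed are arbitrary choices):

```python
import random

# Stand-in rows with the dataset's two string columns.
rows = [{"en": f"verse {i}", "th": f"ข้อ {i}"} for i in range(100)]

# Hold out 10% of rows for evaluation, shuffled with a fixed seed
# so the split is reproducible.
rng = random.Random(42)
shuffled = rows[:]
rng.shuffle(shuffled)
cut = int(0.9 * len(shuffled))
train_rows, eval_rows = shuffled[:cut], shuffled[cut:]
```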

### Streaming the Dataset

For efficient memory usage, especially when working with large datasets, you can stream the dataset:

```python
from datasets import load_dataset

streamed_dataset = load_dataset("hf_datasets/bible-en-th", streaming=True)

for example in streamed_dataset["train"]:
    print(example["en"])
```

Streaming is particularly useful when you want to process the dataset sequentially without loading it entirely into memory.
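
For sequence-to-sequence fine-tuning, each English–Thai pair can be flattened into a single training string. A minimal sketch, where the prompt template and output field name are arbitrary choices (with the `datasets` library, such a function could be applied to every row via `dataset.map`):

```python
def format_pair(example):
    # Hypothetical prompt template; adapt to your model's expected format.
    return {"text": f"Translate English to Thai:\n{example['en']}\n\n{example['th']}"}

# The Thai value below is a placeholder, not the dataset's actual text.
sample = {
    "en": "In the beginning God created the heaven and the earth.",
    "th": "<Thai verse text>",
}
formatted = format_pair(sample)
```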

## License

This dataset is released under the Unlicense, dedicating it to the public domain.