CoPeP
Continual Pretraining for Protein Language Models
This dataset is organized for continual-learning experiments on protein sequences.
Repository: `chandar-lab/CoPeP`

- `train/`: 252 parquet shards (`data-00000-of-00252.parquet` ... `data-00251-of-00252.parquet`)
- `splits/`: 10 task index parquet files (`task_0.parquet` ... `task_9.parquet`)
- `val/`: validation parquet (`val.parquet`)

Each task index file corresponds to one year of data:

- `task_0.parquet` -> task_0 (2015)
- `task_1.parquet` -> task_1 (2016)
- `task_2.parquet` -> task_2 (2017)
- `task_3.parquet` -> task_3 (2018)
- `task_4.parquet` -> task_4 (2019)
- `task_5.parquet` -> task_5 (2020)
- `task_6.parquet` -> task_6 (2021)
- `task_7.parquet` -> task_7 (2022)
- `task_8.parquet` -> task_8 (2023)
- `task_9.parquet` -> task_9 (2024)
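The task-to-year mapping above is linear. A minimal helper (hypothetical, not part of the dataset) makes the convention explicit:

```python
# Hypothetical helper: the card states task_N covers year 2015 + N.
def task_year(task_id: int) -> int:
    """Return the year covered by splits/task_{task_id}.parquet
    (task_0 -> 2015, ..., task_9 -> 2024)."""
    if not 0 <= task_id <= 9:
        raise ValueError("CoPeP defines tasks 0-9 only")
    return 2015 + task_id

print(task_year(0))  # 2015
print(task_year(9))  # 2024
```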
The `splits/task_*.parquet` files are index-style split definitions keyed by `row_idx`. They are intended to be joined with records from `train/` (or other source files) using `row_idx`, rather than treated as standalone full-example datasets.
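The join semantics can be sketched in memory with pandas. Here `row_idx` is the documented key; the `sequence` column name and the toy values are illustrative assumptions, not the actual schema:

```python
import pandas as pd

# Toy stand-ins: train/ records and one splits/task_*.parquet index file.
# Only row_idx is documented; the "sequence" column name is an assumption.
train = pd.DataFrame({"sequence": ["MKT", "MVL", "GSH", "AAA"]})
task_index = pd.DataFrame({"row_idx": [1, 3]})  # index-style split definition

# Materialize the task's examples by positional selection on row_idx,
# mirroring train_ds.select(...) in the datasets snippet below.
task_examples = train.iloc[task_index["row_idx"]].reset_index(drop=True)
print(task_examples["sequence"].tolist())  # ['MVL', 'AAA']
```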
Task index files are now exposed through `name="task_splits"`. The old `load_dataset(repo_id, split="task_0")` pattern is deprecated.
```python
from datasets import load_dataset

repo_id = "__HF_DATASET_REPO__"

# 1) Load the train split directly (map-style dataset)
train_ds = load_dataset(repo_id, split="train")

# 2) Load one task index split via the task_splits config.
#    Use streaming=True to avoid Arrow cache materialization for index files.
task0_idx = load_dataset(
    repo_id,
    name="task_splits",
    split="task_0",
    streaming=True,
)

# 3) Materialize examples by selecting train rows using row_idx
task0_rows = [example["row_idx"] for example in task0_idx]
task0_examples = train_ds.select(task0_rows)

print(task0_examples)
print(task0_examples[0])
```
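For a continual-learning run, the same pattern extends to all ten tasks in chronological order. The sketch below keeps the logic self-contained by using in-memory stand-ins; a real run would replace them with the `load_dataset` calls shown above, and `train_on` is a hypothetical placeholder for the pretraining step:

```python
# In-memory stand-ins for the train split and the ten task index splits;
# a real run would replace these with load_dataset calls as in the snippet above.
train_rows = [{"row_idx": i, "sequence": f"SEQ{i}"} for i in range(100)]
task_indices = {f"task_{t}": list(range(t * 10, (t + 1) * 10)) for t in range(10)}

# Visit tasks in chronological order (task_0 = 2015, ..., task_9 = 2024),
# materializing each task's examples by row_idx before the next training phase.
for t in range(10):
    task_name = f"task_{t}"
    task_examples = [train_rows[i] for i in task_indices[task_name]]
    # train_on(task_examples)  # hypothetical continual-pretraining step
    print(task_name, len(task_examples), task_examples[0]["row_idx"])
```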