---
configs:
- config_name: default
  data_files:
  - split: hfopenllm_v2
    path: data/hfopenllm_v2.parquet
  - split: livecodebenchpro
    path: data/livecodebenchpro.parquet
---
# Every Eval Ever Dataset
Evaluation results from various AI model leaderboards.
## Usage
```python
from datasets import load_dataset

# Load a specific leaderboard
dataset = load_dataset("deepmage121/eee_test", split="hfopenllm_v2")

# Load all leaderboards
dataset = load_dataset("deepmage121/eee_test")
```
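When called without a `split` argument, `load_dataset` returns a `DatasetDict` keyed by split name. A minimal sketch of accessing it (the row indexing assumes the split is non-empty):

```python
from datasets import load_dataset

ds = load_dataset("deepmage121/eee_test")
print(ds)                         # DatasetDict with one entry per leaderboard
print(ds["livecodebenchpro"][0])  # first row of the livecodebenchpro split
```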
## Available Leaderboards (Splits)
- `hfopenllm_v2`
- `livecodebenchpro`
## Schema
- `model_name`, `model_id`, `model_developer`: Model information
- `evaluation_source_name`: Leaderboard name
- `evaluation_results`: JSON string with all metrics
- Additional metadata for reproducibility
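Because `evaluation_results` is stored as a JSON string, decode it with `json.loads` before use. A minimal sketch (field names follow the schema above; the metric keys inside the JSON vary by leaderboard):

```python
import json
from datasets import load_dataset

dataset = load_dataset("deepmage121/eee_test", split="hfopenllm_v2")

# Peek at the first few rows and decode the metrics payload
for row in dataset.select(range(min(3, len(dataset)))):
    metrics = json.loads(row["evaluation_results"])  # JSON string -> dict
    print(row["model_name"], row["evaluation_source_name"], metrics)
```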
Auto-updated via GitHub Actions.