---
configs:
- config_name: default
  data_files:
  - split: hfopenllm_v2
    path: data/hfopenllm_v2.parquet
  - split: livecodebenchpro
    path: data/livecodebenchpro.parquet
---

# Every Eval Ever Dataset

Evaluation results from various AI model leaderboards.

## Usage

```python
from datasets import load_dataset

# Load a specific leaderboard split
dataset = load_dataset("deepmage121/eee_test", split="hfopenllm_v2")

# Load all splits as a DatasetDict
dataset = load_dataset("deepmage121/eee_test")
```
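
For a quick look at the rows, a split can be converted to a pandas DataFrame via the `datasets` library's `to_pandas()` method (a minimal sketch; column names follow the schema below):

```python
from datasets import load_dataset

# Convert one split to a pandas DataFrame for inspection
df = load_dataset("deepmage121/eee_test", split="hfopenllm_v2").to_pandas()
print(df.head())
```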

## Available Leaderboards (Splits)

- `hfopenllm_v2`
- `livecodebenchpro`

## Schema

- `model_name`, `model_id`, `model_developer`: Model information
- `evaluation_source_name`: Leaderboard name
- `evaluation_results`: JSON string with all metrics (see the parsing sketch below)
- Additional metadata for reproducibility
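
Because `evaluation_results` is stored as a JSON string, it must be decoded before the metrics can be used. A minimal sketch using the standard library `json` module; the metric keys inside the decoded dict depend on the source leaderboard:

```python
import json

from datasets import load_dataset

dataset = load_dataset("deepmage121/eee_test", split="hfopenllm_v2")

# Each row stores its metrics as a JSON-encoded string; decode the first row
row = dataset[0]
metrics = json.loads(row["evaluation_results"])
print(row["model_name"], metrics)  # metric keys vary by leaderboard
```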

Auto-updated via GitHub Actions.