While standard benchmarks are useful, you might need custom evaluation tasks for your specific domain or use case. lighteval makes it easy to create and run them.
Custom tasks are useful whenever you need to test capabilities that standard benchmarks don't cover. In practice, this describes most applications, so you should expect to create custom tasks if you're using an LLM in production.
Following the official LightEval documentation, here’s how to create a simple custom task:
Create a Python file in your project directory (e.g., my_custom_task.py):
```python
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc


def prompt_fn(line, task_name: str = None):
    """Convert a dataset line into an evaluation document."""
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=[f" {c}" for c in line["choices"]],
        gold_index=line["gold"],
        instruction="",
    )


# Create your custom task
custom_task = LightevalTaskConfig(
    name="my_custom_task",
    prompt_function=prompt_fn,
    suite=["community"],
    hf_repo="your-username/your-dataset",
    hf_subset="default",
    hf_avail_splits=["test"],
    evaluation_splits=["test"],
    few_shots_split=None,
    few_shots_select=None,
    metric=["exact_match"],
    generation_size=-1,  # Use -1 for multiple-choice tasks
)

# Store your task
TASKS_TABLE = [custom_task]
```

Let's break down the code:
- `prompt_fn`: a function that takes a line of data and returns a `Doc` object.
- `custom_task`: a `LightevalTaskConfig` object that defines the task.
- `TASKS_TABLE`: a list of `LightevalTaskConfig` objects that defines the tasks in this file.

Create a simple JSON Lines dataset file (my_dataset.jsonl):
```jsonl
{"question": "What is machine learning?", "choices": ["A", "B", "C", "D"], "gold": 0}
{"question": "How does a neural network work?", "choices": ["A", "B", "C", "D"], "gold": 1}
{"question": "What is supervised learning?", "choices": ["A", "B", "C", "D"], "gold": 2}
```

You can upload your dataset to the Hugging Face Hub and reference it in your task configuration.
You might also want to consider using visual tools to create this evaluation dataset. For example, AI Sheets is a tool that allows you to create and expand datasets using AI.
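If you prefer a script, the dataset file can also be generated programmatically. A minimal sketch, reusing the example rows and filename from above:

```python
import json

# Example rows matching the schema that prompt_fn expects
rows = [
    {"question": "What is machine learning?", "choices": ["A", "B", "C", "D"], "gold": 0},
    {"question": "How does a neural network work?", "choices": ["A", "B", "C", "D"], "gold": 1},
    {"question": "What is supervised learning?", "choices": ["A", "B", "C", "D"], "gold": 2},
]

# Write one JSON object per line (JSON Lines format)
with open("my_dataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Sanity-check: every line parses and each gold index is in range
with open("my_dataset.jsonl") as f:
    for line in f:
        row = json.loads(line)
        assert 0 <= row["gold"] < len(row["choices"])
```

A check like this catches malformed rows before you upload the file, which is cheaper than debugging a failed evaluation run.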
Use the CLI to run your custom task with vllm:
```bash
# Run your custom task
lighteval vllm \
    "model_name=HuggingFaceTB/SmolLM3-3B,dtype=bfloat16" \
    "community|my_custom_task|0|0" \
    --custom-tasks my_custom_task.py
```

Just like the previous exercises, you can use HF Jobs to run your custom task.
```bash
hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    --secrets HF_TOKEN \
    lighteval vllm "model_name=your-model" "community|my_custom_task|0|0" --custom-tasks my_custom_task.py
```

lighteval comes with built-in metrics for most use cases. However, you can also create custom metrics. For example, you can create a custom metric that returns multiple values per sample:
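Here is a minimal sketch of such a multi-value sample-level function. It returns a dictionary of named scores instead of a single float; the `SimpleNamespace` object stands in for lighteval's `Doc`, and the function and score names are illustrative:

```python
from types import SimpleNamespace


def multi_value_metric(predictions, formatted_doc, **kwargs):
    """Return several named scores for a single sample (illustrative)."""
    response = predictions[0].strip()
    expected = formatted_doc.choices[formatted_doc.gold_index].strip()
    return {
        "exact_match": float(response == expected),
        "case_insensitive_match": float(response.lower() == expected.lower()),
    }


# Quick check with a stand-in for lighteval's Doc
doc = SimpleNamespace(choices=[" Paris", " London"], gold_index=0)
scores = multi_value_metric(["paris"], doc)
print(scores)  # {'exact_match': 0.0, 'case_insensitive_match': 1.0}
```

lighteval provides a grouped counterpart to `SampleLevelMetric` for registering metrics that return multiple values; check its documentation for the exact registration API.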
If you need custom evaluation metrics, you can add them to your task file:
```python
from lighteval.metrics.metrics_sample import SampleLevelMetric
from lighteval.metrics.utils import MetricCategory, MetricUseCase
import numpy as np


def my_custom_metric(predictions, formatted_doc, **kwargs):
    """Simple custom metric example."""
    response = predictions[0].strip()
    expected = formatted_doc.choices[formatted_doc.gold_index].strip()
    # Your custom logic here
    return float(response.lower() == expected.lower())


# Register the metric
custom_metric = SampleLevelMetric(
    metric_name="my_custom_metric",
    higher_is_better=True,
    category=MetricCategory.CUSTOM,
    use_case=MetricUseCase.SCORING,
    sample_level_fn=my_custom_metric,
    corpus_level_fn=np.mean,
)
```

Then use it in your task configuration:
```python
custom_task = LightevalTaskConfig(
    name="my_custom_task",
    # ... other parameters ...
    metric=[custom_metric],  # Use your custom metric
)
```

Let's break down the code:
- `my_custom_metric`: a function that takes the model predictions and the formatted document and returns a score for that sample.
- `custom_metric`: a `SampleLevelMetric` object that defines the metric, including how sample-level scores are aggregated at the corpus level (`np.mean` here).

For easier sharing and reproducibility, you can upload your dataset to the Hub:
```bash
# Upload your dataset
huggingface-cli upload your-username/my-custom-dataset ./my_dataset.jsonl --repo-type dataset
```

Then reference it in your task:
```python
custom_task = LightevalTaskConfig(
    name="my_custom_task",
    hf_repo="your-username/my-custom-dataset",
    # ... other parameters
)
```

Here are some best practices to keep things simple:
- Begin with a small dataset and basic metrics, such as `exact_match`, before adding custom logic.

You now know how to create a custom evaluation task, build and share a dataset for it, define custom metrics, and run the task with lighteval locally or on HF Jobs.
For more advanced features and examples, check out the official LightEval documentation and the community tasks directory.