Submit your evaluation results!

It’s time to evaluate your model and submit it to the leaderboard! This unit will use the same leaderboard-based submission format as Unit 1. Here’s the plan:

  1. Read the written guide for the unit ✅
  2. Choose a model to evaluate (your trained model from Unit 1 or any other model)
  3. Evaluate the model using hf jobs on 5 benchmarks
  4. Open a pull request on the leaderboard

On this page we will go through each step.

1. Read the written guide for the unit and 2. Choose a model to evaluate

For Unit 2’s submission, you should read all the materials in the unit and choose a model to evaluate. You can use a model you trained in Unit 1, or select any other model that interests you. While we won’t ask you to submit scores from custom tasks for this assignment, you can re-use those skills to evaluate models in the future.

The evaluation will test the model’s performance on standard benchmarks to assess its general capabilities.

3. Evaluate the model using hf jobs on 5 benchmarks

Now, we will evaluate your chosen model using hf jobs combined with lighteval and vLLM. We will run evaluation on 5 different benchmarks and push the results to the Hugging Face Hub.

hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    --secrets HF_TOKEN \
    lighteval vllm "model_name=<your-username>/<your-model-name>" "leaderboard|mmlu:abstract_algebra|5|1,leaderboard|mmlu:college_biology|5|1,leaderboard|mmlu:college_computer_science|5|1,leaderboard|truthfulqa:mc|0|0,leaderboard|gsm8k|5|1" --push-to-hub --results-org <your-username>

This command will:

  1. Launch a job on an a10g-large GPU flavor
  2. Install lighteval with vLLM support
  3. Evaluate the model on the 5 selected benchmarks (three MMLU subsets, TruthfulQA, and GSM8K)
  4. Push the results to a dataset under your organization on the Hugging Face Hub

Make sure to replace <your-username> with your actual Hugging Face username and <your-model-name> with the name of the model you want to evaluate.
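Each benchmark in the command is written in lighteval's task syntax, which (as of recent versions) takes the shape suite|task|num_fewshot|truncate_fewshot, where the last field is 0 or 1. If you want to adjust the benchmark list, a short Python sketch can assemble the task string for you (the tuple layout here is purely illustrative):

```python
# Sketch: build the lighteval task string from a list of benchmarks.
# Assumes the task format "suite|task|num_fewshot|truncate_fewshot".
tasks = [
    ("leaderboard", "mmlu:abstract_algebra", 5, 1),
    ("leaderboard", "mmlu:college_biology", 5, 1),
    ("leaderboard", "mmlu:college_computer_science", 5, 1),
    ("leaderboard", "truthfulqa:mc", 0, 0),
    ("leaderboard", "gsm8k", 5, 1),
]

# Join each benchmark spec with commas, as lighteval expects.
task_string = ",".join(
    f"{suite}|{task}|{fewshot}|{truncate}"
    for suite, task, fewshot, truncate in tasks
)
print(task_string)
```

You can then paste the printed string directly into the lighteval vllm command above.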

4. Open a pull request on the leaderboard

You are now ready to submit your model evaluation to the leaderboard! You need to do two things:

  1. Add your model’s results to submissions.json
  2. Share your evaluation command (using hf jobs) in the PR text

Add your model’s results to submissions.json

Open a pull request on the leaderboard space to submit your model evaluation. You just need to add your model info and a reference to the results dataset you created in the previous step.

{
    "submissions": [

        ... // existing submissions
        
        {
            "username": "<your-username>",
            "model_name": "<your-model-name>",
            "chapter": "2",
            "submission_date": "<your-submission-date>",
            "results-dataset": "<your-results-dataset>"
        }
    ]
}
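If you prefer not to edit the JSON by hand, a small script can append your entry. This is a minimal sketch: the file name submissions.json matches the leaderboard space, but all placeholder values are illustrative and should be replaced with your own.

```python
import json
import os

# Illustrative placeholder values -- replace with your own details.
entry = {
    "username": "myusername",
    "model_name": "my-model",
    "chapter": "2",
    "submission_date": "2025-01-01",
    "results-dataset": "myusername/my-results-dataset",
}

# Load the existing submissions file, or start fresh if it is missing.
if os.path.exists("submissions.json"):
    with open("submissions.json") as f:
        data = json.load(f)
else:
    data = {"submissions": []}

data["submissions"].append(entry)

with open("submissions.json", "w") as f:
    json.dump(data, f, indent=4)
```

Running this from your checkout of the leaderboard space adds your entry to the end of the submissions list, ready to commit in your pull request.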

Share your evaluation command in the PR text

Within the PR text, share your evaluation command. For example:

hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    --secrets HF_TOKEN \
    lighteval vllm "model_name=myusername/my-model" "leaderboard|gsm8k|5|1" --push-to-hub --results-org myusername

This will help us reproduce your model evaluation before we add it to the leaderboard.

Wait for the PR to be merged

Once the PR is merged, your model will be added to the leaderboard! You can check the leaderboard here.

What You’ve Learned

By completing this unit, you’ve learned how to:

  1. Evaluate a model on standard benchmarks using hf jobs with lighteval and vLLM
  2. Push evaluation results to the Hugging Face Hub
  3. Submit your results to the leaderboard through a pull request

Additional Resources
