
Welcome to the most comprehensive (and smollest) course on fine-tuning language models!
This free course will take you on a journey, from beginner to expert, in understanding, implementing, and optimizing fine-tuning techniques for large language models.
This first unit will help you onboard:
Let's get started!
This course is smol but fast! It's for software developers and engineers looking to fast-track their LLM fine-tuning skills. If that's not you, check out the LLM Course.
In this course, you will:
Study instruction tuning, supervised fine-tuning, preference alignment, evaluation, vision language models… and more!
At the end of this course, you'll understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques.
Don't forget to sign up to the course!
The course is composed of:
This course is a living project, evolving with your feedback and contributions! Feel free to open issues and PRs on GitHub, and engage in discussions on our Discord server.
Here is the general syllabus for the course. A more detailed list of topics will be released with each unit.
| # | Topic | Description | Released |
|---|---|---|---|
| 1 | Instruction Tuning | Supervised fine-tuning, chat templates, instruction following | ✅ |
| 2 | Evaluation | Benchmarks and custom domain evaluation | September |
| 3 | Preference Alignment | Aligning models to human preferences with algorithms like DPO | October |
| 4 | Reinforcement Learning | Optimizing models with reinforcement learning | October |
| 5 | Vision Language Models | Adapt and use multimodal models | November |
| 6 | Synthetic Data | Generate synthetic datasets for custom domains | November |
| 7 | Award Ceremony | Showcase projects and celebrate | December |
To be able to follow this course, you should have:
If you don't have any of these, don't worry. Check out the LLM Course to get started.
That course is not a strict prerequisite, so if you already understand the concepts behind LLMs and transformers, you can start this course now!
You only need 2 things:
You can choose to follow this course in audit mode, or do the activities and earn one of the two certificates we'll issue. If you audit the course, you can still participate in all the challenges and do the assignments if you want, and you don't need to notify us.
The certification process is completely free:
Each chapter in this course is designed to be completed in 1 week, with approximately 3-4 hours of work per week.
Since there's a deadline, we provide you a recommended pace:

To get the most out of the course, we have some advice:

About the authors:
Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications with post-training and agentic approaches. Follow Ben on the Hub to see his latest projects.
We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support:
Contributions are welcome 🤗
Please ask your questions in the #fine-tuning-course-questions channel on our Discord server.
Now that you have all the information, let's get on board ⛵