Introduction

Welcome to Chapter 2: AI Inference on Hugging Face. In this part of the course, you’ll learn two production-focused ways to run models:

- Inference Providers, for instant results with minimal setup.
- Inference Endpoints, for dedicated, production-ready deployments.

By the end of this chapter, you’ll be able to pick the right option, call models from code, and ship a small app end to end.

What you’ll learn

- How to choose between Inference Providers and Inference Endpoints.
- How to call models from code with either option.
- How to ship a small app end to end.

Prerequisites

Generate an access token from your Hugging Face account settings and set it as an environment variable before running the examples:

macOS/Linux:

export HF_TOKEN=hf_xxx

Windows PowerShell:

$env:HF_TOKEN = "hf_xxx"
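With the token in place, here is a minimal sketch of calling a model from code through Inference Providers. It uses only the standard library and the OpenAI-compatible router; the model name and prompt are illustrative, and the request only runs if `HF_TOKEN` is set:

```python
import os


def auth_headers(token: str = "") -> dict:
    """Build the Authorization header, falling back to the HF_TOKEN env var."""
    token = token or os.environ.get("HF_TOKEN", "")
    return {"Authorization": f"Bearer {token}"}


if __name__ == "__main__" and os.environ.get("HF_TOKEN"):
    import json
    import urllib.request

    # Send a chat request through the Inference Providers router
    # (OpenAI-compatible); the model and prompt below are illustrative.
    payload = json.dumps({
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 50,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://router.huggingface.co/v1/chat/completions",
        data=payload,
        headers={**auth_headers(), "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.loads(response.read())
        print(reply["choices"][0]["message"]["content"])
```

You can also make the same call with the `huggingface_hub` Python client instead of raw HTTP; the request shape is the same.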

Providers vs Endpoints: when to use which?

In short: reach for Inference Providers when you want instant results with minimal setup, and for Inference Endpoints when you need a dedicated, production-ready deployment. We’ll start with Providers, then move on to Endpoints.

  1. We’ll learn about Inference Providers to get instant results with minimal setup. You’ll build a small app using Providers.
  2. Then we’ll learn Inference Endpoints for dedicated, production-ready deployments. You’ll apply what you’ve learned to build a small app using an Endpoint.
  3. Finally, you’ll review and take a short quiz.
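A useful property of this split: a dedicated Endpoint typically exposes the same OpenAI-compatible chat route as the Providers router, so switching from step 1 to step 2 is mostly a matter of changing the base URL. A small sketch, assuming your Endpoint serves the chat-completions route (the base URL below is a hypothetical placeholder; you would copy yours from your Endpoint’s overview page):

```python
def chat_url(endpoint_base: str) -> str:
    """Turn a dedicated Endpoint base URL into its chat-completions route."""
    return endpoint_base.rstrip("/") + "/v1/chat/completions"


# Hypothetical deployed Endpoint base URL, for illustration only.
print(chat_url("https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud/"))
```

The rest of the request (headers, JSON payload) stays the same as in the Providers example.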