Welcome to Chapter 2: AI Inference on Hugging Face. In this part of the course, you'll learn two production-focused ways to run models:

- **Inference Providers**: serverless, pay-as-you-go access to hosted models.
- **Inference Endpoints**: dedicated, managed deployments of a model on infrastructure you configure.
By the end of this chapter, you’ll be able to pick the right option, call models from code, and ship a small app end to end.
First, install the Hugging Face Hub client library:

```bash
pip install huggingface_hub
```

Generate a token from your account settings and set it as an environment variable before running the examples.
macOS/Linux:

```bash
export HF_TOKEN=hf_xxx
```

Windows PowerShell:
```powershell
$env:HF_TOKEN = "hf_xxx"
```

We'll start with Inference Providers, then move to Inference Endpoints.
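Before making your first API call, it helps to confirm the token is actually visible to Python. Here is a minimal sketch of such a check; the helper name `get_hf_token` is ours for illustration, not part of the `huggingface_hub` library:

```python
import os


def get_hf_token(env=os.environ):
    """Read the Hugging Face token from the environment.

    Raises a clear error if HF_TOKEN is missing, so failures happen
    early rather than as an opaque 401 from the API.
    """
    token = env.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Generate a token in your Hugging Face "
            "settings and export it before running the examples."
        )
    return token


# The clients used in this chapter (e.g. huggingface_hub.InferenceClient)
# pick up HF_TOKEN from the environment automatically, but you can also
# pass the value explicitly: InferenceClient(token=get_hf_token()).
```

Reading the token from the environment, rather than hard-coding it, keeps credentials out of source control and lets the same code run locally and in CI.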