Inference Providers are a simple, hosted way to run models via API without deploying infrastructure. You can route requests automatically to an available provider or choose a specific provider for a model.
In this unit you’ll learn the basics, make a first call, and then build a small app. Keep an eye on the Tips — they highlight common gotchas and best practices.
```python
from huggingface_hub import InferenceClient

# Create a client. By default, it will auto-pick a provider.
client = InferenceClient()

# Text generation
response = client.text_generation(
    "Write a short poem about the ocean.",
    model="gpt2",  # replace with a model you want to try
)
print(response)

# Image generation (example)
# image = client.text_to_image(
#     "a watercolor painting of a lighthouse at dawn",
#     model="stabilityai/stable-diffusion-2",
# )
# image.save("output.png")
```

To prefer a specific provider, pass `provider="..."` to your call (or when constructing the client). When in doubt, keep `provider="auto"` to use the first available provider based on your preferences.
`InferenceClient` reads your token from the `HF_TOKEN` environment variable, or you can pass it explicitly:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxx")
print(client.text_generation("Hello!", model="gpt2"))
```

You can also call the Inference API directly with cURL:
```bash
curl -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Write a haiku about the sea"}' \
  https://api-inference.huggingface.co/models/gpt2
```

Before coding, pick a model on the Hub with Inference Providers enabled and try the right-side widget. Then replicate the request in code.
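The same request can be made from Python with the `requests` library. This is a sketch mirroring the cURL call above; it assumes `HF_TOKEN` is set in your environment:

```python
import os

import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"


def query(payload: dict) -> dict:
    """POST a JSON payload to the model endpoint and return the parsed reply."""
    headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
    resp = requests.post(API_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()


# Example (requires a valid token and network access):
# print(query({"inputs": "Write a haiku about the sea"}))
```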
Python:
```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="auto",
    api_key=os.environ["HF_TOKEN"],
)

# Returns a PIL.Image
image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("image.png")
```

TypeScript:
```typescript
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);
const image = await client.textToImage({
  provider: "auto",
  model: "black-forest-labs/FLUX.1-schnell",
  inputs: "Astronaut riding a horse",
  parameters: { num_inference_steps: 5 },
});
// image is a Blob
```

For long generations, stream tokens as they arrive:
```python
from huggingface_hub import InferenceClient

client = InferenceClient()
for chunk in client.text_generation(
    "Explain transformers in 2 sentences.",
    model="gpt2",
    stream=True,
):
    print(chunk, end="", flush=True)
```

`provider="auto"` is the default: it tries providers in your preference order, with failover. You can also pin a specific provider.
Python:
```python
from huggingface_hub import InferenceClient

client = InferenceClient()

# Auto selection (default)
img_auto = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-schnell",
    provider="auto",
)

# Explicit provider
img_fal = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-schnell",
    provider="fal-ai",
)
```

TypeScript:
```typescript
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

// Auto selection (default)
const imageAuto = await client.textToImage({
  model: "black-forest-labs/FLUX.1-schnell",
  inputs: "Astronaut riding a horse",
  provider: "auto",
  parameters: { num_inference_steps: 5 },
});

// Explicit provider
const imageFal = await client.textToImage({
  model: "black-forest-labs/FLUX.1-schnell",
  inputs: "Astronaut riding a horse",
  provider: "fal-ai",
  parameters: { num_inference_steps: 5 },
});
```

For slow models, you can raise the request timeout when constructing the Python client (e.g., `InferenceClient(timeout=60)`). You're ready to build an app with providers. Let's do that next.
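Transient provider errors can also be handled in your own code. Below is a minimal, hypothetical retry helper (plain Python, not part of `huggingface_hub`) that you could wrap around any client call:

```python
import time


def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure, retry with exponential backoff before giving up."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))


# Usage sketch:
# result = with_retries(lambda: client.text_generation("Hello!", model="gpt2"))
```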
Up next: build your first AI app using Providers (transcribe + summarize).