Before we dive into evaluation, we need to understand how to run inference with our models efficiently. We can start with the standard `transformers` pipeline approach:
```python
from transformers import pipeline

# Create a pipeline with a specific model
generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM3-3B",
    torch_dtype="auto",
    device_map="auto",
)

# Generate text
response = generator(
    "Write a short poem about coding:",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)

print(response[0]["generated_text"])
```

This approach has significant limitations for evaluation scenarios: requests are processed one at a time, GPU memory is used inefficiently, and throughput does not scale to the thousands of prompts a benchmark requires. For high-throughput evaluation and production deployment, we need more sophisticated inference engines like vLLM.
vLLM (pronounced “vee-LLM”) is a high-throughput and memory-efficient inference engine for large language models. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project that rethinks LLM serving through two key innovations: PagedAttention and continuous batching.
When running large-scale evaluations, traditional inference methods become bottlenecks: each batch waits for its slowest sequence to finish, and KV-cache memory is pre-allocated in large contiguous chunks, much of which goes unused. vLLM addresses these issues with techniques that can achieve up to 24x higher throughput compared to traditional methods.
vLLM integrates deeply with the Hugging Face ecosystem, as detailed in the vLLM Hugging Face integration documentation. Here’s how it works:
When you run vLLM with a Hugging Face model, the following happens:

1. **Model Discovery** - vLLM checks for the model’s `config.json` file
2. **Configuration Loading** - Converts the config file into a dictionary and determines the model type
3. **Model Class Initialization** - Uses the `architectures` field to map to the appropriate vLLM model class
4. **Tokenizer Integration** - Loads the tokenizer using `AutoTokenizer.from_pretrained`
5. **Weight Loading** - Downloads model weights in safetensors format (recommended) or PyTorch bin format
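These steps can be sketched in miniature. The snippet below mimics the config-to-model-class mapping with a hypothetical registry; the names are illustrative stand-ins, not vLLM’s actual internals:

```python
import json

# A minimal config.json as it might appear in a model repository
config_json = '{"model_type": "llama", "architectures": ["LlamaForCausalLM"]}'
config = json.loads(config_json)  # Configuration Loading: file -> dict

# Hypothetical registry mapping HF architecture names to engine model classes
MODEL_REGISTRY = {
    "LlamaForCausalLM": "LlamaForCausalLM (engine implementation)",
}

# Model Class Initialization: the `architectures` field selects the class
arch = config["architectures"][0]
model_class = MODEL_REGISTRY[arch]
print(f"{config['model_type']} -> {model_class}")
```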
PagedAttention is vLLM’s key innovation that addresses a critical bottleneck in LLM inference: KV cache memory management. During text generation, models store attention keys and values (KV cache) for each generated token to avoid redundant computations. The KV cache can become enormous, especially with long sequences or multiple concurrent requests.
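To see why this matters, here is a back-of-envelope estimate using hypothetical dimensions for a 3B-class model; the layer and head counts are assumptions for illustration, not SmolLM3’s exact configuration:

```python
# Back-of-envelope KV-cache size for a hypothetical 3B-class decoder.
# These dimensions are illustrative assumptions, not a real model config.
num_layers = 36
num_kv_heads = 4   # grouped-query attention
head_dim = 128
dtype_bytes = 2    # float16
seq_len = 4096

# Factor of 2 accounts for storing both keys and values
kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
cache_bytes = kv_bytes_per_token * seq_len
print(f"{kv_bytes_per_token} bytes/token -> {cache_bytes / 2**20:.0f} MiB per sequence")
# -> 73728 bytes/token -> 288 MiB per sequence
```

At roughly 288 MiB per 4096-token sequence, a handful of concurrent requests can exhaust a GPU if the cache is managed naively.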
vLLM’s breakthrough lies in how it manages this cache: instead of reserving one large contiguous buffer per sequence, PagedAttention splits the KV cache into small fixed-size blocks that can live anywhere in GPU memory, much like virtual-memory pages in an operating system. Blocks are allocated on demand and can be shared between sequences, so memory waste stays close to zero.
This near-optimal memory use is the main source of vLLM’s throughput advantage, making PagedAttention a game-changer for production LLM deployments.
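As a rough illustration of the bookkeeping involved (a toy sketch, not vLLM’s actual implementation), a paged cache tracks a per-sequence block table plus a shared pool of free blocks, claiming a new block only when the current one fills up:

```python
# Toy paged KV-cache bookkeeping (illustrative only, not vLLM internals).
class PagedKVCache:
    def __init__(self, num_blocks, block_size=16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))  # shared pool of physical blocks
        self.block_tables = {}  # seq_id -> list of physical block ids
        self.lengths = {}       # seq_id -> number of cached tokens

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:  # current block full (or no block yet)
            block = self.free_blocks.pop()
            self.block_tables.setdefault(seq_id, []).append(block)
        self.lengths[seq_id] = n + 1

    def free(self, seq_id):
        # Finished sequences return their blocks to the shared pool
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=64, block_size=16)
for _ in range(40):  # a 40-token sequence needs ceil(40 / 16) = 3 blocks
    cache.append_token("seq0")
print(len(cache.block_tables["seq0"]), len(cache.free_blocks))  # 3 61
```

Because blocks are allocated one at a time, at most `block_size - 1` cache slots per sequence are ever wasted, versus pre-allocating the full maximum length up front.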
Traditional (static) batching waits for all sequences in a batch to complete before starting the next batch. vLLM implements continuous batching, which admits new requests into the running batch as soon as any sequence finishes, so GPU slots never sit idle waiting for the longest sequence. This approach is particularly beneficial for evaluation workloads, where requests have varying lengths and completion times.
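The idea can be shown with a toy scheduler; the request sizes and batch limit below are arbitrary, and the loop is a simplified sketch rather than vLLM’s real scheduling logic:

```python
# Toy continuous-batching scheduler (illustrative only). Finished sequences
# leave the batch immediately and queued requests take their slots, so short
# requests never wait for the longest one.
from collections import deque

waiting = deque([("req0", 3), ("req1", 1), ("req2", 5), ("req3", 2)])  # (id, tokens to generate)
running = {}      # request id -> tokens still to generate
completed = []
MAX_BATCH = 2     # decode slots available per step
steps = 0

while waiting or running:
    # Admit new requests whenever a slot is free (the "continuous" part)
    while waiting and len(running) < MAX_BATCH:
        rid, n = waiting.popleft()
        running[rid] = n
    # One decode step for every running sequence
    for rid in list(running):
        running[rid] -= 1
        if running[rid] == 0:
            del running[rid]       # slot freed mid-stream, no batch barrier
            completed.append(rid)
    steps += 1

# Static batching on the same requests would take max(3,1) + max(5,2) = 8 steps
print(steps, completed)  # 6 ['req1', 'req0', 'req3', 'req2']
```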
```bash
# Install vLLM
pip install vllm

# Or with a specific CUDA version
pip install vllm --extra-index-url https://download.pytorch.org/whl/cu121
```

Verify the installation:

```python
import vllm

print(f"vLLM version: {vllm.__version__}")
```

Basic offline generation looks like this:

```python
from vllm import LLM, SamplingParams

# Initialize the model
llm = LLM(
    model="HuggingFaceTB/SmolLM3-3B",
    trust_remote_code=True,
    dtype="float16",
    max_model_len=4096,
)

# Create sampling parameters
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=512,
)

# Generate text
prompts = [
    "Explain the concept of machine learning in simple terms:",
    "Write a Python function to calculate factorial:",
]
outputs = llm.generate(prompts, sampling_params)

# Print results
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt}")
    print(f"Generated: {generated_text}")
    print("-" * 50)
```

```
Prompt: Explain the concept of machine learning in simple terms:
Generated: Explain the concept of machine learning in simple terms:
--------------------------------------------------
Prompt: Write a Python function to calculate factorial:
Generated: Write a Python function to calculate factorial:
--------------------------------------------------
```

For instruction-tuned models like SmolLM3, you can use chat templates:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Load model and tokenizer
model_name = "HuggingFaceTB/SmolLM3-3B"
llm = LLM(
    model=model_name,
    trust_remote_code=True,
    dtype="float16",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare chat messages
messages = [
    {"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}
]

# Apply chat template
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate response
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=1024,
)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

```
Give me a brief explanation of gravity in simple terms.
Gravity is the force that pulls objects towards each other. It is caused by the mass of the objects. The more mass an object has, the stronger its gravity is.
```

For production deployments, vLLM provides an OpenAI-compatible API server:
```bash
# Basic server
vllm serve HuggingFaceTB/SmolLM3-3B \
    --dtype float16 \
    --trust-remote-code

# Multi-GPU server
vllm serve HuggingFaceTB/SmolLM3-3B \
    --tensor-parallel-size 4 \
    --dtype float16 \
    --trust-remote-code \
    --port 8000
```

The server can then be queried with the OpenAI Python client:

```python
import openai

# Configure client
client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # Can be any string for a local server
)

# Chat completion
response = client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "user", "content": "Explain quantum computing briefly."}
    ],
    temperature=0.6,
    max_tokens=512,
)
print(response.choices[0].message.content)
```

```
Quantum computing is a type of computing that uses quantum bits instead of classical bits. Quantum bits are different from classical bits because they can be in a superposition of 0 and 1, which allows them to be in multiple states at once. This allows quantum computers to perform certain calculations much faster than classical computers.
```

Now that you understand how to use vLLM for efficient inference with Hugging Face models, you’re ready to integrate it into evaluation workflows. In the next section, we’ll explore how vLLM accelerates model evaluation with LightEval, providing significant speedups for benchmark testing and custom evaluation tasks.
The combination of vLLM’s high-throughput inference and LightEval’s comprehensive evaluation framework creates a powerful toolkit for assessing language model performance efficiently and at scale.