vLLM Inference with Hugging Face Models

Before we dive into evaluation, we need to understand how to run inference with our models efficiently. While we can use the standard transformers pipeline() approach:

from transformers import pipeline

# Create a pipeline with a specific model
generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM3-3B",
    dtype="auto",
    device_map="auto"
)

# Generate text
response = generator(
    "Write a short poem about coding:",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7
)
print(response[0]['generated_text'])

This approach has significant limitations for evaluation scenarios: prompts are processed one at a time (or in small static batches), the KV cache is allocated per request without sharing, and the GPU sits idle between generations. Across thousands of evaluation prompts, this quickly becomes the bottleneck.

For high-throughput evaluation and production deployment, we need a more sophisticated inference engine like vLLM.

What is vLLM?

vLLM (pronounced “vee-LLM”) is a high-throughput and memory-efficient inference engine for large language models. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project that revolutionizes LLM serving with several key innovations:

Key Features

  1. PagedAttention - paged KV-cache management that minimizes memory fragmentation

  2. Continuous batching - requests join and leave the running batch dynamically

  3. OpenAI-compatible API server - a drop-in endpoint for existing OpenAI clients

  4. Hugging Face integration - models load directly from the Hub by repository ID

  5. Quantization support - including GPTQ, AWQ, and FP8

Why vLLM Matters for Evaluation

When running large-scale evaluations, traditional inference methods become bottlenecks: requests are served sequentially or in fixed batches, KV-cache memory is heavily over-allocated, and GPU utilization stays low while short requests wait for long ones to finish.

vLLM addresses these issues with techniques that can achieve up to 24x higher throughput than naive Hugging Face Transformers inference.

vLLM and Hugging Face Integration

vLLM integrates deeply with the Hugging Face ecosystem, as detailed in the vLLM Hugging Face integration documentation. Here’s how it works:

Model Loading Process

When you run vLLM with a Hugging Face model, the following happens:

  1. Model Discovery - vLLM checks for the model’s config.json file

  2. Configuration Loading - Converts the config file into a dictionary and determines the model type

  3. Model Class Initialization - Uses the architectures field to map to the appropriate vLLM model class

  4. Tokenizer Integration - Loads the tokenizer using AutoTokenizer.from_pretrained

  5. Weight Loading - Downloads model weights in safetensors format (recommended) or PyTorch bin format
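The first three steps can be sketched as a config-to-class lookup. This is a simplified illustration: the registry dict below is a stand-in for vLLM's internal model registry, not its real API.

```python
# Simplified illustration of steps 1-3: parse config.json and map the
# `architectures` entry to a model implementation. MODEL_REGISTRY is a
# stand-in for vLLM's internal registry, not its actual structure.
import json

config_json = '{"model_type": "smollm3", "architectures": ["SmolLM3ForCausalLM"]}'

# Step 2: convert the config file into a dictionary
config = json.loads(config_json)

# Step 3: look up the architecture name in an engine-side registry
MODEL_REGISTRY = {"SmolLM3ForCausalLM": "<vLLM SmolLM3 model class>"}  # stand-in
arch = config["architectures"][0]
print(arch, "->", MODEL_REGISTRY[arch])
```

If the architecture is not in the registry, vLLM cannot serve the model natively, which is why the architectures field in config.json matters so much for compatibility.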

Core Technologies Behind vLLM

PagedAttention: Revolutionary Memory Management

PagedAttention is vLLM’s key innovation that addresses a critical bottleneck in LLM inference: KV cache memory management. During text generation, models store attention keys and values (KV cache) for each generated token to avoid redundant computations. The KV cache can become enormous, especially with long sequences or multiple concurrent requests.

vLLM’s breakthrough lies in how it manages this cache:

  1. Memory Paging - Instead of treating the KV cache as one large block, it’s divided into fixed-size “pages” (similar to virtual memory in operating systems).
  2. Non-contiguous Storage - Pages don’t need to be stored contiguously in GPU memory, allowing for more flexible memory allocation.
  3. Page Table Management - A page table tracks which pages belong to which sequence, enabling efficient lookup and access.
  4. Memory Sharing - For operations like parallel sampling, pages storing the KV cache for the prompt can be shared across multiple sequences.

The PagedAttention approach can lead to up to 24x higher throughput compared to traditional methods, making it a game-changer for production LLM deployments.
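A toy sketch may make the paging scheme above concrete. The page size is hypothetical and the "pages" are Python lists; real vLLM manages fixed-size blocks of GPU memory.

```python
# Toy sketch of PagedAttention-style bookkeeping (hypothetical page
# size; real vLLM allocates fixed-size GPU memory blocks, not lists).
PAGE_SIZE = 4  # tokens of KV cache per page

physical_pages = {}   # page id -> cached token entries
page_tables = {}      # sequence id -> ordered list of its page ids
next_page = 0

def append_token(seq_id, token):
    """Store one token's KV entry, allocating a new page when the last is full."""
    global next_page
    table = page_tables.setdefault(seq_id, [])
    if not table or len(physical_pages[table[-1]]) == PAGE_SIZE:
        physical_pages[next_page] = []   # pages need not be contiguous
        table.append(next_page)
        next_page += 1
    physical_pages[table[-1]].append(token)

# Interleave two sequences: their pages end up non-contiguous in "memory"
for t in range(6):
    append_token("seq-a", f"a{t}")
    append_token("seq-b", f"b{t}")

print(page_tables)  # {'seq-a': [0, 2], 'seq-b': [1, 3]}
```

Note how each sequence's pages are scattered (seq-a owns pages 0 and 2, seq-b owns 1 and 3): the page table, not physical adjacency, defines the sequence, which is exactly what lets vLLM pack memory tightly and share prompt pages across sequences.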

Continuous Batching for Optimal GPU Utilization

Traditional batching waits for all sequences in a batch to complete before starting the next batch. vLLM implements continuous batching, which:

  1. Admits new requests as soon as any running sequence finishes, rather than at batch boundaries

  2. Evicts completed sequences immediately, freeing their KV-cache pages for reuse

  3. Keeps the GPU saturated even when request lengths vary widely

This approach is particularly beneficial for evaluation workloads where requests have varying lengths and completion times.
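A minimal simulation shows why this matters. The numbers here are made up, and vLLM's real scheduler also accounts for memory pressure and preemption; this only illustrates the scheduling difference.

```python
# Toy comparison of static vs continuous batching. Each request needs
# some number of decode steps; the "GPU" runs up to CAPACITY at once.
# Illustrative only - vLLM's scheduler is far more sophisticated.
CAPACITY = 2

def static_batching(lengths):
    """Each batch waits for its longest member before the next starts."""
    steps = 0
    for i in range(0, len(lengths), CAPACITY):
        steps += max(lengths[i:i + CAPACITY])
    return steps

def continuous_batching(lengths):
    """Admit a new request the moment a slot frees up."""
    pending, running, steps = list(lengths), [], 0
    while pending or running:
        while pending and len(running) < CAPACITY:
            running.append(pending.pop(0))
        running = [r - 1 for r in running]      # one decode step for all
        running = [r for r in running if r > 0]  # evict finished sequences
        steps += 1
    return steps

lengths = [5, 1, 1, 1]  # one long request, three short ones
print(static_batching(lengths), continuous_batching(lengths))  # 6 5
```

With one long request and three short ones, the static scheduler burns whole batch slots waiting on the straggler, while the continuous scheduler back-fills freed slots immediately; the gap grows as length variance and request count increase.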

Installation and Setup

Basic Installation

# Install vLLM
pip install vllm

# Or with specific CUDA version
pip install vllm --extra-index-url https://download.pytorch.org/whl/cu121

Verify Installation

import vllm
print(f"vLLM version: {vllm.__version__}")

Basic vLLM Usage with Hugging Face Models

Simple Text Generation

from vllm import LLM, SamplingParams

# Initialize the model
llm = LLM(
    model="HuggingFaceTB/SmolLM3-3B",
    trust_remote_code=True,
    dtype="float16",
    max_model_len=4096,
)

# Create sampling parameters
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=512,
)

# Generate text
prompts = [
    "Explain the concept of machine learning in simple terms:",
    "Write a Python function to calculate factorial:",
]

outputs = llm.generate(prompts, sampling_params)

# Print results
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt}")
    print(f"Generated: {generated_text}")
    print("-" * 50)
Output
Prompt: Explain the concept of machine learning in simple terms:
Generated: Explain the concept of machine learning in simple terms:
--------------------------------------------------
Prompt: Write a Python function to calculate factorial:
Generated: Write a Python function to calculate factorial:
--------------------------------------------------

Chat Template Support

For instruction-tuned models like SmolLM3, you can use chat templates:

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Load model and tokenizer
model_name = "HuggingFaceTB/SmolLM3-3B"
llm = LLM(
    model=model_name,
    trust_remote_code=True,
    dtype="float16",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare chat messages
messages = [
    {"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}
]

# Apply chat template
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate response
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=1024,
)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
Output
Give me a brief explanation of gravity in simple terms.
Gravity is the force that pulls objects towards each other. It is caused by the mass of the objects. The more mass an object has, the stronger its gravity is.

vLLM API Server

For production deployments, vLLM provides an OpenAI-compatible API server:

Starting the Server

# Basic server
vllm serve HuggingFaceTB/SmolLM3-3B \
    --dtype float16 \
    --trust-remote-code

# Multi-GPU server
vllm serve HuggingFaceTB/SmolLM3-3B \
    --tensor-parallel-size 4 \
    --dtype float16 \
    --trust-remote-code \
    --port 8000
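Because the server speaks the OpenAI API, you can also query it without any Python client at all. For example, with curl (assuming the server above is running on port 8000):

```shell
# Query the OpenAI-compatible chat endpoint directly
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "HuggingFaceTB/SmolLM3-3B",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 32
    }'
```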

Using the API

import openai

# Configure client
client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # Can be any string for local server
)

# Chat completion
response = client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "user", "content": "Explain quantum computing briefly."}
    ],
    temperature=0.6,
    max_tokens=512,
)

print(response.choices[0].message.content)
Output
Quantum computing is a type of computing that uses quantum bits instead of classical bits. Quantum bits are different from classical bits because they can be in a superposition of 0 and 1, which allows them to be in multiple states at once. This allows quantum computers to perform certain calculations much faster than classical computers.

Next Steps

Now that you understand how to use vLLM for efficient inference with Hugging Face models, you’re ready to integrate it into evaluation workflows. In the next section, we’ll explore how vLLM accelerates model evaluation with LightEval, providing significant speedups for benchmark testing and custom evaluation tasks.

The combination of vLLM’s high-throughput inference and LightEval’s comprehensive evaluation framework creates a powerful toolkit for assessing language model performance efficiently and at scale.
