In Unit 1, we explored supervised fine-tuning on LLMs, including efficient strategies using TRL. In this section, we adapt these techniques for Vision Language Models (VLMs), focusing on efficiency and task-specific performance.
When fine-tuning VLMs, memory and computation can quickly become a bottleneck. Here are the main strategies:
Quantization reduces the precision of model weights and activations, lowering memory usage and speeding up computation.
⚠️ Especially relevant for VLMs, where image features increase memory demands.
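To get a feel for the savings, here is a rough back-of-envelope calculation of weight memory at different precisions (the 2.2B parameter count matches the SmolVLM2 model used later; the helper function is purely illustrative):

```python
# Approximate memory needed just to store model weights at a given precision.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    return num_params * bits_per_param / 8 / 1e9

params = 2.2e9  # e.g. a SmolVLM2-2.2B-sized model
for name, bits in [("fp32", 32), ("bf16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {weight_memory_gb(params, bits):.1f} GB")
```

Note that activations, and in particular the extra image features a VLM processes, add to these numbers; quantization shrinks the weight term from roughly 8.8 GB (fp32) down to about 1.1 GB (int4) for a model of this size.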
Low-Rank Adaptation (LoRA) freezes the base model weights and trains compact rank-decomposition matrices, drastically reducing the number of trainable parameters. Applied through the PEFT library, fine-tuning updates millions of parameters instead of billions, making large VLMs trainable on limited hardware.
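The parameter reduction is easy to see: for a weight matrix of shape d x k, LoRA trains two small factors of rank r instead of the full matrix. A quick sketch (the layer size is illustrative, not taken from any specific model):

```python
# LoRA trains B (d x r) and A (r x k) in place of the full d x k update.
def lora_trainable_params(d: int, k: int, r: int) -> int:
    return r * (d + k)

d = k = 4096                               # an illustrative projection layer
full = d * k                               # params if trained directly
lora = lora_trainable_params(d, k, r=8)    # params trained with rank 8
print(f"LoRA trains {lora / full:.2%} of this layer's parameters")
```

Summed over all adapted layers, this is how fine-tuning ends up touching millions of parameters rather than billions.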
Memory-efficient training can also be achieved with gradient checkpointing, gradient accumulation, and mixed-precision (bf16) training; all three appear in the SFT configuration below.
SFT adapts a pre-trained VLM to a specific task using labeled datasets (image-text pairs), such as the chart question answering task used in the walkthrough below.
### Limitations

SFT depends on labeled image-text data, which can be costly to collect, and tuning on a narrow task can erode the model's general capabilities.
The `SFTTrainer` supports training VLMs directly. Your dataset should include an additional `images` column containing the visual inputs; see the dataset format docs for details.
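As a sketch, a row in such a dataset might look like the following (the field contents are illustrative; consult the dataset format docs for the exact schema):

```python
# Hypothetical VLM SFT example: chat-style messages plus an images column.
example = {
    "images": ["chart_0001.png"],  # file paths or PIL.Image objects
    "messages": [
        {"role": "user", "content": "How many families are shown in the chart?"},
        {"role": "assistant", "content": "The chart shows 18,000 families."},
    ],
}
```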
```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="./fine_tuned_model",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=5e-5,
    save_steps=1000,
    bf16=True,
    gradient_checkpointing=True,
    gradient_accumulation_steps=16,  # effective batch size: 4 x 16 = 64 per device
    logging_steps=50,
)
```

⚠️ Important: Set `max_length=None` in the `SFTConfig`.
Otherwise, truncation may remove image tokens during training.
```python
SFTConfig(max_length=None, ...)
```

### Data Preparation
This example uses the `HuggingFaceM4/ChartQA` dataset, which pairs chart images with questions and answers.

### Model Setup
The walkthrough uses `HuggingFaceTB/SmolVLM2-2.2B-Instruct` as the base model.

### Fine-Tuning Process
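Part of the process is converting each raw question-answer pair into the chat format the processor expects. A minimal sketch (the helper function and system prompt are assumptions, not taken from the course code):

```python
# Hypothetical conversion of a ChartQA-style row into chat messages.
def to_chat(question: str, answer: str) -> list[dict]:
    return [
        {"role": "system", "content": "You are a helpful chart analysis assistant."},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]

messages = to_chat("How many families?", "18,000 families.")
```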
Each training example is formatted as a chat with the usual roles (system, user, assistant).

Direct Preference Optimization (DPO) aligns a VLM with human preferences instead of strict instruction following.
### Limitations

DPO requires preference-ranked response pairs, which are harder to collect than plain labels, and its results depend heavily on annotation quality.
A preference pair for a chart image might look like this:

- **Question:** How many families?
- **Rejected:** The image does not provide information about families.
- **Chosen:** The image shows a Union Organization table setup with 18,000 families.
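In dataset form, such a pair might be represented as follows (the column names follow TRL's common preference-data convention of `prompt`/`chosen`/`rejected`, but treat the exact schema as an assumption and check the TRL dataset format docs):

```python
# Hypothetical VLM preference example for DPO.
preference_example = {
    "images": ["union_table.png"],
    "prompt": "How many families?",
    "chosen": "The image shows a Union Organization table setup with 18,000 families.",
    "rejected": "The image does not provide information about families.",
}
```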
| Feature | SFT | DPO |
|---|---|---|
| Input | Labeled image-text | Image-text + preference-ranked outputs |
| Loss | Standard supervised | Preference-based |
| Goal | Task-specific adaptation | Human-aligned output |
| Use Case | Domain specialization | Creative, subjective, or multi-choice tasks |
After fine-tuning, evaluate your VLM’s performance on multimodal tasks using benchmarks and custom test sets, applying techniques from Unit 2.
As introduced in earlier units, Hugging Face Jobs make fine-tuning Vision Language Models (VLMs) straightforward. You can run Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) directly on the Hugging Face infrastructure with minimal setup, adjusting the training parameters we discussed previously.
```bash
hf jobs uv run \
    --flavor a100-large \
    --secrets HF_TOKEN \
    --timeout 2h \
    "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py" \
    --model_name_or_path HuggingFaceTB/SmolVLM2-2.2B-Instruct \
    --dataset_name HuggingFaceM4/ChartQA \
    --report_to trackio
```

- `--flavor a100-large`: GPU type for training.
- `--secrets HF_TOKEN`: Your Hugging Face token.

The script handles processor setup, data formatting, and model training automatically. Once the job finishes, your fine-tuned VLM is ready to download and use in downstream tasks.
For memory-efficient fine-tuning of large VLMs, consider combining techniques like LoRA adapters, gradient accumulation, and quantization. These strategies help reduce memory usage while maintaining performance.
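As a very rough illustration of why these techniques combine well (all numbers here are back-of-envelope assumptions, not measurements): full fine-tuning with Adam stores fp32 weights, gradients, and two optimizer moments per parameter, while a quantized-base-plus-LoRA setup keeps the frozen model in 4-bit and holds full training state only for a small set of adapter parameters.

```python
params = 2.2e9       # base model size (SmolVLM2-2.2B scale)
adapter_params = 10e6  # assumed LoRA size; depends on rank and target modules

# Full fine-tuning: 4 B weights + 4 B grads + 8 B Adam moments = 16 bytes/param.
full_ft_gb = params * 16 / 1e9

# Quantized base (4-bit = 0.5 bytes/param) + full training state for adapters only.
qlora_gb = params * 0.5 / 1e9 + adapter_params * 16 / 1e9

print(f"full fine-tuning: ~{full_ft_gb:.0f} GB, quantized + LoRA: ~{qlora_gb:.1f} GB")
```

Activations and image features are excluded here, which is where gradient checkpointing and accumulation earn their keep.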