Fine-Tuning VLMs

In Unit 1, we explored supervised fine-tuning on LLMs, including efficient strategies using TRL. In this section, we adapt these techniques for Vision Language Models (VLMs), focusing on efficiency and task-specific performance.

Key Efficiency Techniques

When fine-tuning VLMs, memory and computation can quickly become a bottleneck. Here are the main strategies:

Quantization

Quantization reduces the precision of model weights and activations, lowering memory usage and speeding up computation.
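For instance, 4-bit loading can be configured at model-load time. This is a minimal sketch assuming the `bitsandbytes` and a recent `transformers` release (the `AutoModelForImageTextToText` class name and the SmolVLM2 checkpoint are taken as illustrative choices):

```python
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# Store weights in 4-bit NF4 while computing in bf16 for numerical stability.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForImageTextToText.from_pretrained(
    "HuggingFaceTB/SmolVLM2-2.2B-Instruct",  # illustrative checkpoint
    quantization_config=bnb_config,
)
```

Loading in 4-bit roughly quarters the weight memory compared to fp16, at a small cost in accuracy.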

PEFT & LoRA

Low-Rank Adaptation (LoRA) freezes the base model weights and trains compact rank-decomposition matrices, drastically reducing the number of trainable parameters. Applied through the PEFT library, it brings fine-tuning down from billions of trainable parameters to millions, making large VLMs accessible on limited hardware.

Batch Size Optimization

Memory-efficient training can be achieved with:

- Gradient accumulation, which simulates a large batch by summing gradients over several small micro-batches before each optimizer step
- Gradient checkpointing, which recomputes activations in the backward pass instead of storing them
- Mixed-precision training (e.g., bf16), which roughly halves activation memory
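Gradient accumulation is the simplest of these levers, and its trade-off is plain arithmetic. A self-contained sketch:

```python
# Gradient accumulation lets a small per-device batch behave like a large one:
# gradients from several micro-batches are summed before each optimizer step.
def effective_batch_size(per_device_batch: int, accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Number of samples contributing to a single optimizer update."""
    return per_device_batch * accumulation_steps * num_devices

# The SFT configuration later in this section uses a per-device batch of 4
# with 16 accumulation steps:
print(effective_batch_size(4, 16))  # -> 64
```

Peak memory is set by the per-device batch, while optimization behaves as if the effective batch were used, at the cost of proportionally fewer (but larger) optimizer steps.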

Supervised Fine-Tuning (SFT)

SFT adapts a pre-trained VLM to a specific task using labeled datasets (image-text pairs). Examples include visual question answering (e.g., on chart data such as ChartQA), image captioning, and document understanding.

When to Use SFT

Limitations

Usage Example

The SFTTrainer supports training VLMs directly.
Your dataset should include an additional images column containing the visual inputs. See the dataset format docs for details.

from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
   output_dir="./fine_tuned_model",
   per_device_train_batch_size=4,
   num_train_epochs=3,
   learning_rate=5e-5,
   save_steps=1000,
   bf16=True,
   gradient_checkpointing=True,
   gradient_accumulation_steps=16,
   logging_steps=50,
   max_length=None  # avoid truncating image tokens
)

⚠️ Important: Set max_length=None in the SFTConfig.
Otherwise, truncation may remove image tokens during training.

SFTConfig(max_length=None, ...)

Practical Steps

  1. Data Preparation

  2. Model Setup

  3. Fine-Tuning Process

Preference Optimization (DPO)

Direct Preference Optimization (DPO) aligns a VLM with human preferences by training on pairs of preferred (chosen) and dispreferred (rejected) responses, rather than on a single ground-truth label as in strict instruction following.
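TRL also provides a DPO trainer. A hedged setup sketch, with the hyperparameters and output path as illustrative assumptions:

```python
from trl import DPOConfig, DPOTrainer

training_args = DPOConfig(
    output_dir="./dpo_model",
    beta=0.1,                        # strength of the KL penalty against the reference model
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
)

# trainer = DPOTrainer(model=model, args=training_args,
#                      train_dataset=dataset, processing_class=processor)
# trainer.train()
```

The `beta` term controls how far the policy may drift from the reference model while fitting the preferences.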

Limitations

Usage Example

Question: How many families?
Rejected: The image does not provide information about families.
Chosen: The image shows a Union Organization table setup with 18,000 families.
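The preference pair above can be written as a single training record. The field names follow TRL's preference dataset convention (an assumption worth verifying against the dataset format docs); the image path is a placeholder:

```python
# One DPO training record for a VLM: a prompt, its visual input, and a
# preferred/dispreferred response pair.
record = {
    "images": ["union_table.png"],  # placeholder path for the image input
    "prompt": "How many families?",
    "chosen": "The image shows a Union Organization table setup with 18,000 families.",
    "rejected": "The image does not provide information about families.",
}

assert set(record) == {"images", "prompt", "chosen", "rejected"}
```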

SFT vs DPO Comparison

| Feature  | SFT                      | DPO                                        |
| -------- | ------------------------ | ------------------------------------------ |
| Input    | Labeled image-text pairs | Image-text + preference-ranked outputs     |
| Loss     | Standard supervised      | Preference-based                           |
| Goal     | Task-specific adaptation | Human-aligned output                       |
| Use Case | Domain specialization    | Creative, subjective, or multi-choice tasks |

Practical Tips

Next Steps

After fine-tuning, evaluate your VLM’s performance on multimodal tasks using benchmarks and custom test sets, applying techniques from Unit 2.

Fine-Tuning a VLM with hf jobs using TRL

As introduced in earlier units, Hugging Face Jobs make fine-tuning Vision Language Models (VLMs) straightforward. You can run Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) directly on the Hugging Face infrastructure with minimal setup, adjusting the training parameters we discussed previously.

Quick Example

hf jobs uv run \
   --flavor a100-large \
   --secrets HF_TOKEN \
   --timeout 2h \
   "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py" \
   --model_name_or_path HuggingFaceTB/SmolVLM2-2.2B-Instruct \
   --dataset_name HuggingFaceM4/ChartQA \
   --report_to trackio

The script handles processor setup, data formatting, and model training automatically. Once the job finishes, your fine-tuned VLM is ready to download and use in downstream tasks.

For memory-efficient fine-tuning of large VLMs, consider combining techniques like LoRA adapters, gradient accumulation, and quantization. These strategies help reduce memory usage while maintaining performance.
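Putting the three together, a combined configuration might look like this sketch (QLoRA-style; all values are illustrative assumptions, not tuned settings):

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig
from trl import SFTConfig

# 4-bit base weights, LoRA adapters on top, and gradient accumulation.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
training_args = SFTConfig(
    output_dir="./qlora_vlm",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,   # effective batch of 32
    gradient_checkpointing=True,
    max_length=None,                  # keep image tokens intact
)
# Load the model with quantization_config=bnb_config, then pass peft_config
# to SFTTrainer alongside training_args.
```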

Resources
