Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks

Overview

Jan-v2-VL-max-Instruct extends the Jan-v2-VL family with a 30B-parameter vision–language model focused on research capabilities.

Jan Web

Hosted on Jan Web: use the model directly at chat.jan.ai

Local Deployment

Using vLLM: We recommend vLLM for serving and inference; all reported results were obtained with vLLM 0.12.0. For FP8 deployment, we used llm-compressor built from source. Pin transformers==4.57.1 for compatibility.

# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"

vllm serve Menlo/Jan-v2-VL-max-Instruct-FP8 \
    --host 0.0.0.0 \
    --port 1234 \
    -dp 1 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
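
Once the server is up, it exposes an OpenAI-compatible API at http://localhost:1234/v1, so any OpenAI client library can talk to it. Below is a minimal sketch of an image-plus-text request using the openai Python package; the image URL, prompt, and placeholder API key are illustrative assumptions, not part of the official documentation.

from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key locally;
# any placeholder value works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Menlo/Jan-v2-VL-max-Instruct-FP8",
    messages=[{
        "role": "user",
        "content": [
            # Images are sent as image_url parts alongside the text prompt.
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Describe the UI elements visible in this screenshot."},
        ],
    }],
)
print(response.choices[0].message.content)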
    

Recommended Parameters

For optimal performance on agentic and general tasks, we recommend the following inference parameters (a request sketch applying them follows the list):

temperature: 0.7
top_p: 0.8
top_k: 20
repetition_penalty: 1.0
presence_penalty: 0.0
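These values can be set per request. As a minimal sketch, assuming the vLLM server from the previous section is running on port 1234: temperature, top_p, and presence_penalty map directly onto standard OpenAI chat-completion parameters, while top_k and repetition_penalty are vLLM-specific sampling extensions passed through extra_body.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Menlo/Jan-v2-VL-max-Instruct-FP8",
    messages=[{"role": "user", "content": "Plan the steps to rename a batch of files from a terminal."}],
    # Standard OpenAI sampling parameters.
    temperature=0.7,
    top_p=0.8,
    presence_penalty=0.0,
    # vLLM-specific sampling extensions are forwarded via extra_body.
    extra_body={"top_k": 20, "repetition_penalty": 1.0},
)
print(response.choices[0].message.content)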

🤝 Community & Support

📄 Citation

To be updated soon.