Qwen-Image-2512 – Quanto INT8 (Transformer-only)

This repository contains only the INT8-quantized transformer/ component of Qwen/Qwen-Image-2512.

Use it to:

  • inject it into the base pipeline at load time (shown below), or
  • compose your own full model directory (see the sketch after the usage example).

Install

pip install -U diffusers transformers accelerate safetensors optimum-quanto
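
QwenImageTransformer2DModel is a relatively recent addition to diffusers; if the import below fails, upgrade diffusers (the exact minimum version is not stated here):

import diffusers
from diffusers import QwenImageTransformer2DModel  # raises ImportError on older releases
print(diffusers.__version__)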

Usage (Inject into base pipeline)

import torch
from diffusers import DiffusionPipeline, QwenImageTransformer2DModel

base_id = "Qwen/Qwen-Image-2512"
qtrans_id = "ixim/Qwen-Image-2512-Quanto-INT8-transformer"

# Load the INT8-quantized transformer from this repo
q_transformer = QwenImageTransformer2DModel.from_pretrained(
    qtrans_id,
    torch_dtype=torch.bfloat16,
)

# Load the base pipeline, swapping in the quantized transformer
pipe = DiffusionPipeline.from_pretrained(
    base_id,
    transformer=q_transformer,
    torch_dtype=torch.bfloat16,
)

# Optional memory savers for smaller GPUs
pipe.enable_attention_slicing()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

# Offload idle components to CPU when accelerate supports it;
# otherwise keep the whole pipeline on the GPU.
try:
    pipe.enable_model_cpu_offload()
except Exception:
    pipe.to("cuda")

img = pipe("a clean product poster", height=512, width=512, num_inference_steps=10).images[0]
img.save("out.png")
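
Usage (Compose a full model directory)

For the second use case above, you can save the patched pipeline back to disk as one self-contained directory. A minimal sketch; the local path is a placeholder, and it assumes your diffusers/optimum-quanto versions can serialize quanto-quantized modules:

# Continuing from the pipe built above: writes every component,
# including the INT8 transformer, into a single local directory.
pipe.save_pretrained("./Qwen-Image-2512-INT8")

# The composed directory then loads like any other pipeline.
pipe2 = DiffusionPipeline.from_pretrained(
    "./Qwen-Image-2512-INT8",
    torch_dtype=torch.bfloat16,
)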

Acknowledgements

  • Base model: Qwen/Qwen-Image-2512 (Apache-2.0)
  • Quantization: Diffusers QuantoConfig / Optimum-Quanto
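
Reproducing the quantization

For reference, the QuantoConfig route named above looks roughly like this. A hedged sketch, not the exact command used for this repo; the weights_dtype value and output path are illustrative:

import torch
from diffusers import QuantoConfig, QwenImageTransformer2DModel

# Quantize the base transformer's weights to INT8 at load time, then
# save a standalone transformer folder like the one in this repo.
transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/Qwen-Image-2512",
    subfolder="transformer",
    quantization_config=QuantoConfig(weights_dtype="int8"),
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("./Qwen-Image-2512-INT8-transformer")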