d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation 🚀

This repository contains d3LLM-LLaDA, an ultra-fast diffusion language model presented in the paper d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation.

Model Description

d3LLM-LLaDA is an ultra-fast diffusion language model that strikes a balance between accuracy and parallelism. It uses pseudo-trajectory distillation to teach the model which tokens can be decoded confidently at early steps, and employs an entropy-based multi-block decoding mechanism with KV-cache refresh during inference. The released checkpoint has 8B parameters and is distributed as BF16 Safetensors.
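
To picture the entropy-based criterion, the sketch below is a minimal, illustrative Python snippet of the general idea, not the paper's exact algorithm: at each step it commits, in parallel, every masked position whose predictive entropy falls below a threshold, and falls back to the single most confident position so decoding always makes progress. The function name, threshold value, and fallback rule are all assumptions for illustration; the real decoder lives in the official repository.

```python
# Illustrative sketch of entropy-thresholded parallel decoding (NOT the
# authors' exact algorithm). Positions with low predictive entropy are
# committed together; otherwise the single most confident one is taken.
import torch

def entropy_parallel_step(logits: torch.Tensor,
                          masked: torch.Tensor,
                          threshold: float = 1.0):
    """logits: (seq_len, vocab) per-position predictions from the denoiser.
    masked: (seq_len,) bool, True where the token is still masked.
    Returns (positions_to_commit, token_ids) for this step."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(-1)
    entropy = entropy.masked_fill(~masked, float("inf"))  # skip decoded slots

    confident = (entropy < threshold) & masked
    if not confident.any():
        # Fall back to the single lowest-entropy masked position.
        confident = torch.zeros_like(masked)
        confident[entropy.argmin()] = True

    positions = confident.nonzero(as_tuple=True)[0]
    tokens = probs[positions].argmax(-1)
    return positions, tokens

# Toy demo: random logits over a 16-token block with a 100-word vocabulary.
torch.manual_seed(0)
logits = torch.randn(16, 100) * 3.0
masked = torch.ones(16, dtype=torch.bool)
pos, tok = entropy_parallel_step(logits, masked)
print(f"committing {len(pos)} tokens in parallel at positions {pos.tolist()}")
```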

Key Features

  • 🚀 High throughput: 5.0× faster than autoregressive models (Qwen-2.5-7B-it) on H100 GPU and 3.5× faster on A100 GPU.
  • 📊 High AUP: Achieves high Accuracy Under Parallelism scores across benchmarks.
  • 🔧 Task Optimization: Specifically optimized for coding and math reasoning tasks.

Installation

To use this model, clone the official repository and install the required dependencies:

```bash
# Clone the repository
git clone https://github.com/hao-ai-lab/d3LLM.git
cd d3LLM

# Install dependencies
pip install -r requirements.txt
```
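
Once the dependencies are in place, loading the checkpoint should follow the usual Hugging Face pattern. The snippet below is a hedged sketch assuming the model exposes a standard transformers interface via custom modeling code (`trust_remote_code=True`); the actual generation entry point and its sampling parameters are defined in the official repository and may differ.

```python
# Minimal loading sketch (assumes a standard transformers interface with
# custom modeling code; the exact generation call lives in the d3LLM repo).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "d3LLM/d3LLM_LLaDA"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # BF16 weights, per the model card
    trust_remote_code=True,
).to("cuda").eval()

inputs = tokenizer("Write a Python function that reverses a list.",
                   return_tensors="pt").to("cuda")
# Diffusion LLMs typically expose a custom decoding method through remote
# code rather than the vanilla autoregressive `generate`; consult the
# d3LLM repository for the exact call and its parallel-decoding knobs.
```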

Citation

If you find d3LLM useful for your research, please cite the following work:

```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {arXiv preprint arXiv:2601.07568},
  year    = {2026}
}
```