<div align="center">
<div align="center">
<h1>Reward Forcing: <br> Efficient Streaming Video Generation with <br> Rewarded Distribution Matching Distillation</h1>
<div>
<a href="#" target="_blank">Yunhong Lu</a><sup>1,2</sup>,
<a href="https://zengyh1900.github.io/" target="_blank">Yanhong Zeng</a><sup>2</sup>,
<a href="#" target="_blank">Haobo Li</a><sup>2,4</sup>,
<a href="https://ken-ouyang.github.io/" target="_blank">Hao Ouyang</a><sup>2</sup>,
<a href="https://github.com/qiuyu96" target="_blank">Qiuyu Wang</a><sup>2</sup>,
<a href="https://felixcheng97.github.io/" target="_blank">Ka Leong Cheng</a><sup>2</sup>,
<br>
<a href="#" target="_blank">Jiapeng Zhu</a><sup>2</sup>,
<a href="#" target="_blank">Hengyuan Cao</a><sup>1</sup>,
<a href="https://zhipengzhang.cn/" target="_blank">Zhipeng Zhang</a><sup>5</sup>,
<a href="https://openreview.net/profile?id=%7EXing_Zhu2" target="_blank">Xing Zhu</a><sup>2</sup>,
<a href="https://shenyujun.github.io/" target="_blank">Yujun Shen</a><sup>2</sup>,
<a href="#" target="_blank">Min Zhang</a><sup>1,3</sup>
</div>
<br>
<div>
<sup>1</sup>ZJU,
<sup>2</sup>Ant Group,
<sup>3</sup>SIAS-ZJU,
<sup>4</sup>HUST,
<sup>5</sup>SJTU
</div>
</div>
<br>
[arXiv](https://arxiv.org/abs/2512.04678)
[Project Page](https://reward-forcing.github.io/)
[Model (Hugging Face)](https://huggingface.co/JaydenLu666/Reward-Forcing-T2V-1.3B)
</div>
## 📌 Progress
- [x] 📄 Technical Report / Paper
- [x] 🏠 Project Homepage
- [x] 💻 Training & Inference Code
- [x] 🤗 Pretrained Model: T2V-1.3B
- [ ] 🚀 Pretrained Model: T2V-14B (In progress)
## 🎯 Overview
<div align="center">
<img src="assets/teaser.png" width="800px">
</div>
> **TL;DR**: We propose Reward Forcing to distill a bidirectional video diffusion model into a 4-step autoregressive student model that enables real-time (23.1 FPS) streaming video generation. Instead of using vanilla distribution matching distillation (DMD), Reward Forcing adopts a novel rewarded distribution matching distillation (Re-DMD) that prioritizes matching toward high-reward regions, yielding enhanced object motion and more immersive scene navigation in generated videos.
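The reward-weighting idea behind Re-DMD can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: the softmax weighting rule, the `beta` temperature, and all function names here are assumptions.

```python
import numpy as np

def reward_weights(rewards, beta=1.0):
    """Softmax weighting over per-sample rewards (assumption: higher-reward
    samples should exert more influence on the matching objective)."""
    z = beta * np.asarray(rewards, dtype=float)
    z = z - z.max()              # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def rewarded_dmd_direction(teacher_score, student_score, rewards, beta=1.0):
    """Vanilla DMD moves the student along (student_score - teacher_score).
    This sketch re-weights that per-sample direction by reward, so the
    distribution match is pulled toward high-reward regions."""
    w = reward_weights(rewards, beta)                          # shape (batch,)
    diff = np.asarray(student_score) - np.asarray(teacher_score)
    return w[:, None] * diff                                   # weighted per-sample direction
```

With rewards `[0.0, 10.0]`, the second sample dominates the weighted direction, which is the qualitative behavior the TL;DR describes.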
## 📑 Table of Contents
- [Requirements](#-requirements)
- [Installation](#-installation)
- [Pretrained Checkpoints](#-pretrained-checkpoints)
- [Inference](#-inference)
- [Training](#-training)
- [Results](#-results)
- [Citation](#-citation)
- [Acknowledgements](#-acknowledgements)
- [Contact](#-contact)
## 🔧 Requirements
- GPU: NVIDIA GPU with at least 24 GB of memory for inference and 80 GB for training.
- RAM: 64 GB or more recommended.
- OS: Linux.
## 🛠️ Installation
### Step 1: Clone the repository
```bash
git clone https://github.com/JaydenLyh/Reward-Forcing.git
cd Reward-Forcing
```
### Step 2: Create conda environment
```bash
conda create -n reward_forcing python=3.10
conda activate reward_forcing
```
### Step 3: Install dependencies
```bash
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```
### Step 4: Install the package
```bash
pip install -e .
```
## 📦 Pretrained Checkpoints
### Download Links
| Model | Download |
|-------|----------|
| VideoReward | [Hugging Face](https://huggingface.co/KlingTeam/VideoReward) |
| Wan2.1-T2V-1.3B | [Hugging Face](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) |
| Wan2.1-T2V-14B | [Hugging Face](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) |
| ODE Initialization | [Hugging Face](https://huggingface.co/gdhe17/Self-Forcing/blob/main/checkpoints/ode_init.pt) |
| Reward Forcing | [Hugging Face](https://huggingface.co/JaydenLu666/Reward-Forcing-T2V-1.3B) |
### File Structure
After downloading, organize the checkpoints as follows:
```
checkpoints/
├── Videoreward/
│   ├── checkpoint-11352/
│   └── model_config.json
├── Wan2.1-T2V-1.3B/
├── Wan2.1-T2V-14B/
├── Reward-Forcing-T2V-1.3B/
└── ode_init.pt
```
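Before running inference or training, it can help to sanity-check that everything landed where the tree above expects. The helper below is only a convenience sketch (the path list is taken from the tree; the function itself is not part of the repo):

```python
from pathlib import Path

# Expected entries, taken from the checkpoint tree above
EXPECTED = [
    "Videoreward/checkpoint-11352",
    "Videoreward/model_config.json",
    "Wan2.1-T2V-1.3B",
    "Wan2.1-T2V-14B",
    "Reward-Forcing-T2V-1.3B",
    "ode_init.pt",
]

def missing_checkpoints(root="checkpoints"):
    """Return the expected checkpoint entries not present under `root`."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]
```

If the returned list is non-empty, re-download the missing entries before proceeding.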
### Quick Download Script
```bash
pip install "huggingface_hub[cli]"
# Download all checkpoints
bash download_checkpoints.sh
```
## 🚀 Inference
### Quick Start
```bash
# 5-second video inference
python inference.py \
--num_output_frames 21 \
--config_path configs/reward_forcing.yaml \
--checkpoint_path checkpoints/Reward-Forcing-T2V-1.3B/rewardforcing.pt \
--output_folder videos/rewardforcing-5s \
--data_path prompts/MovieGenVideoBench_extended.txt \
--use_ema
# 30-second video inference
python inference.py \
--num_output_frames 120 \
--config_path configs/reward_forcing.yaml \
--checkpoint_path checkpoints/Reward-Forcing-T2V-1.3B/rewardforcing.pt \
--output_folder videos/rewardforcing-30s \
--data_path prompts/MovieGenVideoBench_extended.txt \
--use_ema
```
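Note that `--num_output_frames` counts latent frames, not decoded video frames. Assuming a Wan2.1-style causal VAE that decodes `4 * latents - 3` video frames played back at 16 FPS (both values are assumptions; check your config for the exact factors), the presets above map to durations like this:

```python
def video_seconds(num_output_frames, fps=16, temporal_factor=4):
    """Approximate clip length for a given --num_output_frames value.
    Assumes a causal VAE decoding (temporal_factor * latents - (temporal_factor - 1))
    video frames at `fps` -- assumptions, not repo-verified constants."""
    decoded = temporal_factor * num_output_frames - (temporal_factor - 1)
    return decoded / fps
```

Under these assumptions, `video_seconds(21)` is about 5.06 and `video_seconds(120)` about 29.8, matching the 5-second and 30-second presets above.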
## 🏋️ Training
### Multi-GPU Training
```bash
# bash train.sh
torchrun --nnodes=1 --nproc_per_node=8 --rdzv_id=5235 --rdzv_backend=c10d \
--rdzv_endpoint=localhost:$MASTER_PORT train.py --config_path configs/reward_forcing.yaml \
--logdir logs/reward_forcing \
--disable-wandb
```
### Multi-Node Training
```bash
torchrun --nnodes=$NODE_SIZE --nproc_per_node=8 --node-rank=$NODE_RANK --rdzv_id=5235 --rdzv_backend=c10d \
--rdzv_endpoint=$MASTER_IP:$MASTER_PORT train.py --config_path configs/reward_forcing.yaml \
--logdir logs/reward_forcing \
--disable-wandb
```
### Configuration Files
Training configurations are in `configs/`:
- `default_config.yaml`: Default configuration
- `reward_forcing.yaml`: Reward Forcing configuration
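A common pattern with a default-plus-experiment config pair is to layer the experiment file over the defaults, with the experiment winning on conflicts. The loader below is only a sketch of that pattern (a shallow merge; the repo's actual config handling may differ):

```python
import yaml  # PyYAML

def load_config(experiment_path, base_path="configs/default_config.yaml"):
    """Shallow-merge an experiment YAML over the default config (illustrative)."""
    with open(base_path) as f:
        cfg = yaml.safe_load(f) or {}
    with open(experiment_path) as f:
        cfg.update(yaml.safe_load(f) or {})  # experiment values override defaults
    return cfg
```

Keys present only in the defaults survive; keys set in `reward_forcing.yaml` take precedence.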
## 📊 Results
### Quantitative Results
#### Performance on VBench
| Method | Total Score | Quality Score | Semantic Score | Params | FPS |
|--------|----------|----------|----------|--------|-----|
| SkyReels-V2 | 82.67 | 84.70 | 74.53 | 1.3B | 0.49 |
| MAGI-1 | 79.18 | 82.04 | 67.74 | 4.5B | 0.19 |
| NOVA | 80.12 | 80.39 | 79.05 | 0.6B | 0.88 |
| Pyramid Flow | 81.72 | 84.74 | 69.62 | 2B | 6.7 |
| CausVid | 82.88 | 83.93 | 78.69 | 1.3B | 17.0 |
| Self Forcing | 83.80 | 84.59 | 80.64 | 1.3B | 17.0 |
| LongLive | 83.22 | 83.68 | **81.37** | 1.3B | 20.7 |
| **Ours** | **84.13** | **84.84** | 81.32 | 1.3B | **23.1** |
### Qualitative Results
Visualizations can be found in our [Project Page](https://reward-forcing.github.io/).
## 📝 Citation
If you find this work useful, please consider citing:
```bibtex
@article{lu2025reward,
title={Reward Forcing: Efficient Streaming Video Generation with Rewarded Distribution Matching Distillation},
author={Lu, Yunhong and Zeng, Yanhong and Li, Haobo and Ouyang, Hao and Wang, Qiuyu and Cheng, Ka Leong and Zhu, Jiapeng and Cao, Hengyuan and Zhang, Zhipeng and Zhu, Xing and others},
journal={arXiv preprint arXiv:2512.04678},
year={2025}
}
```
## 🙏 Acknowledgements
This project is built upon several excellent works: [CausVid](https://github.com/tianweiy/CausVid), [Self Forcing](https://github.com/guandeh17/Self-Forcing), [Infinite Forcing](https://github.com/SOTAMak1r/Infinite-Forcing), [Wan2.1](https://github.com/Wan-Video/Wan2.1), [VideoAlign](https://github.com/KlingTeam/VideoAlign)
We thank the authors for their great work and open-source contributions.
## 📧 Contact
For questions and discussions, please:
- Open an issue on [GitHub Issues](https://github.com/JaydenLyh/Reward-Forcing/issues)
- Contact us at: [email protected]