GT-GRPO: Qwen3-8B-Base trained on DAPO-14k
This model is a checkpoint of Qwen3-8B-Base trained with GT-GRPO on the DAPO-14k dataset. It is part of the research presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
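The checkpoint can be loaded like any causal language model with the Hugging Face transformers library. The snippet below is a minimal usage sketch; the repository ID is a placeholder assumption, so substitute the actual Hub ID of this checkpoint.

```python
# Minimal inference sketch with transformers. The model ID is a placeholder,
# not the real Hub ID of this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/GT-GRPO-Qwen3-8B-Base-DAPO-14k"  # placeholder assumption

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```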
Paper Abstract Summary
The paper introduces Co-rewarding, a novel self-supervised reinforcement learning (RL) framework designed to enhance the reasoning ability of large language models (LLMs). It addresses the common issue of training collapse in self-rewarding methods by seeking complementary supervision from multiple views. Co-rewarding is instantiated in two ways: on the data side (Co-rewarding-I), using contrastive agreement across semantically analogous questions, and on the model side (Co-rewarding-II), via self-distillation from a slowly updated reference teacher. This approach improves training stability and significantly outperforms other self-rewarding baselines on various mathematical reasoning benchmarks, sometimes even surpassing RL with verifiable rewards (RLVR) methods that use ground-truth labels.
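To make the data-side idea concrete, the sketch below is a schematic illustration only, not the authors' implementation: it assumes each question has a semantically analogous paraphrase, and that rollouts on one view are rewarded by agreement with the majority answer extracted from rollouts on the other view, which serves as a pseudo-label in place of ground truth.

```python
# Schematic sketch of a data-side agreement reward (illustrative assumption,
# not the Co-rewarding-I reference code).
from collections import Counter
from typing import List

def majority_answer(answers: List[str]) -> str:
    """Pseudo-label: the most frequent final answer among rollouts on one view."""
    return Counter(answers).most_common(1)[0][0]

def agreement_rewards(answers_view_a: List[str], answers_view_b: List[str]) -> List[float]:
    """Reward each rollout on view A by whether it matches view B's majority answer."""
    pseudo_label = majority_answer(answers_view_b)
    return [1.0 if ans == pseudo_label else 0.0 for ans in answers_view_a]

# Example: rollouts on the original question scored against the paraphrase's majority.
print(agreement_rewards(["42", "41", "42"], ["42", "42", "7"]))  # [1.0, 0.0, 1.0]
```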
GitHub Repository
For more details, including installation instructions, training procedures, and other released checkpoints and datasets related to the Co-rewarding framework, please refer to the official GitHub repository.
Citation
If you use our datasets or models, please cite our paper:
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}