Commit eb07498 · Create README.md
Parent(s): 981e528

README.md ADDED
@@ -0,0 +1,58 @@
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for GPT-J 6B RLHF Summarization

<!-- Provide a quick summary of what the model is/does. -->

This model is GPT-J 6B fine-tuned on the TL;DR dataset with RLHF (reinforcement learning from human feedback), the same technique that powers ChatGPT.

TL;DR is a summarization dataset, so the model is fine-tuned for the summarization task.

This is likely the first publicly available open-source LLM fine-tuned with RLHF, thanks to CarperAI.

It aims to recreate the results of the [original OpenAI paper, "Learning to Summarize from Human Feedback"](https://arxiv.org/abs/2009.01325).
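The checkpoint can be loaded like any other causal LM with `transformers`. A minimal inference sketch, with two assumptions: the model id below is a placeholder (substitute this repository's Hub id), and the `\nTL;DR:` prompt suffix follows the dataset convention used in the linked trlX example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical id -- substitute this repository's model id on the Hub.
model_id = "CarperAI/trlx-gptj-summarize"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TL;DR-style prompt: the Reddit post followed by a "TL;DR:" cue.
post = "SUBREDDIT: r/travel\nTITLE: ...\nPOST: I spent three weeks backpacking ..."
prompt = post + "\nTL;DR:"

inputs = tokenizer(prompt, return_tensors="pt")
# Short generation budget: reference summaries in TL;DR are brief.
output_ids = model.generate(**inputs, max_new_tokens=50)
summary = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```
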
# Model Details

- Base Model: GPT-J 6B
- Fine-Tuning Method: PPO, RLHF (see the sketch after this list)
- Fine-Tuning Dataset: TL;DR
- Fine-Tuning Task: Summarization

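For orientation, a condensed sketch of PPO fine-tuning with trlX, loosely following the linked `summarize_rlhf` example. `default_ppo_config` and the toy `reward_fn` below are assumptions for illustration: the real example trains a separate GPT-J reward model on human preference data and scores samples with it.

```python
import trlx
from trlx.data.default_configs import default_ppo_config  # assumed helper

def reward_fn(samples, **kwargs):
    # Stand-in reward: the real example returns the trained reward
    # model's scalar score for each generated summary.
    return [float(len(set(s.split()))) for s in samples]

config = default_ppo_config()
config.model.model_path = "EleutherAI/gpt-j-6b"  # base model
config.tokenizer.tokenizer_path = "EleutherAI/gpt-j-6b"

prompts = ["POST: ...\nTL;DR:"]  # TL;DR posts formatted as prompts

# PPO loop: sample summaries from the policy, score them with
# reward_fn, and update the policy against a KL penalty to the base.
trainer = trlx.train(reward_fn=reward_fn, prompts=prompts, config=config)
```
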
## Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Duy V. Phung, Ayush Thakur, Louis Castricato, Jonathan Tow, Alex Havrilla
- **Finetuned from model:** GPT-J 6B

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/CarperAI/trlx/tree/main/examples/summarize_rlhf

## Results

SFT (supervised fine-tuning baseline) vs. PPO (RLHF):

__ROUGE scores__

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Average |
| --- | --- | --- | --- | --- |
| SFT | 0.334 | 0.125 | 0.261 | 0.240 |
| PPO | 0.323 | 0.109 | 0.238 | 0.223 |

__Reward scores__

| Model | Average Reward | Reward $\Delta$ |
| --- | --- | --- |
| SFT | 2.729 | -0.181 |
| PPO | 3.291 | +0.411 |

Note that PPO scores slightly lower on ROUGE but higher on the learned reward; this mirrors the original paper's finding that optimizing for human preference diverges from reference-overlap metrics like ROUGE.
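A sketch of how ROUGE numbers like those above can be computed with the Hugging Face `evaluate` library; the repo's own evaluation script is the authoritative reference, and the strings below are placeholders.

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["a model-generated summary ..."]  # SFT or PPO outputs
references = ["the human-written TL;DR ..."]     # gold summaries

scores = rouge.compute(predictions=predictions, references=references)
# "Average" in the table above is the mean of ROUGE-1/2/L.
average = (scores["rouge1"] + scores["rouge2"] + scores["rougeL"]) / 3
print(scores["rouge1"], scores["rouge2"], scores["rougeL"], average)
```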