Merge method paper: [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with allura-org/MN-12b-RP-Ink as the base model.
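Model Stock measures, layer by layer, the angle between each fine-tuned model's task vector (its weight delta from the base) and uses that geometry to interpolate the fine-tuned average back toward the base weights. Below is a minimal per-tensor sketch of the paper's formula, assuming plain PyTorch state dicts already loaded in memory; it is illustrative only and not mergekit's actual implementation:

```python
import torch
import torch.nn.functional as F

def model_stock(base_sd: dict, finetuned_sds: list) -> dict:
    """Sketch of Model Stock's per-layer interpolation (arXiv:2403.19522).

    base_sd: state dict of the pretrained/base model (w_0).
    finetuned_sds: state dicts of the k fine-tuned models (w_1 .. w_k).
    """
    k = len(finetuned_sds)
    merged = {}
    for name, w0 in base_sd.items():
        w0f = w0.float()
        deltas = [sd[name].float() - w0f for sd in finetuned_sds]
        # Mean pairwise cosine similarity between task vectors approximates cos(theta).
        sims = [F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
                for i in range(k) for j in range(i + 1, k)]
        cos_theta = torch.stack(sims).mean() if sims else torch.tensor(1.0)
        # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
        t = k * cos_theta / (1 + (k - 1) * cos_theta)
        # Merged weight: pull the fine-tuned average back toward the base by (1 - t).
        w_avg = torch.stack([sd[name].float() for sd in finetuned_sds]).mean(dim=0)
        merged[name] = (t * w_avg + (1 - t) * w0f).to(w0.dtype)
    return merged
```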
The following models were included in the merge:
* VongolaChouko/Starcannon-Unleashed-12B-v1.0
* anthracite-org/magnum-v4-12b
* nbeerbower/Lyra4-Gutenberg-12B
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: allura-org/MN-12b-RP-Ink
models:
  - model: allura-org/MN-12b-RP-Ink
  - model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
  - model: anthracite-org/magnum-v4-12b
  - model: nbeerbower/Lyra4-Gutenberg-12B
parameters:
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
```
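Assuming a standard mergekit installation, a configuration like this is typically applied with the `mergekit-yaml` command-line tool, e.g. `mergekit-yaml config.yaml ./output-model-directory`; the file and output paths here are placeholders.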