Merge method from the paper *Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch* (arXiv:2311.03099).
This is a merge of pre-trained language models created using mergekit. It is one of the component merges for EtherealRainbow-v0.2-8B. On its own it is largely unusable (many bugs and issues), but when merged into a different base it appears to increase response length and prose quality.
Uploading it in case others find it useful.
This model was merged using the DARE TIES merge method using Gryphe/Pantheon-RP-1.0-8b-Llama-3 as a base.
The following models were included in the merge:
* aaditya/Llama3-OpenBioLLM-8B
* Blackroot/Llama-3-LongStory
* Locutusque/Llama-3-Hercules-5.0-8B
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.36
      weight: 0.2
  - model: Blackroot/Llama-3-LongStory
    parameters:
      density: 0.40
      weight: 0.3
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.49
      weight: 0.5
merge_method: dare_ties
base_model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
parameters:
  int8_mask: true
dtype: bfloat16
```