# qwen35-27b-stage3-instruct

This model was trained with supervised fine-tuning (SFT).

W&B run: https://wandb.ai/cooawoo-personal/huggingface/runs/a1qd7i8u

## Training procedure

### Hyperparameters

| Parameter | Value |
|---|---|
| Learning rate | 1e-05 |
| LR scheduler | constant_with_warmup |
| Per-device batch size | 1 |
| Gradient accumulation | 8 |
| Effective batch size | 8 |
| Epochs | 1 |
| Max sequence length | 6144 |
| Optimizer | paged_adamw_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.03 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
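The effective batch size follows directly from the per-device batch size and gradient accumulation. A minimal sketch of the arithmetic, assuming a single data-parallel replica (the training config shards the model across two GPUs rather than replicating it):

```python
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
data_parallel_replicas = 1  # assumption: model parallelism, one optimizer replica

effective_batch_size = (per_device_train_batch_size
                        * gradient_accumulation_steps
                        * data_parallel_replicas)
print(effective_batch_size)  # 8, matching the table
```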

### LoRA configuration

| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 16 |
| Target modules | `down_proj`, `gate_proj`, `in_proj_a`, `in_proj_b`, `in_proj_qkv`, `in_proj_z`, `k_proj`, `o_proj`, `out_proj`, `q_proj`, `up_proj`, `v_proj` |
| rsLoRA | yes |
| Quantization | 4-bit (nf4) |
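With rsLoRA enabled, the adapter update is scaled by alpha/√r rather than the standard alpha/r, which keeps the scaling stable as the rank grows. A sketch of the scaling arithmetic for the values above (not the training code itself):

```python
import math

r, alpha = 32, 16
standard_scaling = alpha / r            # classic LoRA scaling: 0.5
rslora_scaling = alpha / math.sqrt(r)   # rsLoRA scaling: ~2.83
```

So at r=32 the rsLoRA variant amplifies the adapter contribution by a factor of roughly 5.7 relative to classic LoRA scaling with the same alpha.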

### Dataset statistics

| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| json/ToastyPigeon/Qwen3.5-27B-Stage2-Marvin-CPT | 7,522 | 29,213,740 | 29,213,740 |
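At an effective batch size of 8, one epoch over the 7,522 samples works out to roughly 941 optimizer steps, and a warmup ratio of 0.03 to about 29 warmup steps (the exact rounding depends on the trainer). A quick sketch:

```python
import math

samples = 7_522
effective_batch_size = 8
steps_per_epoch = math.ceil(samples / effective_batch_size)  # 941
warmup_steps = math.ceil(0.03 * steps_per_epoch)             # 29
```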
### Training config

```yaml
model_name_or_path: merged
output_dir: runs/qwen35-27b-stage3-instruct
attn_implementation: flash_attention_2
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_cce: true
model_parallel: true
max_memory:
  0: 18GiB
  1: 18GiB
chunked_mlp: true
chunked_mlp_chunks: 8
max_length: 6144
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 16
lora_dropout: 0.0
use_rslora: true
lora_target_modules:
- in_proj_qkv
- in_proj_z
- in_proj_a
- in_proj_b
- out_proj
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
data_config: configs/qwen35-27b-stage3-instruct/data.yaml
prepared_dataset: runs/qwen35-27b-stage3-instruct/prepared
auto_mask_reasoning: true
learning_rate: 1.0e-05
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.03
weight_decay: 0.01
max_grad_norm: 1.0
optim: paged_adamw_8bit
num_train_epochs: 1
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 250
save_total_limit: 3
report_to: wandb
run_name: qwen35-27b-stage3-instruct
```
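The `chunked_mlp: true` / `chunked_mlp_chunks: 8` options cap activation memory by evaluating the MLP over the sequence in slices rather than all at once. A toy illustration of the idea with a hypothetical helper (not this repo's actual implementation):

```python
import math

def apply_in_chunks(fn, xs, n_chunks):
    """Apply fn to xs in n_chunks slices and concatenate the results.

    The output matches fn(xs) whenever fn acts position-wise (as an MLP
    block does across sequence positions), but the peak size of any
    intermediate result shrinks by roughly a factor of n_chunks.
    """
    chunk = math.ceil(len(xs) / n_chunks)
    out = []
    for i in range(0, len(xs), chunk):
        out.extend(fn(xs[i:i + chunk]))
    return out

mlp = lambda xs: [x * x for x in xs]  # stand-in for the MLP block
seq = list(range(16))
assert apply_in_chunks(mlp, seq, 8) == mlp(seq)
```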
### Data config

```yaml
datasets:
- path: json
  data_files: ToastyPigeon/Qwen3.5-27B-Stage2-Marvin-CPT
  split: train
```

## Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.2.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2
Model size: 28B params (Safetensors; tensor types BF16 and F32).