KcELECTRA-base-finetuned-suicide-comments

Training Details

  • Data Split: 8:1:1 (Train: 7,119 / Validation: 890 / Test: 890)
  • Tokenizer: SentencePiece (KcELECTRA tokenizer)
  • Max Length: 256
  • Learning Rate: 3e-5
  • Batch Size: 16
  • Epochs: 6
  • Early Stopping: Patience = 2
  • Optimizer: AdamW
  • Threshold Optimization: Independent per-label tuning (criteria = Micro-F1, Macro-F1); see the sketches after this list
    • Thresholds: [0.25, 0.675, 0.8, 0.75, 0.7]
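The hyperparameters above map onto a standard `transformers` fine-tuning run. Below is a minimal sketch under a few assumptions: the base checkpoint is `beomi/KcELECTRA-base`, the head is a 5-label multi-label classifier, and `CommentDataset` with placeholder rows stands in for the real (not published) comment data.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/KcELECTRA-base",
    num_labels=5,  # Dislike, Sympathy, Sadness, Surprised, Angry
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

class CommentDataset(torch.utils.data.Dataset):
    """Stand-in for the real comment data (7,119 / 890 / 890 split)."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, max_length=256,
                             padding="max_length")
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        # Multi-label targets must be float vectors for the BCE loss.
        item["labels"] = torch.tensor(self.labels[i], dtype=torch.float)
        return item

train_ds = CommentDataset(["예시 댓글"], [[0, 1, 0, 0, 0]])  # placeholder row
val_ds = CommentDataset(["예시 댓글"], [[0, 1, 0, 0, 0]])

args = TrainingArguments(
    output_dir="kcelectra-suicide-comments",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    num_train_epochs=6,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```

AdamW is the `Trainer` default optimizer, so it needs no explicit configuration; early stopping watches the validation loss once per epoch with patience 2, matching the settings above.

The decision thresholds were then tuned independently per label instead of using a fixed 0.5. A sketch of that search, assuming validation-set sigmoid outputs `val_probs` and binary targets `val_labels` (shape N x 5) collected beforehand; the candidate grid is a guess (though consistent with the reported thresholds, which are all multiples of 0.025), and maximizing each label's own F1 is one concrete way to drive the Micro/Macro-F1 criteria named above.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(val_probs, val_labels, grid=np.arange(0.05, 0.951, 0.025)):
    """Independently pick, for each label, the threshold maximizing its F1."""
    thresholds = []
    for j in range(val_labels.shape[1]):
        scores = [f1_score(val_labels[:, j], val_probs[:, j] >= t) for t in grid]
        thresholds.append(float(grid[int(np.argmax(scores))]))
    return thresholds
```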

Results

| Metric | Value |
|---|---|
| Micro-F1 | 0.769 |
| Macro-F1 | 0.758 |
| Subset Accuracy | 0.516 |
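For reference, all three numbers can be computed with scikit-learn; note that `accuracy_score` on multi-label indicator matrices is exactly subset accuracy, i.e. a comment only counts as correct when all five labels match, which is why it sits well below the F1 values. A sketch with placeholder arrays standing in for the 890-sample test split:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Placeholder matrices standing in for real test targets/predictions.
y_true = np.random.randint(0, 2, size=(890, 5))
y_pred = np.random.randint(0, 2, size=(890, 5))

micro_f1 = f1_score(y_true, y_pred, average="micro")
macro_f1 = f1_score(y_true, y_pred, average="macro")
subset_acc = accuracy_score(y_true, y_pred)  # exact match on all 5 labels
```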

Per-label F1-scores:

| Emotion | F1-score |
|---|---|
| Dislike | 0.72 |
| Sympathy | 0.81 |
| Sadness | 0.64 |
| Surprised | 0.80 |
| Angry | 0.82 |
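At inference time, each label's sigmoid probability is compared against its own tuned threshold. A usage sketch, assuming the Hub id `Kaaeun/Labeling` for this checkpoint and that the label order matches the threshold list above; verify both against the model's `id2label` config before relying on them:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "Kaaeun/Labeling"  # assumed Hub id for this checkpoint
LABELS = ["Dislike", "Sympathy", "Sadness", "Surprised", "Angry"]
THRESHOLDS = torch.tensor([0.25, 0.675, 0.8, 0.75, 0.7])  # per-label, from above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID).eval()

text = "예시 댓글입니다."  # placeholder Korean comment
inputs = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits.squeeze(0))

# A label is assigned when its probability clears its own tuned threshold.
predicted = [lab for lab, p, t in zip(LABELS, probs, THRESHOLDS) if p >= t]
print(predicted)
```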

