# KcELECTRA-base-finetuned-suicide-comments

## Training Details
- Data Split: 8:1:1 (Train: 7,119 / Validation: 890 / Test: 890)
- Tokenizer: SentencePiece (KcELECTRA tokenizer)
- Max Length: 256
- Learning Rate: 3e-5
- Batch Size: 16
- Epochs: 6
- Early Stopping: Patience = 2
- Optimizer: AdamW
- Threshold Optimization: thresholds tuned independently per label (selection criteria: Micro-F1 and Macro-F1)
- Thresholds: [0.25, 0.675, 0.8, 0.75, 0.7]
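At inference time, the tuned thresholds above are applied to the per-label sigmoid probabilities rather than a uniform 0.5 cutoff. A minimal sketch, assuming the label order shown in the results table below (the card does not state the order explicitly):

```python
# Per-label thresholding for multi-label prediction.
# LABELS order is an assumption; THRESHOLDS are taken from the card.
LABELS = ["Dislike", "Sympathy", "Sadness", "Surprised", "Angry"]  # assumed order
THRESHOLDS = [0.25, 0.675, 0.8, 0.75, 0.7]

def apply_thresholds(probs):
    """Turn per-label sigmoid probabilities into binary predictions."""
    return [int(p >= t) for p, t in zip(probs, THRESHOLDS)]

# Example: hypothetical sigmoid outputs for one comment
probs = [0.30, 0.10, 0.85, 0.40, 0.95]
print(apply_thresholds(probs))  # -> [1, 0, 1, 0, 1]
```

Independent per-label tuning like this typically trades a lower threshold for rarer or harder labels (here 0.25 for the first label) against higher cutoffs for labels the model scores confidently.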
## Results
| Metric | Value |
|---|---|
| Micro-F1 | 0.769 |
| Macro-F1 | 0.758 |
| Subset Accuracy | 0.516 |
Per-label F1-scores:
| Emotion | F1-score |
|---|---|
| Dislike | 0.72 |
| Sympathy | 0.81 |
| Sadness | 0.64 |
| Surprised | 0.80 |
| Angry | 0.82 |
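The three overall metrics measure different things: Micro-F1 pools true/false positives across all labels, Macro-F1 averages the per-label F1-scores (so each emotion counts equally), and Subset Accuracy requires an exact match on all five labels for a comment to count as correct, which is why it is much lower. A minimal sketch of the definitions, assuming predictions and labels are binary vectors per comment:

```python
# Multi-label metric definitions (not the card's evaluation script).
# y_true / y_pred: lists of equal-length binary label vectors.

def _counts(y_true, y_pred, label):
    """True positives, false positives, false negatives for one label."""
    tp = sum(t[label] and p[label] for t, p in zip(y_true, y_pred))
    fp = sum((not t[label]) and p[label] for t, p in zip(y_true, y_pred))
    fn = sum(t[label] and (not p[label]) for t, p in zip(y_true, y_pred))
    return tp, fp, fn

def micro_f1(y_true, y_pred):
    """F1 over counts pooled across all labels."""
    tp = fp = fn = 0
    for label in range(len(y_true[0])):
        t, f, n = _counts(y_true, y_pred, label)
        tp, fp, fn = tp + t, fp + f, fn + n
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-label F1-scores."""
    f1s = []
    for label in range(len(y_true[0])):
        tp, fp, fn = _counts(y_true, y_pred, label)
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

def subset_accuracy(y_true, y_pred):
    """Exact-match ratio: all labels must agree for a sample."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```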
- Model: Kaaeun/Labeling
- Base model: beomi/KcELECTRA-base
- Evaluation dataset: suicide_related_news_comments_ko (results self-reported)