samsum_42

This model is a fine-tuned version of google/t5-v1_1-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4548
  • Rouge1: 49.5823
  • Rouge2: 24.8158
  • Rougel: 41.1518
  • Rougelsum: 45.7853
  • Gen Len: 24.5073
  • Test Rougel: 41.1453
  • Df Rougel: 41.4491
  • Unlearn Overall Rougel: 0.3481
  • Unlearn Time: 2188.4693
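The Rougel figures above are LCS-based F-measures. As a rough illustration of what that metric computes, here is a minimal sentence-level ROUGE-L sketch; the card's actual scores were presumably produced by the `rouge_score`/`evaluate` stack, whose tokenization and stemming will give slightly different numbers.

```python
def lcs_len(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate: str, reference: str) -> float:
    # F-measure over the LCS of whitespace tokens (no stemming).
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l("the cat sat on mat", "the cat lay on the mat")` scores the 4-token LCS "the cat on mat" against a 5-token candidate and a 6-token reference.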

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
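With a linear scheduler and no warmup steps listed, the learning rate decays from 5e-05 to zero over the full run (461 optimizer steps per epoch × 5 epochs = 2305 steps, per the results below). A small sketch of that schedule shape, mirroring the semantics of Transformers' `get_linear_schedule_with_warmup` (warmup assumed to be 0 here, since none is reported):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5, warmup_steps: int = 0) -> float:
    # Linear warmup (if any) followed by linear decay to zero.
    if warmup_steps and step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

So `linear_lr(0, 2305)` returns the full 5e-05, and the rate reaches 0.0 at step 2305.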

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Overall Rougel | Unlearn Overall Rougel | Time |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 461 | 1.4838 | 49.0467 | 24.4627 | 41.2411 | 45.156 | 24.0746 | 0.2606 | 0.2606 | -1 |
| No log | 2.0 | 922 | 1.4681 | 48.802 | 24.2156 | 41.2424 | 45.2122 | 23.3374 | 0.2373 | 0.2373 | -1 |
| 1.9223 | 3.0 | 1383 | 1.4577 | 49.6453 | 24.7308 | 41.7833 | 45.825 | 24.2200 | 0.2138 | 0.2138 | -1 |
| 1.9223 | 4.0 | 1844 | 1.4548 | 49.5823 | 24.8158 | 41.4491 | 45.7853 | 24.5073 | 0.3481 | 0.3481 | -1 |
| 1.8084 | 5.0 | 2305 | 1.4532 | 49.5837 | 24.7344 | 41.6312 | 45.8172 | 24.3985 | 0.2665 | 0.2665 | -1 |

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2
Model: jialicheng/unlearn_samsum_t5-base_random_label_6_42 (fine-tuned from google/t5-v1_1-base)