Version_concise_ASAP_FineTuningBERT_AugV12_k3_task1_organization_k3_k3_fold4

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be computed is shown after the list):

  • Loss: 1.0307
  • Qwk: 0.4300
  • Mse: 1.0307
  • Rmse: 1.0152
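
The exact metric code used for this model is not included in the card; the following is only a minimal sketch, assuming scikit-learn is available and that the model produces regression-style score predictions that are rounded before computing quadratic weighted kappa:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def score_predictions(preds, labels):
    """Illustrative only: QWK on rounded integer scores, plus MSE and RMSE."""
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    qwk = cohen_kappa_score(labels.round().astype(int),
                            preds.round().astype(int),
                            weights="quadratic")
    mse = mean_squared_error(labels, preds)
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}
```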

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent configuration is sketched after the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 100
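
A minimal sketch of a TrainingArguments configuration matching the listed hyperparameters (the output directory is a hypothetical placeholder, and the AdamW betas/epsilon above are the adamw_torch defaults; the model, datasets, and metric function are not part of this card):

```python
from transformers import TrainingArguments

# Illustrative reproduction of the listed hyperparameters only.
training_args = TrainingArguments(
    output_dir="asap_bert_fold4",      # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```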

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 4 | 7.4588 | 0.0018 | 7.4588 | 2.7311 |
| No log | 2.0 | 8 | 5.6748 | 0.0147 | 5.6748 | 2.3822 |
| No log | 3.0 | 12 | 3.1219 | 0.0040 | 3.1219 | 1.7669 |
| No log | 4.0 | 16 | 2.0769 | 0.0443 | 2.0769 | 1.4412 |
| No log | 5.0 | 20 | 1.3324 | 0.0420 | 1.3324 | 1.1543 |
| No log | 6.0 | 24 | 1.1297 | 0.0420 | 1.1297 | 1.0629 |
| No log | 7.0 | 28 | 0.9878 | 0.0893 | 0.9878 | 0.9939 |
| No log | 8.0 | 32 | 0.8485 | 0.3041 | 0.8485 | 0.9212 |
| No log | 9.0 | 36 | 1.2866 | 0.2224 | 1.2866 | 1.1343 |
| No log | 10.0 | 40 | 0.7212 | 0.4504 | 0.7212 | 0.8493 |
| No log | 11.0 | 44 | 0.7540 | 0.3932 | 0.7540 | 0.8683 |
| No log | 12.0 | 48 | 1.4693 | 0.2655 | 1.4693 | 1.2122 |
| No log | 13.0 | 52 | 0.8152 | 0.3987 | 0.8152 | 0.9029 |
| No log | 14.0 | 56 | 0.6546 | 0.5020 | 0.6546 | 0.8091 |
| No log | 15.0 | 60 | 1.1949 | 0.3535 | 1.1949 | 1.0931 |
| No log | 16.0 | 64 | 0.6309 | 0.4675 | 0.6309 | 0.7943 |
| No log | 17.0 | 68 | 0.7855 | 0.4193 | 0.7855 | 0.8863 |
| No log | 18.0 | 72 | 0.9035 | 0.4336 | 0.9035 | 0.9505 |
| No log | 19.0 | 76 | 0.6346 | 0.5751 | 0.6346 | 0.7966 |
| No log | 20.0 | 80 | 1.5135 | 0.3608 | 1.5135 | 1.2303 |
| No log | 21.0 | 84 | 0.7148 | 0.5558 | 0.7148 | 0.8455 |
| No log | 22.0 | 88 | 1.4888 | 0.3392 | 1.4888 | 1.2202 |
| No log | 23.0 | 92 | 0.7341 | 0.5106 | 0.7341 | 0.8568 |
| No log | 24.0 | 96 | 0.7437 | 0.5145 | 0.7437 | 0.8624 |
| No log | 25.0 | 100 | 1.1277 | 0.3793 | 1.1277 | 1.0619 |
| No log | 26.0 | 104 | 0.6928 | 0.5499 | 0.6928 | 0.8323 |
| No log | 27.0 | 108 | 1.2308 | 0.3657 | 1.2308 | 1.1094 |
| No log | 28.0 | 112 | 0.6680 | 0.5682 | 0.6680 | 0.8173 |
| No log | 29.0 | 116 | 1.2037 | 0.3691 | 1.2037 | 1.0971 |
| No log | 30.0 | 120 | 0.6844 | 0.5727 | 0.6844 | 0.8273 |
| No log | 31.0 | 124 | 1.5364 | 0.2895 | 1.5364 | 1.2395 |
| No log | 32.0 | 128 | 1.0686 | 0.4135 | 1.0686 | 1.0337 |
| No log | 33.0 | 132 | 0.6534 | 0.5681 | 0.6534 | 0.8084 |
| No log | 34.0 | 136 | 1.1260 | 0.3803 | 1.1260 | 1.0611 |
| No log | 35.0 | 140 | 0.6955 | 0.5569 | 0.6955 | 0.8340 |
| No log | 36.0 | 144 | 1.0050 | 0.4228 | 1.0050 | 1.0025 |
| No log | 37.0 | 148 | 0.7496 | 0.5306 | 0.7496 | 0.8658 |
| No log | 38.0 | 152 | 0.8950 | 0.4803 | 0.8950 | 0.9461 |
| No log | 39.0 | 156 | 1.0307 | 0.4300 | 1.0307 | 1.0152 |

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
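
To load the published checkpoint with the library versions listed above, a minimal sketch is shown below; the sequence-classification head is an assumption, since the task head type is not documented in this card:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "genki10/Version_concise_ASAP_FineTuningBERT_AugV12_k3_task1_organization_k3_k3_fold4"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Assumption: the checkpoint was trained with a sequence-classification/regression head.
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("An example essay response.", return_tensors="pt", truncation=True)
outputs = model(**inputs)
print(outputs.logits)
```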