First of all, thanks to the author for the great work!
I used your original code and trained with these hyperparameters:

```
attention_probs_dropout_prob=0.1, batch_size=16,
bert_ckpt_path='assets/bert-base-uncased-pytorch_model.bin',
bert_config_path='assets/bert_config_base_uncased.json',
data_root='data/mwz2.0', dec_lr=0.0001, dec_warmup=0.1,
decoder_teacher_forcing=0.5, dev_data='dev_dials.json',
dev_data_path='data/mwz2.0/dev_dials.json', dropout=0.1, enc_lr=4e-05,
enc_warmup=0.1, eval_epoch=1, exclude_domain=False,
hidden_dropout_prob=0.1, max_seq_length=256, msg=None, n_epochs=30,
n_history=1, not_shuffle_state=False, num_workers=4,
ontology_data='data/mwz2.0/ontology.json', op_code='4', random_seed=42,
save_dir='outputs', shuffle_p=0.5, shuffle_state=True, slot_token='[SLOT]',
test_data='test_dials.json', test_data_path='data/mwz2.0/test_dials.json',
train_data='train_dials.json', train_data_path='data/mwz2.0/train_dials.json',
vocab_path='assets/vocab.txt', word_dropout=0.1
```
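One thing I suspect is the smaller batch size: I believe the paper's default is larger than 16 (32, if I remember correctly), and shrinking the batch without touching `enc_lr`/`dec_lr` can change training dynamics. If GPU memory was my constraint, I could have kept the effective batch at the original size with gradient accumulation. A minimal PyTorch sketch, not the repo's actual training loop; the linear model and random data are illustrative assumptions:

```python
import torch

# Minimal sketch (not the repo's training loop): keep an effective batch
# of 32 using micro-batches of 16 via gradient accumulation.
torch.manual_seed(0)
model = torch.nn.Linear(8, 1)
init_w = model.weight.detach().clone()  # saved only to verify an update happened
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

accum_steps = 2  # 2 micro-batches of 16 -> effective batch of 32
data = [(torch.randn(16, 8), torch.randn(16, 1)) for _ in range(4)]

opt.zero_grad()
for step, (x, y) in enumerate(data, start=1):
    loss = loss_fn(model(x), y) / accum_steps  # scale so grads average over 32 samples
    loss.backward()                            # .backward() accumulates into .grad
    if step % accum_steps == 0:                # optimizer step every accum_steps batches
        opt.step()
        opt.zero_grad()
```

The division by `accum_steps` matters: without it, the accumulated gradient is the sum over micro-batches rather than the mean, which effectively multiplies the learning rate.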
Actually, I only changed `batch_size`; everything else is the same as yours. But the results were well off the mark:

```
op_code: 4, is_gt_op: False, is_gt_p_state: False, is_gt_gen: False
Epoch 20 joint accuracy : 0.2733441910966341
Epoch 20 slot turn accuracy : 0.9547819399203203
Epoch 20 slot turn F1 : 0.8490504095046196
Epoch 20 op accuracy : 0.9708785740137087
Epoch 20 op F1 : {'delete': 0.0886426592797784, 'update': 0.7394006048941435, 'dontcare': 0.03618649965205289, 'carryover': 0.984787856819641}
Epoch 20 op hit count : {'delete': 16, 'update': 6723, 'dontcare': 26, 'carryover': 207838}
Epoch 20 op all count : {'delete': 339, 'update': 11265, 'dontcare': 1401, 'carryover': 208035}
Final Joint Accuracy : 0.08108108108108109
Final slot turn F1 : 0.8280395869977584
Latency Per Prediction : 21.975781 ms
```
I don't know what's wrong here. Any ideas?
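To sanity-check my own reading of the numbers: joint accuracy only credits a turn when the entire predicted state matches, which is why it sits so far below slot turn accuracy; and the op hit/all counts imply that the rare `delete` and `dontcare` operations are almost never recovered. A quick illustration in plain Python; the two example turns are made up, and only the hit/all dictionaries are copied from my log above:

```python
# Recall per operation, computed from the reported hit/all counts (recall = hit / all).
hit = {'delete': 16, 'update': 6723, 'dontcare': 26, 'carryover': 207838}
total = {'delete': 339, 'update': 11265, 'dontcare': 1401, 'carryover': 208035}
recall = {op: hit[op] / total[op] for op in hit}
print(recall)  # delete ~0.047, dontcare ~0.019: the rare ops are barely predicted

def joint_accuracy(preds, golds):
    """A turn counts only if the full predicted state equals the gold state."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def slot_turn_accuracy(preds, golds, slots):
    """Per-slot accuracy averaged over all (turn, slot) pairs."""
    pairs = [p.get(s, 'none') == g.get(s, 'none')
             for p, g in zip(preds, golds) for s in slots]
    return sum(pairs) / len(pairs)

# Two made-up turns: a single wrong slot in turn 2 zeroes that turn's joint score.
golds = [{'hotel-area': 'east', 'hotel-stars': '4'},
         {'hotel-area': 'east', 'hotel-stars': '3'}]
preds = [{'hotel-area': 'east', 'hotel-stars': '4'},
         {'hotel-area': 'west', 'hotel-stars': '3'}]
print(joint_accuracy(preds, golds))                                     # 0.5
print(slot_turn_accuracy(preds, golds, ['hotel-area', 'hotel-stars']))  # 0.75
```

So a large joint/slot gap on its own is expected; what looks abnormal to me is the near-zero recall on `delete` and `dontcare`.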