all.log: 147 lines (147 loc), 27.5 KB. Training log from a fork of CGCL-codes/naturalcc.
nohup: ignoring input
Using backend: pytorch
[2021-09-26 16:34:11] INFO >> Load arguments in /home/wanyao/yang/naturalcc-master/run/retrieval/birnn/config/csn/all.yml (train.py:302, cli_main())
[2021-09-26 16:34:11] INFO >> {'criterion': 'retrieval_softmax', 'optimizer': 'torch_adam', 'lr_scheduler': 'fixed', 'tokenizer': None, 'bpe': None, 'common': {'no_progress_bar': 0, 'log_interval': 500, 'log_format': 'simple', 'tensorboard_logdir': '', 'seed': 0, 'cpu': 0, 'fp16': 0, 'memory_efficient_fp16': 0, 'fp16_no_flatten_grads': 0, 'fp16_init_scale': 128, 'fp16_scale_window': None, 'fp16_scale_tolerance': 0.0, 'min_loss_scale': 0.0001, 'threshold_loss_scale': None, 'user_dir': None, 'empty_cache_freq': 0, 'all_gather_list_size': 16384, 'task': 'hybrid_retrieval'}, 'dataset': {'num_workers': 1, 'skip_invalid_size_inputs_valid_test': 0, 'max_tokens': None, 'max_sentences': 1000, 'code_max_tokens': 200, 'query_max_tokens': 30, 'required_batch_size_multiple': 8, 'dataset_impl': 'mmap', 'train_subset': 'train', 'valid_subset': 'valid', 'validate_interval': 1, 'fixed_validation_seed': None, 'disable_validation': None, 'max_tokens_valid': None, 'max_sentences_valid': 1000, 'curriculum': 0, 'gen_subset': 'test', 'num_shards': 1, 'shard_id': 0, 'joined_dictionary': 0, 'langs': ['go', 'java', 'javascript', 'ruby', 'python', 'php']}, 'distributed_training': {'distributed_world_size': 1, 'distributed_rank': 0, 'distributed_backend': 'nccl', 'distributed_init_method': None, 'distributed_port': -1, 'device_id': 0, 'distributed_no_spawn': 0, 'ddp_backend': 'c10d', 'bucket_cap_mb': 25, 'fix_batches_to_gpus': None, 'find_unused_parameters': 0, 'fast_stat_sync': 0, 'broadcast_buffers': 0, 'global_sync_iter': 50, 'warmup_iterations': 500, 'local_rank': -1, 'block_momentum': 0.875, 'block_lr': 1, 'use_nbm': 0, 'average_sync': 0}, 'task': {'data': '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all', 'sample_break_mode': 'complete', 'tokens_per_sample': 512, 'mask_prob': 0.15, 'leave_unmasked_prob': 0.1, 'random_token_prob': 0.1, 'freq_weighted_replacement': 0, 'mask_whole_words': 0, 'pooler_activation_fn': 'tanh', 'source_lang': 'code_tokens', 'target_lang': 'docstring_tokens', 'source_aux_lang': 'code_tokens.wo_func', 'target_aux_lang': 'func_name', 'fraction_using_func_name': 0.1, 'load_alignments': 0, 'left_pad_source': 1, 'left_pad_target': 0, 'upsample_primary': 1, 'truncate_source': 0, 'eval_mrr': 1}, 'model': {'arch': 'birnn', 'code_embed_dim': 128, 'code_doprout': 0.1, 'code_rnn_cell': 'lstm', 'code_rnn_layers': 2, 'code_hidden_dim': 64, 'code_rnn_dropout': 0.2, 'code_rnn_bidirectional': True, 'code_pooling': 'weighted_mean', 'code_activation_fn': 'tanh', 'code_paddding': 'same', 'query_embed_dim': 128, 'query_doprout': 0.1, 'query_rnn_cell': 'lstm', 'query_rnn_layers': 2, 'query_hidden_dim': 64, 'query_rnn_dropout': 0.2, 'query_rnn_bidirectional': True, 'query_pooling': 'weighted_mean', 'query_activation_fn': 'tanh', 'query_paddding': 'same', 'dropout': 0.1}, 'optimization': {'max_epoch': 300, 'max_update': 0, 'clip_norm': 1, 'sentence_avg': 0, 'update_freq': [1], 'lrs': [0.01], 'min_lr': -1, 'use_bmuf': 0, 'lr_shrink': 1.0, 'force_anneal': 0, 'warmup_updates': 0, 'end_learning_rate': 0.0, 'power': 1.0, 'total_num_update': 1000000, 'adam': {'adam_betas': '(0.9, 0.999)', 'adam_eps': 1e-08, 'weight_decay': 0, 'use_old_adam': 1}, 'margin': 1, 'clip_norm_version': 'tf_clip_by_global_norm'}, 'checkpoint': {'save_dir': '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints', 'restore_file': 'checkpoint_last.pt', 'reset_dataloader': None, 'reset_lr_scheduler': None, 'reset_meters': None, 'reset_optimizer': None, 'optimizer_overrides': '{}', 'save_interval': 1, 
'save_interval_updates': 0, 'keep_interval_updates': 0, 'keep_last_epochs': -1, 'keep_best_checkpoints': -1, 'no_save': 0, 'no_epoch_checkpoints': 1, 'no_last_checkpoints': 0, 'no_save_optimizer_state': None, 'best_checkpoint_metric': 'mrr', 'maximize_best_checkpoint_metric': 1, 'patience': 5}, 'eval': {'path': '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt', 'quiet': 1, 'max_sentences': 1000, 'model_overrides': '{}', 'eval_size': -1}} (train.py:304, cli_main())
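
The configuration dumped above is loaded from all.yml. As a minimal sketch (assuming the YAML file mirrors the printed dictionary; this is not naturalcc's own argument loader), it could be inspected like this:

import yaml

# Path taken from the log line above; assumed readable on the training host.
CONFIG_PATH = "/home/wanyao/yang/naturalcc-master/run/retrieval/birnn/config/csn/all.yml"

with open(CONFIG_PATH) as f:
    args = yaml.safe_load(f)

# A few fields that govern this run, as seen in the dump above.
print(args["optimization"]["lrs"])        # [0.01], with a fixed LR schedule
print(args["checkpoint"]["patience"])     # 5, used by early stopping
print(args["dataset"]["langs"])           # ['go', 'java', 'javascript', 'ruby', 'python', 'php']
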
[2021-09-26 16:34:11] INFO >> single GPU training... (train.py:333, cli_main())
[2021-09-26 16:34:11] INFO >> loaded 89154 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.go.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.java.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.javascript.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.ruby.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.python.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.php.code_tokens'] (hybrid_retrieval.py:54, load_tokens_dataset())
[2021-09-26 16:34:11] INFO >> loaded 89154 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.go.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.java.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.javascript.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.ruby.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.python.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/valid.php.docstring_tokens'] (hybrid_retrieval.py:55, load_tokens_dataset())
[2021-09-26 16:34:11] INFO >> BiRNN(
(src_encoders): ModuleDict(
(go): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
(java): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
(javascript): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
(ruby): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
(python): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
(php): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
)
(tgt_encoders): RNNEncoder(
(embed): Embedding(10000, 128)
(weight_layer): Linear(in_features=128, out_features=1, bias=False)
(rnn): LSTM(128, 64, num_layers=2, batch_first=True, dropout=0.2, bidirectional=True)
)
) (train.py:223, single_main())
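
The module printout above shows one RNNEncoder per source language plus a shared docstring encoder. Below is a minimal PyTorch sketch of one such encoder that matches the printed shapes (Embedding(10000, 128), bias-free Linear(128, 1) for pooling weights, 2-layer bidirectional LSTM(128, 64)); the weighted-mean pooling in the forward pass is an assumption and may differ from naturalcc's implementation.

import torch
import torch.nn as nn

class RNNEncoderSketch(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.weight_layer = nn.Linear(embed_dim, 1, bias=False)   # attention scores for pooling
        self.rnn = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                           batch_first=True, dropout=0.2, bidirectional=True)

    def forward(self, tokens):                        # tokens: (batch, seq_len) int64
        x = self.embed(tokens)                        # (batch, seq_len, 128)
        out, _ = self.rnn(x)                          # (batch, seq_len, 2 * 64)
        scores = self.weight_layer(out).squeeze(-1)   # (batch, seq_len)
        weights = torch.softmax(scores, dim=-1)
        return (out * weights.unsqueeze(-1)).sum(dim=1)   # (batch, 128) sequence embedding
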
[2021-09-26 16:34:11] INFO >> model birnn, criterion SearchSoftmaxCriterion (train.py:224, single_main())
[2021-09-26 16:34:11] INFO >> num. model params: 10351488 (num. trained: 10351488) (train.py:225, single_main())
[2021-09-26 16:34:17] INFO >> training on 1 GPUs (train.py:233, single_main())
[2021-09-26 16:34:17] INFO >> max tokens per GPU = None and max sentences per GPU = 1000 (train.py:234, single_main())
[2021-09-26 16:34:17] INFO >> no existing checkpoint found /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (ncc_trainers.py:270, load_checkpoint())
[2021-09-26 16:34:17] INFO >> loading train data for epoch 1 (ncc_trainers.py:285, get_train_iterator())
[2021-09-26 16:34:18] INFO >> loaded 1880853 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.go.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.java.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.javascript.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.ruby.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.python.code_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.php.code_tokens'] (hybrid_retrieval.py:54, load_tokens_dataset())
[2021-09-26 16:34:18] INFO >> loaded 1880853 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.go.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.java.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.javascript.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.ruby.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.python.docstring_tokens', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.php.docstring_tokens'] (hybrid_retrieval.py:55, load_tokens_dataset())
[2021-09-26 16:34:19] INFO >> loaded 1880853 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.go.code_tokens.wo_func', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.java.code_tokens.wo_func', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.javascript.code_tokens.wo_func', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.ruby.code_tokens.wo_func', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.python.code_tokens.wo_func', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.php.code_tokens.wo_func'] (hybrid_retrieval.py:67, load_tokens_dataset())
[2021-09-26 16:34:19] INFO >> loaded 1880853 examples from: ['/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.go.func_name', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.java.func_name', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.javascript.func_name', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.ruby.func_name', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.python.func_name', '/home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/train.php.func_name'] (hybrid_retrieval.py:81, load_tokens_dataset())
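
The four dataset loads above correspond to the task settings source_lang/target_lang and source_aux_lang/target_aux_lang with fraction_using_func_name: 0.1. A hedged sketch of that sampling rule follows; the field names are illustrative, not naturalcc's actual dataset API.

import random

def sample_training_pair(example, fraction_using_func_name=0.1):
    # With probability 0.1, pair the code stripped of its function name
    # (the *.code_tokens.wo_func files) with the function name itself
    # (the *.func_name files); otherwise use code tokens vs. docstring tokens.
    if random.random() < fraction_using_func_name:
        return example["code_wo_func"], example["func_name"]
    return example["code_tokens"], example["docstring_tokens"]
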
[2021-09-26 16:34:21] INFO >> NOTE: your device may support faster training with fp16 (ncc_trainers.py:155, _setup_optimizer())
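
The criterion reported earlier is SearchSoftmaxCriterion with eval_mrr enabled. Below is a minimal sketch of the usual in-batch softmax retrieval loss and MRR computation; the exact similarity function and any scaling inside naturalcc may differ.

import torch
import torch.nn.functional as F

def softmax_retrieval_loss_and_mrr(code_emb, query_emb):
    # code_emb, query_emb: (batch, dim); row i of each is a matching pair,
    # and every other row in the batch acts as a negative.
    logits = query_emb @ code_emb.t()                     # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    loss = F.cross_entropy(logits, targets)

    # MRR: average of 1 / rank of the true code for each query.
    ranks = (logits >= logits.diag().unsqueeze(1)).sum(dim=1).float()
    mrr = (1.0 / ranks).mean()
    return loss, mrr

In the log below, the metric reported as mrr on the valid subset climbs from about 0.52 to 0.59 over the first five epochs before degrading.
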
[2021-09-26 16:39:46] INFO >> epoch 001: 500 / 1881 loss=5.147, mrr=158.874, sample_size=1000, wps=1560.2, ups=1.56, wpb=1000, bsz=1000, num_updates=500, lr=0.01, gnorm=0.228, clip=0.2, train_wall=315, wall=329 (progress_bar.py:260, log())
[2021-09-26 16:45:06] INFO >> epoch 001: 1000 / 1881 loss=3.287, mrr=438.993, sample_size=1000, wps=1563.6, ups=1.56, wpb=1000, bsz=1000, num_updates=1000, lr=0.01, gnorm=0.212, clip=0, train_wall=315, wall=649 (progress_bar.py:260, log())
[2021-09-26 16:50:27] INFO >> epoch 001: 1500 / 1881 loss=2.761, mrr=528.036, sample_size=1000, wps=1558.7, ups=1.56, wpb=1000, bsz=1000, num_updates=1500, lr=0.01, gnorm=0.208, clip=0, train_wall=315, wall=969 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 16:54:30] INFO >> epoch 001 | loss 3.49 | mrr 0.413771 | sample_size 1000 | wps 1560.8 | ups 1.56 | wpb 1000 | bsz 1000 | num_updates 1880 | lr 0.01 | gnorm 0.214 | clip 0.1 | train_wall 1184 | wall 1213 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 16:54:49] INFO >> epoch 001 | valid on 'valid' subset | loss 2.847 | mrr 0.520456 | sample_size 1000 | wps 5548.2 | wpb 1000 | bsz 1000 | num_updates 1880 (progress_bar.py:269, print())
[2021-09-26 16:54:49] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt (epoch 1 @ 1880 updates, score 0.520456) (writing took 0.379316 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 16:56:11] INFO >> epoch 002: 120 / 1881 loss=2.505, mrr=570.792, sample_size=1000, wps=1453.3, ups=1.45, wpb=1000, bsz=1000, num_updates=2000, lr=0.01, gnorm=0.206, clip=0, train_wall=315, wall=1313 (progress_bar.py:260, log())
[2021-09-26 17:01:31] INFO >> epoch 002: 620 / 1881 loss=2.359, mrr=595.285, sample_size=1000, wps=1560, ups=1.56, wpb=1000, bsz=1000, num_updates=2500, lr=0.01, gnorm=0.207, clip=0, train_wall=315, wall=1634 (progress_bar.py:260, log())
[2021-09-26 17:06:50] INFO >> epoch 002: 1121 / 1881 loss=2.293, mrr=606.937, sample_size=1000, wps=1570.6, ups=1.57, wpb=1000, bsz=1000, num_updates=3000, lr=0.01, gnorm=0.207, clip=0, train_wall=313, wall=1952 (progress_bar.py:260, log())
[2021-09-26 17:12:08] INFO >> epoch 002: 1621 / 1881 loss=2.246, mrr=615.273, sample_size=1000, wps=1571.7, ups=1.57, wpb=1000, bsz=1000, num_updates=3500, lr=0.01, gnorm=0.207, clip=0, train_wall=313, wall=2270 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 17:14:52] INFO >> epoch 002 | loss 2.292 | mrr 0.606867 | sample_size 1000 | wps 1538.1 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 3760 | lr 0.01 | gnorm 0.207 | clip 0 | train_wall 1178 | wall 2435 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 17:15:10] INFO >> epoch 002 | valid on 'valid' subset | loss 2.578 | mrr 0.566506 | sample_size 1000 | wps 5735.4 | wpb 1000 | bsz 1000 | num_updates 3760 | best_mrr 566.506 (progress_bar.py:269, print())
[2021-09-26 17:15:11] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt (epoch 2 @ 3760 updates, score 0.566506) (writing took 0.660653 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 17:17:48] INFO >> epoch 003: 240 / 1881 loss=2.172, mrr=626.696, sample_size=1000, wps=1470.2, ups=1.47, wpb=1000, bsz=1000, num_updates=4000, lr=0.01, gnorm=0.208, clip=0, train_wall=311, wall=2611 (progress_bar.py:260, log())
[2021-09-26 17:23:05] INFO >> epoch 003: 740 / 1881 loss=2.137, mrr=633.331, sample_size=1000, wps=1575.3, ups=1.58, wpb=1000, bsz=1000, num_updates=4500, lr=0.01, gnorm=0.212, clip=0, train_wall=312, wall=2928 (progress_bar.py:260, log())
[2021-09-26 17:28:23] INFO >> epoch 003: 1240 / 1881 loss=2.126, mrr=635.66, sample_size=1000, wps=1571.9, ups=1.57, wpb=1000, bsz=1000, num_updates=5000, lr=0.01, gnorm=0.211, clip=0, train_wall=313, wall=3246 (progress_bar.py:260, log())
[2021-09-26 17:33:42] INFO >> epoch 003: 1741 / 1881 loss=2.117, mrr=636.961, sample_size=1000, wps=1570.4, ups=1.57, wpb=1000, bsz=1000, num_updates=5500, lr=0.01, gnorm=0.214, clip=0.2, train_wall=313, wall=3564 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 17:35:11] INFO >> epoch 003 | loss 2.127 | mrr 0.635125 | sample_size 1000 | wps 1542.7 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 5640 | lr 0.01 | gnorm 0.212 | clip 0.1 | train_wall 1175 | wall 3654 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 17:35:29] INFO >> epoch 003 | valid on 'valid' subset | loss 2.481 | mrr 0.582276 | sample_size 1000 | wps 5686.3 | wpb 1000 | bsz 1000 | num_updates 5640 | best_mrr 582.276 (progress_bar.py:269, print())
[2021-09-26 17:35:30] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt (epoch 3 @ 5640 updates, score 0.582276) (writing took 0.693883 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 17:39:24] INFO >> epoch 004: 360 / 1881 loss=2.071, mrr=644.157, sample_size=1000, wps=1460.4, ups=1.46, wpb=1000, bsz=1000, num_updates=6000, lr=0.01, gnorm=0.214, clip=0, train_wall=314, wall=3907 (progress_bar.py:260, log())
[2021-09-26 17:44:42] INFO >> epoch 004: 861 / 1881 loss=2.066, mrr=645.309, sample_size=1000, wps=1570.4, ups=1.57, wpb=1000, bsz=1000, num_updates=6500, lr=0.01, gnorm=0.218, clip=0, train_wall=313, wall=4225 (progress_bar.py:260, log())
[2021-09-26 17:50:01] INFO >> epoch 004: 1361 / 1881 loss=2.073, mrr=644.192, sample_size=1000, wps=1571.5, ups=1.57, wpb=1000, bsz=1000, num_updates=7000, lr=0.01, gnorm=0.229, clip=0.2, train_wall=313, wall=4543 (progress_bar.py:260, log())
[2021-09-26 17:55:20] INFO >> epoch 004: 1861 / 1881 loss=2.071, mrr=645.345, sample_size=1000, wps=1565.8, ups=1.57, wpb=1000, bsz=1000, num_updates=7500, lr=0.01, gnorm=0.218, clip=0, train_wall=314, wall=4863 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 17:55:33] INFO >> epoch 004 | loss 2.067 | mrr 0.645242 | sample_size 1000 | wps 1538.4 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 7520 | lr 0.01 | gnorm 0.22 | clip 0.1 | train_wall 1179 | wall 4876 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 17:55:51] INFO >> epoch 004 | valid on 'valid' subset | loss 2.434 | mrr 0.590014 | sample_size 1000 | wps 5686.3 | wpb 1000 | bsz 1000 | num_updates 7520 | best_mrr 590.014 (progress_bar.py:269, print())
[2021-09-26 17:55:52] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt (epoch 4 @ 7520 updates, score 0.590014) (writing took 0.669444 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 18:01:02] INFO >> epoch 005: 481 / 1881 loss=2.033, mrr=650.716, sample_size=1000, wps=1461.4, ups=1.46, wpb=1000, bsz=1000, num_updates=8000, lr=0.01, gnorm=0.227, clip=0, train_wall=313, wall=5205 (progress_bar.py:260, log())
[2021-09-26 18:06:20] INFO >> epoch 005: 981 / 1881 loss=2.048, mrr=648.764, sample_size=1000, wps=1572.7, ups=1.57, wpb=1000, bsz=1000, num_updates=8500, lr=0.01, gnorm=0.226, clip=0, train_wall=313, wall=5523 (progress_bar.py:260, log())
[2021-09-26 18:11:39] INFO >> epoch 005: 1481 / 1881 loss=2.043, mrr=649.363, sample_size=1000, wps=1569.5, ups=1.57, wpb=1000, bsz=1000, num_updates=9000, lr=0.01, gnorm=0.229, clip=0.2, train_wall=313, wall=5841 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 18:15:54] INFO >> epoch 005 | loss 2.045 | mrr 0.649044 | sample_size 1000 | wps 1539.6 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 9400 | lr 0.01 | gnorm 0.228 | clip 0.1 | train_wall 1178 | wall 6097 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 18:16:13] INFO >> epoch 005 | valid on 'valid' subset | loss 2.414 | mrr 0.592853 | sample_size 1000 | wps 5499.1 | wpb 1000 | bsz 1000 | num_updates 9400 | best_mrr 592.853 (progress_bar.py:269, print())
[2021-09-26 18:16:14] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_best.pt (epoch 5 @ 9400 updates, score 0.592853) (writing took 0.667222 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 18:17:22] INFO >> epoch 006: 100 / 1881 loss=2.049, mrr=648.692, sample_size=1000, wps=1455.6, ups=1.46, wpb=1000, bsz=1000, num_updates=9500, lr=0.01, gnorm=0.232, clip=0.2, train_wall=314, wall=6185 (progress_bar.py:260, log())
[2021-09-26 18:22:41] INFO >> epoch 006: 600 / 1881 loss=2.032, mrr=650.787, sample_size=1000, wps=1568.5, ups=1.57, wpb=1000, bsz=1000, num_updates=10000, lr=0.01, gnorm=0.237, clip=0.2, train_wall=314, wall=6504 (progress_bar.py:260, log())
[2021-09-26 18:27:58] INFO >> epoch 006: 1101 / 1881 loss=2.046, mrr=648.715, sample_size=1000, wps=1575.6, ups=1.58, wpb=1000, bsz=1000, num_updates=10500, lr=0.01, gnorm=0.245, clip=0.2, train_wall=312, wall=6821 (progress_bar.py:260, log())
[2021-09-26 18:33:17] INFO >> epoch 006: 1601 / 1881 loss=2.058, mrr=647.072, sample_size=1000, wps=1570, ups=1.57, wpb=1000, bsz=1000, num_updates=11000, lr=0.01, gnorm=0.25, clip=0.2, train_wall=313, wall=7139 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 18:36:15] INFO >> epoch 006 | loss 2.046 | mrr 0.648788 | sample_size 1000 | wps 1539.6 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 11280 | lr 0.01 | gnorm 0.248 | clip 0.3 | train_wall 1177 | wall 7318 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 18:36:34] INFO >> epoch 006 | valid on 'valid' subset | loss 2.425 | mrr 0.591987 | sample_size 1000 | wps 5650.6 | wpb 1000 | bsz 1000 | num_updates 11280 | best_mrr 591.987 (progress_bar.py:269, print())
[2021-09-26 18:36:34] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (epoch 6 @ 11280 updates, score 0.591987) (writing took 0.307302 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 18:38:58] INFO >> epoch 007: 220 / 1881 loss=2.055, mrr=647.524, sample_size=1000, wps=1465.5, ups=1.47, wpb=1000, bsz=1000, num_updates=11500, lr=0.01, gnorm=0.421, clip=0.6, train_wall=313, wall=7481 (progress_bar.py:260, log())
[2021-09-26 18:44:16] INFO >> epoch 007: 721 / 1881 loss=2.045, mrr=648.794, sample_size=1000, wps=1570.4, ups=1.57, wpb=1000, bsz=1000, num_updates=12000, lr=0.01, gnorm=0.261, clip=0.4, train_wall=313, wall=7799 (progress_bar.py:260, log())
[2021-09-26 18:49:35] INFO >> epoch 007: 1221 / 1881 loss=2.071, mrr=645.322, sample_size=1000, wps=1570.1, ups=1.57, wpb=1000, bsz=1000, num_updates=12500, lr=0.01, gnorm=0.262, clip=0, train_wall=313, wall=8117 (progress_bar.py:260, log())
[2021-09-26 18:54:53] INFO >> epoch 007: 1721 / 1881 loss=2.077, mrr=644.02, sample_size=1000, wps=1572.4, ups=1.57, wpb=1000, bsz=1000, num_updates=13000, lr=0.01, gnorm=0.273, clip=0.4, train_wall=313, wall=8435 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 18:56:35] INFO >> epoch 007 | loss 2.063 | mrr 0.646218 | sample_size 1000 | wps 1542 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 13160 | lr 0.01 | gnorm 0.319 | clip 0.5 | train_wall 1177 | wall 8537 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 18:56:53] INFO >> epoch 007 | valid on 'valid' subset | loss 2.426 | mrr 0.591538 | sample_size 1000 | wps 5682.3 | wpb 1000 | bsz 1000 | num_updates 13160 | best_mrr 591.538 (progress_bar.py:269, print())
[2021-09-26 18:56:53] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (epoch 7 @ 13160 updates, score 0.591538) (writing took 0.370397 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 19:00:34] INFO >> epoch 008: 340 / 1881 loss=2.072, mrr=644.087, sample_size=1000, wps=1463.7, ups=1.46, wpb=1000, bsz=1000, num_updates=13500, lr=0.01, gnorm=0.388, clip=2.2, train_wall=313, wall=8777 (progress_bar.py:260, log())
[2021-09-26 19:05:50] INFO >> epoch 008: 840 / 1881 loss=2.093, mrr=641.478, sample_size=1000, wps=1582.4, ups=1.58, wpb=1000, bsz=1000, num_updates=14000, lr=0.01, gnorm=0.373, clip=1.6, train_wall=311, wall=9093 (progress_bar.py:260, log())
[2021-09-26 19:11:08] INFO >> epoch 008: 1341 / 1881 loss=2.18, mrr=626.39, sample_size=1000, wps=1571.3, ups=1.57, wpb=1000, bsz=1000, num_updates=14500, lr=0.01, gnorm=1010.03, clip=51.4, train_wall=313, wall=9411 (progress_bar.py:260, log())
[2021-09-26 19:16:28] INFO >> epoch 008: 1841 / 1881 loss=2.378, mrr=595.057, sample_size=1000, wps=1566.9, ups=1.57, wpb=1000, bsz=1000, num_updates=15000, lr=0.01, gnorm=399.188, clip=98.2, train_wall=314, wall=9730 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 19:16:53] INFO >> epoch 008 | loss 2.194 | mrr 0.624625 | sample_size 1000 | wps 1542.4 | ups 1.54 | wpb 1000 | bsz 1000 | num_updates 15040 | lr 0.01 | gnorm 376.295 | clip 42.7 | train_wall 1176 | wall 9756 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 19:17:12] INFO >> epoch 008 | valid on 'valid' subset | loss 2.592 | mrr 0.56467 | sample_size 1000 | wps 5695 | wpb 1000 | bsz 1000 | num_updates 15040 | best_mrr 564.67 (progress_bar.py:269, print())
[2021-09-26 19:17:12] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (epoch 8 @ 15040 updates, score 0.56467) (writing took 0.408855 seconds) (checkpoint_utils.py:79, save_checkpoint())
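
In epoch 008 the gradient norm (gnorm) jumps from about 0.2 to several hundred and the clip column shows clipping engaging on a large fraction of updates. The config sets clip_norm: 1 with clip_norm_version 'tf_clip_by_global_norm'; a sketch of that TF-style global-norm clipping (not naturalcc's own code) is:

import torch

@torch.no_grad()
def tf_clip_by_global_norm(parameters, clip_norm=1.0):
    # Scale every gradient by clip_norm / max(global_norm, clip_norm),
    # i.e. leave gradients untouched while the global norm stays below clip_norm.
    # Returns the pre-clip norm (presumably what the gnorm column reports).
    grads = [p.grad for p in parameters if p.grad is not None]
    global_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    scale = clip_norm / torch.clamp(global_norm, min=clip_norm)
    for g in grads:
        g.mul_(scale)
    return global_norm
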
[2021-09-26 19:22:09] INFO >> epoch 009: 461 / 1881 loss=2.389, mrr=592.714, sample_size=1000, wps=1464.9, ups=1.46, wpb=1000, bsz=1000, num_updates=15500, lr=0.01, gnorm=375.958, clip=95.2, train_wall=313, wall=10072 (progress_bar.py:260, log())
[2021-09-26 19:27:26] INFO >> epoch 009: 961 / 1881 loss=2.429, mrr=586.032, sample_size=1000, wps=1579, ups=1.58, wpb=1000, bsz=1000, num_updates=16000, lr=0.01, gnorm=1510.01, clip=80.8, train_wall=311, wall=10388 (progress_bar.py:260, log())
[2021-09-26 19:32:41] INFO >> epoch 009: 1461 / 1881 loss=2.468, mrr=578.961, sample_size=1000, wps=1583.5, ups=1.58, wpb=1000, bsz=1000, num_updates=16500, lr=0.01, gnorm=1.532, clip=30.8, train_wall=311, wall=10704 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 19:37:09] INFO >> epoch 009 | loss 2.438 | mrr 0.58429 | sample_size 1000 | wps 1546.4 | ups 1.55 | wpb 1000 | bsz 1000 | num_updates 16920 | lr 0.01 | gnorm 507.775 | clip 68.1 | train_wall 1173 | wall 10972 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 19:37:27] INFO >> epoch 009 | valid on 'valid' subset | loss 2.682 | mrr 0.549795 | sample_size 1000 | wps 5683.6 | wpb 1000 | bsz 1000 | num_updates 16920 | best_mrr 549.795 (progress_bar.py:269, print())
[2021-09-26 19:37:28] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (epoch 9 @ 16920 updates, score 0.549795) (writing took 0.398877 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 19:38:21] INFO >> epoch 010: 80 / 1881 loss=2.465, mrr=579.721, sample_size=1000, wps=1472.6, ups=1.47, wpb=1000, bsz=1000, num_updates=17000, lr=0.01, gnorm=26.988, clip=58.8, train_wall=311, wall=11044 (progress_bar.py:260, log())
[2021-09-26 19:43:34] INFO >> epoch 010: 581 / 1881 loss=2.472, mrr=577.982, sample_size=1000, wps=1595.4, ups=1.6, wpb=1000, bsz=1000, num_updates=17500, lr=0.01, gnorm=1.501, clip=26.8, train_wall=308, wall=11357 (progress_bar.py:260, log())
[2021-09-26 19:48:54] INFO >> epoch 010: 1081 / 1881 loss=2.519, mrr=569.629, sample_size=1000, wps=1564.6, ups=1.56, wpb=1000, bsz=1000, num_updates=18000, lr=0.01, gnorm=178.593, clip=94.8, train_wall=314, wall=11677 (progress_bar.py:260, log())
[2021-09-26 19:54:09] INFO >> epoch 010: 1581 / 1881 loss=2.583, mrr=559.282, sample_size=1000, wps=1588.6, ups=1.59, wpb=1000, bsz=1000, num_updates=18500, lr=0.01, gnorm=4223.2, clip=99.8, train_wall=309, wall=11991 (progress_bar.py:260, log())
Using backend: pytorch
[2021-09-26 19:57:20] INFO >> epoch 010 | loss 2.542 | mrr 0.566373 | sample_size 1000 | wps 1552.5 | ups 1.55 | wpb 1000 | bsz 1000 | num_updates 18800 | lr 0.01 | gnorm 29327.9 | clip 75.3 | train_wall 1168 | wall 12183 (progress_bar.py:269, print())
Using backend: pytorch
[2021-09-26 19:57:38] INFO >> epoch 010 | valid on 'valid' subset | loss 2.8 | mrr 0.528845 | sample_size 1000 | wps 5697.5 | wpb 1000 | bsz 1000 | num_updates 18800 | best_mrr 528.845 (progress_bar.py:269, print())
[2021-09-26 19:57:39] INFO >> saved checkpoint /home/wanyao/ncc_data/codesearchnet/retrieval/data-mmap/all/birnn/checkpoints/checkpoint_last.pt (epoch 10 @ 18800 updates, score 0.528845) (writing took 0.400885 seconds) (checkpoint_utils.py:79, save_checkpoint())
[2021-09-26 19:57:39] INFO >> early stop since valid performance hasn't improved for last 5 runs (train.py:190, should_stop_early())
[2021-09-26 19:57:39] INFO >> early stop since valid performance hasn't improved for last 5 runs (train.py:271, single_main())
[2021-09-26 19:57:39] INFO >> done training in 12197.5 seconds (train.py:283, single_main())
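
Training stops here because the validation MRR has not improved for patience = 5 consecutive epochs (best_checkpoint_metric: mrr, maximized, per the config above). A sketch of that patience logic, not naturalcc's actual should_stop_early():

class EarlyStopper:
    def __init__(self, patience=5):
        self.patience = patience
        self.best = None
        self.num_bad_epochs = 0

    def should_stop(self, valid_mrr):
        # Reset the counter whenever the maximized metric improves,
        # otherwise count the epoch as bad and stop after `patience` of them.
        if self.best is None or valid_mrr > self.best:
            self.best = valid_mrr
            self.num_bad_epochs = 0
        else:
            self.num_bad_epochs += 1
        return self.num_bad_epochs >= self.patience

Feeding it the logged validation MRRs (0.520456, 0.566506, 0.582276, 0.590014, 0.592853, 0.591987, 0.591538, 0.56467, 0.549795, 0.528845) returns True on the tenth value, matching the stop after epoch 010.
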