I am trying to run this on Ubuntu using the Anaconda environment provided. I successfully ran bash ./pull_data.sh and was able to create the Anaconda environment.
Note:
Some adjustments were required, however. For example, I had to downgrade astor to 0.7.1 (pip install astor==0.7.1) and manually install xgboost using conda install -c conda-forge xgboost or pip install xgboost.
My Input:
bash ./scripts/atis/train.sh 0
Console Output:
Decoding: 100%|██████████| 491/491 [00:34<00:00, 14.04it/s]
Traceback (most recent call last):
File "/home/avi/Documents/tranX/exp.py", line 570, in <module>
train(args)
File "/home/avi/Documents/tranX/exp.py", line 150, in train
eval_results = evaluation.evaluate(dev_set.examples, model, evaluator, args,
File "/home/avi/Documents/tranX/evaluation.py", line 61, in evaluate
eval_result = evaluator.evaluate_dataset(examples, decode_results, fast_mode=eval_top_pred_only, args=args)
TypeError: evaluate_dataset() got an unexpected keyword argument 'args'
Namespace(seed=0, cuda=False, lang='python', asdl_file=None, mode='test', parser='default_parser', transition_system='python2', evaluator='default_evaluator', lstm='lstm', embed_size=128, action_embed_size=128, field_embed_size=64, type_embed_size=64, hidden_size=256, ptrnet_hidden_dim=32, att_vec_size=256, no_query_vec_to_action_map=False, readout='identity', query_vec_to_action_diff_map=False, sup_attention=False, no_parent_production_embed=False, no_parent_field_embed=False, no_parent_field_type_embed=False, no_parent_state=False, no_input_feed=False, no_copy=False, column_att='affine', answer_prune=True, vocab=None, glove_embed_path=None, train_file=None, dev_file=None, pretrain=None, batch_size=10, dropout=0.0, word_dropout=0.0, decoder_word_dropout=0.3, primitive_token_label_smoothing=0.0, src_token_label_smoothing=0.0, negative_sample_type='best', valid_metric='acc', valid_every_epoch=1, log_every=10, save_to='model', save_all_models=False, patience=5, max_num_trial=10, uniform_init=None, glorot_init=False, clip_grad=5.0, max_epoch=-1, optimizer='Adam', lr=0.001, lr_decay=0.5, lr_decay_after_epoch=0, decay_lr_every_epoch=False, reset_optimizer=False, verbose=False, eval_top_pred_only=False, load_model='saved_models/atis/model.atis.sup.lstm.hidden256.embed128.action128.field32.type32.dropout0.3.lr_decay0.5.beam5.vocab.freq2.bin.train.bin.glorot.with_par_info.no_copy.ls0.1.seed0.bin', beam_size=5, decode_max_time_step=110, sample_size=5, test_file='data/atis/test.bin', save_decode_to='decodes/atis/model.atis.sup.lstm.hidden256.embed128.action128.field32.type32.dropout0.3.lr_decay0.5.beam5.vocab.freq2.bin.train.bin.glorot.with_par_info.no_copy.ls0.1.seed0.bin.test.decode', features=None, load_reconstruction_model=None, load_paraphrase_model=None, load_reranker=None, tie_embed=False, train_decode_file=None, test_decode_file=None, dev_decode_file=None, metric='accuracy', num_workers=1, load_decode_results=None, unsup_loss_weight=1.0, unlabeled_file=None, 
example_preprocessor=None, sql_db_file=None)
load model from [saved_models/atis/model.atis.sup.lstm.hidden256.embed128.action128.field32.type32.dropout0.3.lr_decay0.5.beam5.vocab.freq2.bin.train.bin.glorot.with_par_info.no_copy.ls0.1.seed0.bin]
Traceback (most recent call last):
File "/home/avi/Documents/tranX/exp.py", line 576, in <module>
test(args)
File "/home/avi/Documents/tranX/exp.py", line 464, in test
params = torch.load(args.load_model, map_location=lambda storage, loc: storage)
File "/home/avi/anaconda3/envs/tranx/lib/python3.9/site-packages/torch/serialization.py", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/avi/anaconda3/envs/tranx/lib/python3.9/site-packages/torch/serialization.py", line 230, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/avi/anaconda3/envs/tranx/lib/python3.9/site-packages/torch/serialization.py", line 211, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'saved_models/atis/model.atis.sup.lstm.hidden256.embed128.action128.field32.type32.dropout0.3.lr_decay0.5.beam5.vocab.freq2.bin.train.bin.glorot.with_par_info.no_copy.ls0.1.seed0.bin'
After some digging, I found that there are a few definitions of evaluate_dataset that have different parameters. I was able to get past that error by simply adding , args=None to every function header for evaluate_dataset that didn't already have one. I'll make a pull request so you can see the changes I made directly.
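A minimal sketch of that change, for anyone hitting the same TypeError. The class name and method body below are illustrative, not the actual tranX code; the point is simply that every evaluate_dataset definition gains an args=None keyword so the call in evaluation.py no longer fails:

```python
class DefaultEvaluator:
    # Before: def evaluate_dataset(self, examples, decode_results, fast_mode=False):
    # After: accept (and ignore, if unused) the extra keyword that
    # evaluation.evaluate passes.
    def evaluate_dataset(self, examples, decode_results, fast_mode=False, args=None):
        # Placeholder accuracy computation; the real method does much more.
        matched = sum(1 for e, r in zip(examples, decode_results) if e == r)
        return matched / max(len(examples), 1)

# The call site (as in evaluation.py) can now pass args without error:
evaluator = DefaultEvaluator()
result = evaluator.evaluate_dataset(["a", "b"], ["a", "c"],
                                    fast_mode=True, args=None)
```

Defaulting the new parameter to None keeps every existing caller working, so only the one call site that actually passes args needs it.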