🐛 Bug
Hi,
I was trying to evaluate the pre-trained models under "Efficient Wait-k Models for Simultaneous Machine Translation". For this, I followed the instructions given in the README. Specifically, I did the following:
After downloading the model and data and placing them under pre_saved:
cd ~/attn2d/pre_saved
tar xzf iwslt14_de_en.tar.gz
tar xzf tf_waitk_model.tar.gz
k=5 # Evaluation time k
output=wait$k.log
CUDA_VISIBLE_DEVICES=0 python generate.py pre_saved/iwslt14_deen_bpe10k_binaries/ -s de -t en --gen-subset test --path pre_saved/tf_waitk_model.tar.gz --task waitk_translation --eval-waitk $k --model-overrides "{'max_source_positions': 1024, 'max_target_positions': 1024}" --left-pad-source False --user-dir examples/waitk --no-progress-bar --max-tokens 8000 --remove-bpe --beam 1 2>&1 | tee -a $output
It generates the following error message:
Traceback (most recent call last):
File "generate.py", line 11, in <module>
cli_main()
File "/home/attn2d/fairseq_cli/generate.py", line 276, in cli_main
parser = options.get_generation_parser()
File "/home/attn2d/fairseq/options.py", line 33, in get_generation_parser
parser = get_parser("Generation", default_task)
File "/home/attn2d/fairseq/options.py", line 197, in get_parser
utils.import_user_module(usr_args)
File "/home/attn2d/fairseq/utils.py", line 350, in import_user_module
importlib.import_module(module_name)
File "/home/anaconda3/envs/py37/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/attn2d/examples/waitk/__init__.py", line 1, in <module>
from . import models, tasks
File "/home/attn2d/examples/waitk/models/__init__.py", line 7, in <module>
importlib.import_module('examples.simultaneous.models.' + model_name)
File "/home/anaconda3/envs/py37/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'examples.simultaneous'
EDIT
Okay, here is more detail about this:
I believe this line is responsible for the error message shared above.
I changed `importlib.import_module('examples.simultaneous.models.' + model_name)` to `importlib.import_module('examples.waitk.models.' + model_name)`.
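For context, the surrounding code in examples/waitk/models/__init__.py presumably follows the usual fairseq auto-import pattern, roughly as sketched below; only the import_module() call itself appears in the traceback, so treat the rest of the loop as an assumption:

```python
# examples/waitk/models/__init__.py -- sketch of the standard fairseq
# auto-import loop; only the import_module() line is confirmed by the traceback.
import importlib
import os

for file in os.listdir(os.path.dirname(__file__)):
    if file.endswith('.py') and not file.startswith('_'):
        model_name = file[: file.find('.py')]
        # was: importlib.import_module('examples.simultaneous.models.' + model_name)
        importlib.import_module('examples.waitk.models.' + model_name)
```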
Then, I got another error:
File "generate.py", line 11, in <module>
cli_main()
File "/home/attn2d/fairseq_cli/generate.py", line 276, in cli_main
parser = options.get_generation_parser()
File "/home/attn2d/fairseq/options.py", line 33, in get_generation_parser
parser = get_parser("Generation", default_task)
File "/home/attn2d/fairseq/options.py", line 197, in get_parser
utils.import_user_module(usr_args)
File "/home/attn2d/fairseq/utils.py", line 350, in import_user_module
importlib.import_module(module_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/attn2d/examples/waitk/__init__.py", line 1, in <module>
from . import models, tasks
File "/home/attn2d/examples/waitk/models/__init__.py", line 8, in <module>
importlib.import_module('examples.waitk.models.' + model_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/attn2d/examples/waitk/__init__.py", line 1, in <module>
from . import models, tasks
File "/home/attn2d/examples/waitk/models/__init__.py", line 8, in <module>
importlib.import_module('examples.waitk.models.' + model_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/attn2d/examples/waitk/models/waitk_transformer.py", line 24, in <module>
from examples.simultaneous.modules import TransformerEncoderLayer, TransformerDecoderLayer
So, I changed this line to `from examples.waitk.modules import TransformerEncoderLayer, TransformerDecoderLayer` as well. Then, when I tried once more, I got the following error:
File "generate.py", line 11, in <module>
cli_main()
File "/home/attn2d/fairseq_cli/generate.py", line 276, in cli_main
parser = options.get_generation_parser()
File "/home/attn2d/fairseq/options.py", line 33, in get_generation_parser
parser = get_parser("Generation", default_task)
File "/home/attn2d/fairseq/options.py", line 197, in get_parser
utils.import_user_module(usr_args)
File "/home/attn2d/fairseq/utils.py", line 350, in import_user_module
importlib.import_module(module_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/attn2d/examples/waitk/__init__.py", line 1, in <module>
from . import models, tasks
File "/home/attn2d/examples/waitk/models/__init__.py", line 8, in <module>
importlib.import_module('examples.waitk.models.' + model_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/attn2d/examples/waitk/__init__.py", line 1, in <module>
from . import models, tasks
File "/home/attn2d/examples/waitk/models/__init__.py", line 8, in <module>
importlib.import_module('examples.waitk.models.' + model_name)
File "/home/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/attn2d/examples/waitk/models/waitk_transformer.py", line 25, in <module>
from examples.waitk.modules import TransformerEncoderLayer, TransformerDecoderLayer
File "/home/attn2d/examples/waitk/modules/__init__.py", line 2, in <module>
from .controller import Controller
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
@ereday @EricLina IIRC this error often comes up in Fairseq because of numpy version issues. If you upgrade numpy to 1.22 or later, it should be fixed (at least that has worked for me numerous times when working on regular MT models). If you're still going down that rabbit hole, it might be worth a try.
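For what it's worth, a quick way to check which numpy the failing environment actually picks up (a sketch, not from the original thread):

```python
# The ValueError above ("Expected 88 from C header, got 80 from PyObject") is
# the usual symptom of a C extension built against a newer numpy than the one
# installed at runtime, so checking the runtime version is a sensible first
# step before upgrading.
import numpy
print(numpy.__version__)  # the suggestion above is to move to 1.22 or later
```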
So, to fix it I commented out the following lines in examples/waitk/modules/__init__.py:
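The commented-out lines aren't reproduced here, but based on the traceback above the edit was presumably along these lines (the submodule holding the transformer layers is a guess):

```python
# examples/waitk/modules/__init__.py -- sketch of the workaround.
# The traceback shows TransformerEncoderLayer/TransformerDecoderLayer being
# imported from this package, so that export stays; the submodule name below
# is hypothetical.
from .transformer_layers import TransformerEncoderLayer, TransformerDecoderLayer

# from .controller import Controller  # commented out: importing Controller raised the numpy ABI ValueError
```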
Next, I tried the generation command given in the README once more:
CUDA_VISIBLE_DEVICES=0 python generate.py pretrained-sources/iwslt14_deen_bpe10k_binaries/ -s de -t en --gen-subset test --path pretrained-sources/model.pt --task waitk_translation --eval-waitk $k --model-overrides "{'max_source_positions': 1024, 'max_target_positions': 1024}" --left-pad-source False --user-dir examples/waitk --no-progress-bar --max-tokens 8000 --remove-bpe --beam 1 2>&1 | tee -a $output
I got this error:
I just gave up after that. @elbayadm, I hope you can help me with this.
Code sample
Environment
I have followed the instructions in the README to set up my environment.
As a result, I have the following libraries in my environment:
Operating system: Linux