I am trying to export a joint_conformer_fastspeech2_hifigan model so that it can be loaded as an ONNX model. Here is the code I used:
from espnet_onnx.export import TTSModelExport

m = TTSModelExport()
m.export(text2speech, 'custom_model', quantize=True)
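For reference, this is roughly how I intend to load the exported model afterwards. It is only a sketch based on my reading of the espnet_onnx inference API; the use_quantized flag and the input text are assumptions on my side:

from espnet_onnx import Text2Speech

# Load the ONNX model that m.export(...) wrote under the 'custom_model' tag
# in the espnet_onnx cache directory.
tts = Text2Speech(tag_name='custom_model', use_quantized=True)
output_dict = tts('Hello world')  # expected to return the generated waveform among its outputs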
I also tried the above with a pre-trained model:
from espnet_onnx.export import TTSModelExport

tag_name = 'kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan'
m = TTSModelExport()
m.export_from_pretrained(tag_name)
But in both cases I got the same error below:
Traceback (most recent call last):
  File "/Users/vigourav/development/tts_bitbucket/tts_inference/inference_api.py", line 853, in <module>
    m.export(text2speech, 'custom_model', quantize=True)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet_onnx/export/tts/export_tts.py", line 56, in export
    self._export_tts(tts_model, export_dir, verbose)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet_onnx/export/tts/export_tts.py", line 173, in _export_tts
    self._export_model(model, verbose, path)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet_onnx/export/tts/export_tts.py", line 154, in _export_model
    torch.onnx.export(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/onnx/utils.py", line 516, in export
    _export(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/onnx/utils.py", line 1613, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/onnx/utils.py", line 1135, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/onnx/utils.py", line 1011, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/onnx/utils.py", line 915, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/jit/_trace.py", line 1296, in _get_trace_graph
    outs = ONNXTracedModule(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/jit/_trace.py", line 138, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/jit/_trace.py", line 129, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet_onnx/export/tts/models/tts_models/fastspeech2.py", line 146, in forward
    _, outs, d_outs, p_outs, e_outs = self._forward(
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet_onnx/export/tts/models/tts_models/fastspeech2.py", line 195, in _forward
    p_outs = self.pitch_predictor(hs, d_masks.unsqueeze(-1))
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/Users/vigourav/anaconda3/envs/oracle_espnet/lib/python3.9/site-packages/espnet2/tts/fastspeech2/variance_predictor.py", line 84, in forward
    xs = xs.masked_fill(x_masks, 0.0)
RuntimeError: masked_fill_ only supports boolean masks, but got mask with dtype float
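The failure is the masked_fill call at the bottom of the trace: recent PyTorch versions only accept boolean masks there, and the export wrapper appears to pass a float mask into the variance predictor. A minimal sketch (not taken from the traceback, the shapes are made up) that reproduces the same RuntimeError on a recent PyTorch:

import torch

xs = torch.randn(2, 5, 8)          # dummy "hidden states"
float_mask = torch.zeros(2, 5, 1)  # float mask, like the one reaching masked_fill above

xs.masked_fill(float_mask.bool(), 0.0)  # works once the mask is cast to bool
xs.masked_fill(float_mask, 0.0)         # RuntimeError: masked_fill_ only supports boolean masks

My guess is the mask just needs a .bool() cast somewhere in the espnet_onnx export wrapper before it reaches the pitch/energy predictors, but I have not verified that.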
Is the joint Conformer-FastSpeech2 + HiFi-GAN model not supported yet? If not, when can we expect support for it?
Hello, I ran into the same error a few days ago when trying to export a JETS model to ONNX.
It had worked fine in the past, so I downgraded a few libraries to older versions, as listed below.
espnet==202402 -> espnet==202308
torch==2.1.0 -> torch==1.13.1
torchaudio==2.1.0 -> torchaudio==0.13.1
espnet-onnx==0.2.0
With those versions it works fine, so try downgrading the related libraries (a pip sketch follows below). I haven't had time to look at the code, so I don't know the exact cause yet.
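In case it helps, the downgrade I did was roughly equivalent to the following command; the pins are just the versions listed above, so adjust them for your own environment:

pip install "espnet==202308" "torch==1.13.1" "torchaudio==0.13.1" "espnet-onnx==0.2.0"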