In the current implementation of the torch.onnx.export() method used via the build_model function, the output node names are assigned automatically by the ONNX exporter, making it difficult to programmatically select specific outputs. This is particularly problematic for graphs with multiple outputs, such as LSTM models.
I propose adding an optional output_names parameter to the build_model function that would allow the user to specify the output node names manually. This would facilitate integration with programs that feed the output of one ONNX graph into another.
Alternatively (or as a default), output_names could be set to the model's expected output names.
This request is specifically for PyTorch, but it could be extended to other frameworks as well.
Example function prototype with output_names:
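As a sketch of what this could look like: the build_model signature and body below are hypothetical (the project's actual wrapper is not shown here), but the output_names argument it forwards is a real parameter of torch.onnx.export, which names the graph's output nodes in export order.

```python
from typing import Optional, Sequence


def build_model(
    model,                                       # torch.nn.Module to export
    dummy_input,                                 # example input(s) for tracing
    file_path: str,                              # destination .onnx file
    output_names: Optional[Sequence[str]] = None,  # proposed new parameter
):
    """Export `model` to ONNX, optionally naming its output nodes.

    Hypothetical wrapper illustrating the proposed API; the real
    build_model may take additional arguments.
    """
    import torch  # imported lazily so the sketch stands alone

    torch.onnx.export(
        model,
        dummy_input,
        file_path,
        # Names are matched positionally to the model's outputs; for an
        # LSTM this might be e.g. ["output", "h_n", "c_n"].
        output_names=list(output_names) if output_names else None,
    )
```

A caller could then select a specific output of the exported graph by the name it supplied, e.g. `build_model(lstm, x, "lstm.onnx", output_names=["output", "h_n", "c_n"])`, rather than relying on auto-generated node names.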