Ability to specify output node names in build_model() for PyTorch ONNX export #293

pcolangelo1 opened this issue May 12, 2023 · 0 comments

In the current implementation, the `torch.onnx.export()` call made inside the `build_model` function lets the exporter assign output node names automatically, which makes it difficult to select specific outputs programmatically. This is particularly problematic for graphs with multiple outputs, such as LSTM models.

I propose adding an optional `output_names` parameter to `build_model` so that the user can specify the output node names manually. This would make it easier to chain graphs together, i.e., to feed one ONNX graph with the output of another.

Alternatively (or as the default behavior), `output_names` could be populated with the model's expected output names.

This request is specifically for PyTorch, but it could be extended to other frameworks as well.

Example function prototype with `output_names`:

```python
def build_model(
    model: build.UnionValidModelInstanceTypes = None,
    inputs: Optional[Dict[str, Any]] = None,
    output_names: Optional[List[str]] = None,
    build_name: Optional[str] = None,
    cache_dir: str = build.DEFAULT_CACHE_DIR,
    monitor: bool = True,
    rebuild: Optional[str] = None,
    sequence: Optional[List[stage.Stage]] = None,
    quantization_samples: Collection = None,
    onnx_opset: int = build.DEFAULT_ONNX_OPSET,
) -> omodel.BaseModel:
    ...
```
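As a sketch of how the default case might work, a small helper could resolve the names before `build_model` forwards them to the exporter. The helper name `resolve_output_names` and the positional fallback naming scheme are my own illustration, not part of any existing API:

```python
from typing import List, Optional


def resolve_output_names(
    output_names: Optional[List[str]], num_outputs: int
) -> List[str]:
    """Hypothetical helper: pick the ONNX output node names.

    Explicit names are validated and used as-is; otherwise we fall back
    to positional defaults so downstream tooling can still address each
    output by a predictable name.
    """
    if output_names is None:
        # Default case: generate stable positional names.
        return [f"output_{i}" for i in range(num_outputs)]
    if len(output_names) != num_outputs:
        raise ValueError(
            f"expected {num_outputs} output names, got {len(output_names)}"
        )
    return output_names


# Inside build_model, the resolved names would then be passed through to
# the exporter, e.g. torch.onnx.export(..., output_names=resolved).
```

Since `torch.onnx.export()` already accepts an `output_names` argument, `build_model` would only need to thread the parameter through.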