According to the description, the models used by OnnxStream first have to be converted to ONNX format and then to TXT. But if you are able to get the models in ONNX format, why not use the onnxruntime library as a third-party dependency to do the work? Is there a special reason?
hi,
the main reason is that OnnxStream is able to "stream" the parameters of a model. This makes it possible to run very large models on devices with very little RAM (sacrificing inference speed as a result). The alternative would be not being able to run these models on those devices at all :-)
Vito
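To illustrate the idea described above, here is a minimal sketch of weight streaming in C++. It is not the actual OnnxStream API: the file layout, the `load_layer_weights` helper, and the `run_layer` placeholder are all hypothetical. The point is only that each layer's parameters are read from disk right before they are needed and freed right after, so peak RAM stays close to the size of the largest single layer instead of the whole model.

```cpp
// Hypothetical sketch of weight streaming (not the real OnnxStream API):
// load one layer's parameters from disk, run the layer, discard the weights,
// then move on to the next layer.

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read one layer's weights from its own file on disk (illustrative layout).
static std::vector<float> load_layer_weights(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<float> weights;
    float value = 0.0f;
    while (file.read(reinterpret_cast<char*>(&value), sizeof(value)))
        weights.push_back(value);
    return weights;
}

// Placeholder for the actual operator (matmul, conv, ...).
static std::vector<float> run_layer(const std::vector<float>& input,
                                    const std::vector<float>& weights)
{
    std::vector<float> output(input.size(), 0.0f);
    for (size_t i = 0; i < input.size(); ++i)
        output[i] = input[i] * (weights.empty() ? 1.0f : weights[i % weights.size()]);
    return output;
}

int main()
{
    std::vector<float> activations(16, 1.0f); // dummy input

    // Each iteration loads only one layer's parameters; the previous layer's
    // weights go out of scope and are freed before the next load, so memory
    // usage never grows with the total number of layers.
    for (int layer = 0; layer < 3; ++layer) {
        std::string path = "weights_layer_" + std::to_string(layer) + ".bin";
        std::vector<float> weights = load_layer_weights(path); // streamed from disk
        activations = run_layer(activations, weights);
    }

    std::cout << "done, output size: " << activations.size() << "\n";
    return 0;
}
```

A conventional runtime such as onnxruntime instead materializes all of the model's tensors in memory before inference, which is exactly what a device with very little RAM cannot afford.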