@lucaderi thanks for the article, I will study it and update with clarifications.
P.S. I found some interesting techniques and a few research papers related to new optimization techniques we can use with TinyML. I will update this ASAP once I finalize it. Based on that, we can plan the next steps.
Incorporation of ONNX Runtime:
Using ONNX Runtime would let us deploy machine learning models across various platforms, improving the system's flexibility and performance.
https://onnxruntime.ai/docs/install/
Adoption of libonnx: libonnx is a lightweight, portable C99 ONNX inference engine; adopting it would enable efficient inference on embedded devices, especially those with hardware-acceleration support.
https://github.com/xboot/libonnx
Why this is important:
AI/ML models can be developed using various technologies and are easy to integrate with nDPI without conversions, since they can all run on top of ONNX.
@IvanNardi This is as per our initial discussion; let's discuss it in more detail and fine-tune the idea so it works in a more portable and modular manner.