
Integrate ONNX Runtime with nDPI for AI model execution #2789


Open
mmanoj opened this issue Apr 3, 2025 · 2 comments

Comments


mmanoj commented Apr 3, 2025

Incorporation of ONNX Runtime:
Using ONNX Runtime would let nDPI deploy machine learning models across a wide range of platforms, improving the system's flexibility and performance.

https://onnxruntime.ai/docs/install/

Adoption of libonnx: libonnx is a lightweight, portable C99 ONNX inference engine that is well suited to embedded devices, especially those with hardware-acceleration support.

https://github.com/xboot/libonnx

Why this is important:

AI/ML models can be developed with a variety of frameworks and then integrated into nDPI without format conversions, since they all run on top of ONNX.

@IvanNardi This is as per our initial discussion; let's discuss it in more detail and fine-tune the idea so it works in a more portable and modular manner.

@lucaderi
Member


mmanoj commented Apr 10, 2025

@lucaderi thanks for the article; I will study it and follow up with any clarifications.

P.S. I found some interesting techniques and a few research papers on new optimization methods we could use with TinyML. I will post an update as soon as I finalize my reading; based on that, we can plan the next steps.
