Hi there, I am using nGraph to accelerate my model.
As my CPU is not in the Xeon series, I built nGraph and TensorFlow from source inside Docker, following Option 2 in the README. The build succeeded and passed the model test. However, inference is much slower when using the nGraph backend.
Could anyone point out a possible reason for this?
By the way, I noticed there are recommended settings for Xeon-series CPUs (https://ngraph.nervanasys.com/docs/latest/frameworks/generic-configs.html#ngraph-enabled-intel-xeon). I am wondering whether those environment variable settings have a large effect on performance.
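For reference, the recommendations in that guide are mostly OpenMP threading variables. A minimal sketch of applying them, assuming a 4-physical-core machine and the `ngraph_bridge` Python package (both assumptions, not details from this issue), would be:

```python
import os

# OpenMP/KMP tuning variables of the kind the Xeon guide recommends.
# They must be set before TensorFlow first loads, since the OpenMP
# runtime reads them once at initialization. Values are illustrative:
# set OMP_NUM_THREADS to your physical core count.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["KMP_BLOCKTIME"] = "1"

import tensorflow as tf
import ngraph_bridge  # registers the nGraph backend with TensorFlow
```

Whether these help on a non-Xeon CPU is exactly the open question here; they mainly control thread pinning and count, which matter on any multi-core machine.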
Any hint is highly appreciated!!