When neural net code runs, typically one matrix is the parameters, which can be reorganized once and reused. The other matrix is typically activations, which must be prepared on every matrix multiply call. It would be interesting to have benchmarks that include the time for preparing the activation values.
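A minimal sketch of what such a benchmark loop could look like, assuming an int8 GEMM with separate prepare steps for each operand. The `prepare_weights`, `prepare_activations`, and `multiply` names are hypothetical stand-ins for the library's actual prepare/multiply routines; the point is the loop structure, with weights prepared once outside the timed region and activations prepared inside it, so the per-call number can be compared against a multiply-only measurement.

```cpp
#include <chrono>
#include <cstdint>
#include <vector>

// Placeholder stubs so the sketch compiles; swap in the real library calls.
void prepare_weights(const std::vector<float>&, std::vector<int8_t>&) {}
void prepare_activations(const std::vector<float>&, std::vector<int8_t>&) {}
void multiply(const std::vector<int8_t>&, const std::vector<int8_t>&,
              std::vector<float>&) {}

int main() {
  const int rows = 256, inner = 256, cols = 256, iterations = 1000;
  std::vector<float> A_f(rows * inner), B_f(inner * cols);
  std::vector<int8_t> A_q(rows * inner), B_q(inner * cols);
  std::vector<float> C(rows * cols);

  // Parameters: reorganized once and reused, so kept outside the timed loop.
  prepare_weights(B_f, B_q);

  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) {
    // Activations: must be prepared before every multiply, so the
    // preparation cost belongs inside the timed region.
    prepare_activations(A_f, A_q);
    multiply(A_q, B_q, C);
  }
  auto elapsed = std::chrono::steady_clock::now() - start;
  double ns_per_call =
      std::chrono::duration<double, std::nano>(elapsed).count() / iterations;
  (void)ns_per_call;  // report alongside the multiply-only numbers
  return 0;
}
```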
Yes, you are correct, we should factor in the preparation functions. So far I have only benchmarked the best-case scenario, which is rather unrealistic, since activations do not come prepared.
On the other hand, I realised that OneDNN doesn't expose its prepare routine...