
benchmark costs of preparing activations #3

Open
jlquinn opened this issue Sep 9, 2020 · 1 comment

jlquinn commented Sep 9, 2020

When neural net code runs, typically one matrix is the parameters, which can be reorganized once and reused. The other matrix is typically activations, which must be prepared on every matrix multiply call. It would be interesting to have benchmarks that include the time for preparing the activation values.
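For illustration, a minimal sketch of such a benchmark loop is below: the parameter matrix is prepared once outside the timed region, while the activation matrix is re-prepared on every multiply so its cost is included in the measurement. `PrepareMatrix` and `Multiply` here are naive placeholder implementations standing in for a backend's quantize/pack and int8 GEMM routines; they are not the actual functions benchmarked in this repository.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// Naive stand-ins for a backend's quantize/pack and int8 GEMM routines.
// Illustrative only; not the API of this repository or of any specific library.
void PrepareMatrix(const float* in, int8_t* out, size_t size, float quant_mult) {
  for (size_t i = 0; i < size; ++i) out[i] = static_cast<int8_t>(in[i] * quant_mult);
}

void Multiply(const int8_t* a, const int8_t* b, int32_t* c,
              size_t a_rows, size_t width, size_t b_cols) {
  for (size_t i = 0; i < a_rows; ++i)
    for (size_t j = 0; j < b_cols; ++j) {
      int32_t sum = 0;
      for (size_t k = 0; k < width; ++k) sum += a[i * width + k] * b[k * b_cols + j];
      c[i * b_cols + j] = sum;
    }
}

int main() {
  const size_t a_rows = 64, width = 256, b_cols = 256, iterations = 100;
  std::vector<float> a(a_rows * width, 0.5f), b(width * b_cols, 0.25f);
  std::vector<int8_t> a_prep(a_rows * width), b_prep(width * b_cols);
  std::vector<int32_t> c(a_rows * b_cols);

  // Parameters (weights) are reorganized once, outside the timed loop.
  PrepareMatrix(b.data(), b_prep.data(), b.size(), 127.0f);

  auto start = std::chrono::steady_clock::now();
  for (size_t i = 0; i < iterations; ++i) {
    // Activations change on every call, so their preparation is part of the timed work.
    PrepareMatrix(a.data(), a_prep.data(), a.size(), 127.0f);
    Multiply(a_prep.data(), b_prep.data(), c.data(), a_rows, width, b_cols);
  }
  std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
  std::cout << "prepare-A + multiply: " << elapsed.count() / iterations << " s/call\n";
}
```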

XapaJIaMnu (Owner) commented

Yes, you are correct, we should factor in the Prepare functions. So far I have only benchmarked the best-case scenario, which is rather unrealistic, since activations do not come prepared.

On the other hand, I realised that OneDNN doesn't expose its prepare routine...
