[benchmarks] Add docs file. #8023
base: master
Conversation
> **PyTorch/XLA Metrics:** (repeat-specific) the flag `--dump-pytorch-xla-metrics` creates a
> new file, dumping PyTorch/XLA metrics, such as graph compiling and execution information.
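For reference, the counters and timers that end up in that dumped file can also be inspected programmatically through PyTorch/XLA's `torch_xla.debug.metrics` API. Below is a minimal sketch, assuming `torch_xla` is installed and an XLA device is available; the file produced by `--dump-pytorch-xla-metrics` may be formatted differently.

```python
# Minimal sketch: trigger a graph compile/execute, then read the
# PyTorch/XLA metrics that track them.
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(4, 4, device=device)
y = (x @ x).sum()
xm.mark_step()  # force graph compilation and execution

# Full human-readable report (compile/execute timers, counters).
print(met.metrics_report())

# Individual metrics, e.g. graph compile and execute samples so far.
print("CompileTime:", met.metric_data("CompileTime"))
print("ExecuteTime:", met.metric_data("ExecuteTime"))
```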
Can we have a section on nightly CI runs (TPU and GPU)?
Sure. But I don't really have any information on those nightly CI runs. What should it contain?
@zpcore can you please help Yukio with this section?
Additionally: ideally, we also want to run the verifier as part of our tests.
> **PyTorch/XLA Metrics:** (repeat-specific) the flag `--dump-pytorch-xla-metrics` creates a
> new file, dumping PyTorch/XLA metrics, such as graph compiling and execution information.
Can we include a troubleshooting section? It should cover the pitfalls we ran into as well as approaches to debug the benchmarks efficiently.
@miladm I don't think it is appropriate to document every resolved issue here. If there are any features that can be used to debug different performance issues, we should absolutely make sure they are captured.
This PR introduces a docs/torchbench.md file. It explains how to use the benchmarking scripts for running, troubleshooting, and debugging the models in Torchbench.
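For illustration, a run of the benchmarking scripts might look like the sketch below. The runner path and every flag other than `--dump-pytorch-xla-metrics` (quoted in the diff above) are assumptions for illustration only and may differ from what docs/torchbench.md actually documents.

```python
# Hedged sketch: launch the benchmark runner for a single Torchbench
# model with PyTorch/XLA metrics dumping enabled.
import subprocess

cmd = [
    "python", "benchmarks/experiment_runner.py",  # assumed runner location
    "--suite-name=torchbench",                    # assumed flag: pick the Torchbench suite
    "--filter=resnet50",                          # assumed flag: restrict to one model
    "--dump-pytorch-xla-metrics",                 # flag quoted in the diff above
]
subprocess.run(cmd, check=True)
```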