Update packaging in AOTI path #896
Weird that the pin bump in #1144 doesn't hit the same errors we see here. Side question: is it possible for us to push `-l 2` into the DSO as metadata as well, or does that require going through pytorch/pytorch? It's an arg when running the binary, but we know AOT what the arg should be.
This command fails out of the box; I had to create the directory first.
The command to build the runner fails.
That should be fixed by pytorch/pytorch#138919.
Seems like we need to bump the PyTorch pin as well?
There's a pin bump in https://github.com/pytorch/torchchat/pull/1319/files that's not actively being worked on; it seems it's not a free bump (on top of the infra issue we're seeing here).
Currently running into this issue when trying to run using the runner.
Path issue; just need to update the instructions in the first comment accordingly.
Added an `aoti_package` path, dependent on pytorch/pytorch#129895. A follow-up will be to delete the `--output-dso-path` flag.

To export, use `--output-aoti-package-path` to specify a file with a `.pt2` extension. This will generate an artifact containing all of the AOTI-generated files:

```
python3 torchchat.py export stories15M --output-aoti-package-path exportedModels/stories15M_artifacts_cpu.pt2 --device cpu
```
If we look into the contents of the package, we can see the following:
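A `.pt2` package is a zip archive, so its contents can also be listed directly from the shell, for example:

```
# .pt2 AOTI packages are zip archives; this lists the AOTI-generated files inside.
unzip -l exportedModels/stories15M_artifacts_cpu.pt2
```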
To run with Python:

```
python3 torchchat.py generate stories15M --aoti-package-path exportedModels/stories15M_artifacts_cpu.pt2 --prompt "Hello my name is"
```
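For context, this is roughly what the `--aoti-package-path` flow does under the hood: load the package with AOTInductor's package loader and call the result like a module. A minimal sketch, assuming `torch._inductor.aoti_load_package` from the packaging work this PR depends on (pytorch/pytorch#129895); the call signature depends on how the model was exported:

```python
# Minimal sketch: loading an AOTI .pt2 package directly in Python.
# aoti_load_package comes from the AOTI packaging APIs this PR builds on;
# names may vary across PyTorch versions.
import torch

compiled = torch._inductor.aoti_load_package(
    "exportedModels/stories15M_artifacts_cpu.pt2"
)
tokens = torch.randint(0, 32000, (1, 8))  # illustrative input; the real
out = compiled(tokens)                    # signature depends on the export
```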
To run with the C++ runner, first build the runner binary:
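torchchat builds the AOTI runner with a helper script along these lines (the exact script path is an assumption based on the docs of this era; verify against the repo):

```
# Hedged: helper script location per contemporary torchchat docs.
bash scripts/build_native.sh aoti
```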
To run:
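A hedged example invocation of the runner binary, with `-z` pointing at the model's tokenizer, `-l` the llama version, and `-i` the prompt (binary path and flags are assumptions based on torchchat's documented `aoti_run` usage):

```
# Hedged example; substitute the real tokenizer path for the placeholder.
cmake-out/aoti_run exportedModels/stories15M_artifacts_cpu.pt2 \
    -z /path/to/tokenizer.model -l 2 -i "Hello my name is"
```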
Recent AOTI changes in PyTorch also mean users no longer need to specify `--device` when generating: we save the device as metadata at export time and read it back at runtime to determine which device to use. A follow-up could extend this metadata to include model information, so that users also don't need to specify the tokenizer version at runtime.
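A sketch of how such metadata can ride along in the package, assuming the `aot_inductor.metadata` inductor config and `aoti_compile_and_package` (both names are assumptions about the packaging API and may differ across PyTorch versions; the `"device"` key is purely illustrative):

```python
# Sketch: stashing the device as user metadata at export time so the runtime
# can recover it instead of requiring --device. Config/API names are
# assumptions; verify against your PyTorch version.
import torch

class Add(torch.nn.Module):
    def forward(self, x):
        return x + 1

ep = torch.export.export(Add(), (torch.randn(4),))
torch._inductor.aoti_compile_and_package(
    ep,
    package_path="add_cpu.pt2",
    inductor_configs={"aot_inductor.metadata": {"device": "cpu"}},  # illustrative key
)
```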