
fix(backend): mps should not use non_blocking #6549

Merged

merged 2 commits on Jun 27, 2024

Commits on Jun 27, 2024

  1. fix(backend): mps should not use non_blocking

Moving tensors from CPU to MPS with `non_blocking=True` can produce black outputs. MPS to CPU appears to be fine. See:
    - pytorch/pytorch#107455
    - https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
    
    Changes:
    - Add properties for each device on `TorchDevice` as a convenience.
- Add a `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the `non_blocking` flag to use when moving a tensor to that device (see the sketch after this commit entry).
    - Update model patching and caching APIs to use this new utility.
    
    Fixes: #6545
    psychedelicious committed Jun 27, 2024
Commit: c7562dd
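
A minimal sketch of the utility this commit describes. The `TorchDevice` class name, the `get_non_blocking` method, and the device-property idea come from the commit message; the internals here are assumptions, not the actual InvokeAI implementation:

```python
import torch


class TorchDevice:
    """Sketch of the device utility described above (internals assumed)."""

    # Convenience device properties, per the commit message.
    CPU = torch.device("cpu")
    MPS = torch.device("mps")

    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to to_device.

        Async (non_blocking=True) copies from CPU to MPS can yield black
        outputs (see pytorch/pytorch#107455), so force a blocking copy
        whenever the destination is MPS.
        """
        return to_device.type != "mps"
```

Callers would then move tensors with something like `t.to(device, non_blocking=TorchDevice.get_non_blocking(device))` instead of hard-coding `non_blocking=True`.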
  2. ruff format

    RyanJDick committed Jun 27, 2024
Commit: 14775cc