adhere to lazy import rules (#806)
Summary:
Pull Request resolved: #806

Lazy imports change `Python` import semantics, specifically when packages and modules are initialized: https://www.internalfb.com/intern/wiki/Python/Cinder/Onboarding/Tutorial/Lazy_Imports/Troubleshooting/

For example, this pattern is not guaranteed to work:

```
import torch.optim
...
torch.optim._multi_tensor.Adam   # may fail to resolve _multi_tensor
```

And this is guaranteed to work:

```
import torch.optim._multi_tensor
...
torch.optim._multi_tensor.Adam   # will always work
```

A recent change to `PyTorch` modified module initialization logic in a way that exposed this issue.

But hasn't this code been working for years? That is the nature of undefined behavior: any change in the environment (in this case, the `PyTorch` code base) can make it fail.
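
The fix in this commit uses the guaranteed-safe form by binding the submodule explicitly with a `from ... import`. A minimal sketch of that pattern (the helper name `build_adamw` is hypothetical, and it assumes a `PyTorch` build that still ships `torch.optim._multi_tensor`):

```
# Safe under lazy imports: the `from ... import` statement forces
# torch.optim._multi_tensor to be initialized before it is used.
from torch.optim import _multi_tensor


def build_adamw(param_groups, lr=1e-3):
    # _multi_tensor is already a bound module object here, so attribute
    # access cannot fail due to deferred package initialization.
    return _multi_tensor.AdamW(param_groups, lr=lr)
```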

Differential Revision: D58881291
fbgheith authored and facebook-github-bot committed Jun 21, 2024
1 parent 08a82e8 commit a122a44
Showing 1 changed file with 2 additions and 1 deletion.
classy_vision/optim/adamw_mt.py (2 additions, 1 deletion)
```
@@ -7,6 +7,7 @@
 from typing import Any, Dict, Tuple
 
 import torch.optim
+from torch.optim import _multi_tensor
 
 from . import ClassyOptimizer, register_optimizer
```
```
@@ -30,7 +31,7 @@ def __init__(
         self._amsgrad = amsgrad
 
     def prepare(self, param_groups) -> None:
-        self.optimizer = torch.optim._multi_tensor.AdamW(
+        self.optimizer = _multi_tensor.AdamW(
             param_groups,
             lr=self._lr,
             betas=self._betas,
```
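
For reference, a hedged usage sketch of the patched code path (the parameter values are hypothetical, and it assumes a `PyTorch` version where `torch.optim._multi_tensor` exists), constructing the optimizer the same way `prepare()` does after this change:

```
import torch
from torch.optim import _multi_tensor  # explicit submodule import, as in the diff

# Hypothetical parameters; any iterable of parameters or param groups works.
params = [torch.nn.Parameter(torch.zeros(4, 4))]

# Mirrors the patched prepare(): attribute access on _multi_tensor cannot
# fail under lazy imports because the submodule was imported explicitly.
opt = _multi_tensor.AdamW(params, lr=1e-3, betas=(0.9, 0.999))
```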

