autograd.optim wrapper does not update state value #121
That's fine, it gives you the states table it uses so you can manually change things per weight tensor (if you wish), but you don't actually pass it back to any function. Here's how to use the optim wrapper:
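(The snippet originally posted here isn't preserved in this capture; the following is a minimal sketch of the usage pattern the comment describes, assuming the `autograd.optim.sgd` wrapper signature from torch-autograd and made-up placeholder names and shapes for `f`, `params`, and the data.)

```lua
local autograd = require 'autograd'
local torch = require 'torch'

-- A toy differentiable loss: 1x10 input, linear model, squared error.
local function f(params, x, y)
   local yHat = x * params.W + params.b
   return torch.sum(torch.pow(yHat - y, 2))
end
local df = autograd(f)

local params = { W = torch.randn(10, 1), b = torch.randn(1, 1) }
local state = { learningRate = 0.01 }

-- Wrap the optimizer once; it returns a step function plus the
-- per-tensor states table it keeps reusing internally.
local optimfn, states = autograd.optim.sgd(df, state, params)

for i = 1, 100 do
   -- Each call computes grads, applies the SGD update in place,
   -- and returns the gradients and the loss.
   local grads, loss = optimfn(torch.randn(1, 10), torch.randn(1, 1))
end
```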
The first test case works because the learning rate does not decay. However, in most cases we want the learning rate to decay, and that requires the number of iterations, which is stored in the states variable. The moment matrices are also stored in states. The call local grads, loss = optimizer(data, target) will then always assume the iteration count is zero (or one) and that the moment matrices are empty, on every iteration.
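For context on what that state holds (the field names below follow the torch optim package, not anything posted in this thread): optim.sgd keeps its decay counter and momentum buffer in the same table, and optim.adam keeps its moment estimates there, so a sketch of the fields in question is:

```lua
-- Sketch of the fields in question (names follow the torch optim
-- package). If the wrapper rebuilt this table on every call,
-- evalCounter would stay at zero and the effective learning rate
-- lr / (1 + evalCounter * learningRateDecay) would never decay.
local sgdState = {
   learningRate      = 1e-2,
   learningRateDecay = 1e-4,
   momentum          = 0.9,
   -- filled in and updated by optim.sgd itself:
   --   evalCounter : number of update steps taken so far
   --   dfdx        : momentum buffer (previous update direction)
}

-- Likewise for optim.adam, the "moment matrices" live in its state:
--   t : timestep, m : first moment estimate, v : second moment estimate
local adamState = { learningRate = 1e-3 }
```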
@eugenium there doesn't seem to be an issue. state is created once and then kept as a local variable in optimizer.
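In other words (a schematic illustration of the closure pattern being described, not the actual torch-autograd source): the state table is built once when the optimizer is created, and the returned step function closes over it, so every call mutates the same table:

```lua
-- Schematic only: state is created once; each call to the returned
-- step function keeps updating that same table.
local function makeOptimizer(initialState)
   local state = initialState or {}
   local function step()
      state.evalCounter = (state.evalCounter or 0) + 1
      -- ... a real optimizer would also update moments/buffers here ...
      return state.evalCounter
   end
   return step, state
end

local step, state = makeOptimizer({ learningRate = 0.01 })
step(); step(); step()
print(state.evalCounter)  -- 3: the count persisted across calls
```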
Ah yeah, I see now. Thanks.
@ghostcow So, is there a complete working example (like the mnist one) of how to use Optim?
It seems that the autograd.optim wrapper doesn't update the "state" value. It outputs "states", but it is impossible to use them iteratively.