In the paper there is a hyperparameter $\alpha$ that is used to scale `norm_grad`. It usually seems to be set as `alpha = 1 / norm` or `alpha = 0.1 / norm` (mentioned in Appendix C of the paper, in each task's subsection). However, in the code there does not seem to be any such scaling. Why is that the case, and what difference does it make?
For example, for inpainting the paper gives $\alpha = 1 / \|y - P\hat{x}_0\|$. Doesn't $\|y - P\hat{x}_0\|$ get smaller as sampling progresses? That would mean the gradient step gets stronger the closer you get to the end of sampling, which is not what I would usually expect.
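To make the question concrete, here is a small sketch of the two update rules I am contrasting. This is not the repo's actual code: the names (`measurement`, `mask`, `denoise`, `zeta`, `scale`) and the toy `denoise` stand-in are just illustrative assumptions so the snippet runs on its own.

```python
import torch

# Toy stand-ins (hypothetical): x_t is the current sample, denoise() plays the
# role of the network's x_0_hat(x_t), and mask implements the inpainting operator P.
torch.manual_seed(0)
x_t = torch.randn(1, 3, 8, 8, requires_grad=True)
mask = (torch.rand(1, 1, 8, 8) > 0.5).float()
measurement = mask * torch.randn(1, 3, 8, 8)      # y = P x + n

def denoise(x):
    return 0.9 * x                                 # placeholder for x_0_hat(x_t)

x_0_hat = denoise(x_t)
residual = measurement - mask * x_0_hat            # y - P x_0_hat
norm = torch.linalg.norm(residual)                 # ||y - P x_0_hat||
grad = torch.autograd.grad(norm ** 2, x_t)[0]      # gradient of the squared residual

# Variant A -- my reading of Appendix C: alpha = zeta / ||y - P x_0_hat||,
# so the correction is rescaled by the current residual at every step.
zeta = 1.0
x_next_paper = x_t - (zeta / norm.detach()) * grad

# Variant B -- what the released code appears to do: a fixed scale,
# with no residual-dependent alpha.
scale = 0.3
x_next_code = x_t - scale * grad
```

If the residual shrinks toward the end of sampling, Variant A's effective step size grows relative to Variant B, which is the behavior I find counterintuitive.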