Paper & implementation differences #6
Comments

Truncated excerpts from the thread (they reply to the issue below):

- "For (2), I think the authors apply the normalization factor before taking the gradient. If you look at …"
- "I believe there's another difference between Alg. 1 of the paper and the code. In …"
- "@berthyf96, for your second point regarding `EpsilonXMeanProcessor.predict_xstart`, I also did not understand the difference until I realized that the score function …"
- "@claroche-r thanks so much for clarifying that!"
- "thank you!"
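On the first excerpt's point: if (as the excerpt suggests) the code differentiates the plain, unsquared residual norm, then by the chain rule `grad ||r|| = grad ||r||^2 / (2 ||r||)`, so the `1 / ||y - A(x0_hat)||` normalization of Alg. 1 is applied implicitly, up to a factor of 2 that can be absorbed into the constant zeta. A minimal sketch of that identity (my own, with a random linear map standing in for the forward operator A), verified with autograd:

```python
# Sketch (mine, not the repo's code): differentiating the unsquared norm
# equals the normalized gradient of the squared norm,
#   grad ||r||  ==  grad ||r||^2 / (2 ||r||),   with r = y - A x.
import torch

torch.manual_seed(0)
A = torch.randn(8, 4)   # random linear operator standing in for the forward model
y = torch.randn(8)

x = torch.randn(4, requires_grad=True)
norm = torch.linalg.norm(y - A @ x)          # unsquared residual norm
(g_norm,) = torch.autograd.grad(norm, x)

x2 = x.detach().clone().requires_grad_(True)
sq = torch.sum((y - A @ x2) ** 2)            # squared residual norm
(g_sq,) = torch.autograd.grad(sq, x2)
g_sq_normalized = g_sq / (2 * torch.linalg.norm(y - A @ x2.detach()))

print(torch.allclose(g_norm, g_sq_normalized, atol=1e-6))  # True
```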
Hi,

There are a few differences between the paper and this repository, and it would be wonderful if you could clarify the reasons behind them:

(1) In the paper, the measurement noise has `sigma_y=0.05`, and indeed in the config files `config['noise']['sigma']=0.05`. But while the images are stretched from [0,1] to [-1,1], the sigma is unchanged, meaning that in practice the added noise has std `sigma/2` relative to the original [0,1] range, i.e. `y_n` is cleaner than the settings reported in the paper. This can easily be checked by computing `torch.std(y - y_n)` after `y` and `y_n` are created in `sample_condition.py`.
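For reference, a minimal sketch of that check (hypothetical tensor names; the identity map stands in for the measurement operator):

```python
# Sketch (hypothetical names): the image is stretched from [0, 1] to [-1, 1],
# but sigma stays at 0.05, so the effective noise level on the original
# [0, 1] scale is sigma / 2.
import torch

sigma = 0.05                                   # config['noise']['sigma']
img = torch.rand(1, 3, 256, 256) * 2.0 - 1.0   # image stretched to [-1, 1]
y = img                                         # identity stand-in for A(x)
y_n = y + sigma * torch.randn_like(y)           # noisy measurement

print(torch.std(y_n - y))        # ~0.05 in [-1, 1] units ...
print(torch.std(y_n - y) / 2.0)  # ... i.e. ~0.025 = sigma/2 on the [0, 1] scale
```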
(2) In Alg. 1 of the paper, the gradient is scaled by a normalized step size, `zeta_i = zeta / ||y - A(x0_hat)||`. In the code, the constant is defined in `config['conditioning']['params']['scale']` and used in `PosteriorSampling.conditioning()` to scale the gradient, but the gradient is never normalized in the first place (in `PosteriorSampling.grad_and_value()`, for example); for instance, `configs/super_resolution_config.yaml` uses a scale of 0.3. By adding the gradient normalization, the method seems to break.
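To make the contrast concrete, here is a schematic sketch (my own, with hypothetical names; for brevity the gradient is taken with respect to `x0_hat` rather than back through the diffusion network as the repo does) of the constant-scale step versus the residual-normalized step written in Alg. 1:

```python
# Schematic sketch of the two conditioning-step variants discussed above
# (not the repo's code; gradients here are w.r.t. x0_hat for brevity).
import torch

def conditioning_step(x_t, x0_hat, y, A, scale, normalize):
    """One step of  x_t <- x_t - step * grad ||y - A(x0_hat)||^2."""
    x0_hat = x0_hat.detach().requires_grad_(True)
    residual = y - A(x0_hat)
    (grad,) = torch.autograd.grad(torch.sum(residual ** 2), x0_hat)
    if normalize:
        # Alg. 1 of the paper: zeta_i = zeta / ||y - A(x0_hat)||
        step = scale / torch.linalg.norm(residual.detach())
    else:
        # The repo: constant config['conditioning']['params']['scale']
        step = scale
    return x_t - step * grad

x_t = torch.randn(1, 3, 8, 8)
x0_hat = torch.randn_like(x_t)
y = torch.randn_like(x_t)
A = lambda z: z  # identity stand-in for the measurement operator
x_next = conditioning_step(x_t, x0_hat, y, A, scale=0.3, normalize=False)
```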
Thank you for your time and effort!