Log-likelihood Evaluation Question #79

In the Readme it is claimed that k-diffusion supports log likelihood calculation. Is there any demo code in this repo?

Hello there! I did some searching, but unfortunately, I couldn't locate any demo code in the repository. However, I am happy to share how I did it:

```python
import k_diffusion as K

# Load your model and sigmas from your configuration
model = ...  # Load your model
sigma_min, sigma_max = ...  # Load sigmas from config

# Load a sample datapoint from your dataset
x, y = next(iter(dataloader))

# Calculate the log likelihood using k-diffusion
log_likelihood = K.sampling.log_likelihood(model, x, sigma_min, sigma_max)

# Interpretation: higher values indicate higher likelihood,
# lower values indicate lower likelihood
```

Feel free to adjust the code to match your specific implementation and needs. Don't hesitate to ask if you have any further questions! Cheers!

Oh, I forgot to answer this!

Also, a footgun to be aware of: if you are evaluating log likelihood in a distributed environment, you should not use a DDP wrapper on the denoiser's inner model. I use an adaptive step size ODE solver to compute the log likelihood, and it will do different numbers of forward and backward passes per rank, which will cause hangs and other extremely bad behavior if there is a DDP wrapper. I spent most of yesterday debugging this: I was training a diffusion model and evaluating the log likelihood of the validation set every 10,000 steps. I hope this saves people some time...

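To sketch what that means in practice (a minimal example assuming a Hugging Face Accelerate setup like the one below; `inner_model`, `model_config`, `x`, and the sigmas stand in for your own objects):

```python
import accelerate
import k_diffusion as K

accelerator = accelerate.Accelerator()

# Wrap the inner model in DDP for training as usual...
inner_model = accelerator.prepare(inner_model)

# ...but build the denoiser for log likelihood evaluation around the
# *unwrapped* module, so the adaptive-step ODE solver's varying number
# of forward/backward passes per rank cannot desynchronize DDP.
eval_model = K.Denoiser(accelerator.unwrap_model(inner_model),
                        sigma_data=model_config['sigma_data'])
ll = K.sampling.log_likelihood(eval_model, x, sigma_min, sigma_max)
```
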
Could you provide an example? I get weird results when I load like this:

```python
import accelerate
import torch
import k_diffusion as K

config = K.config.load_config(open(args.config))
model_config = config['model']
size = model_config['input_size']
accelerator = accelerate.Accelerator()
device = accelerator.device
print('Using device:', device, flush=True)
inner_model = K.config.make_model(config).eval().requires_grad_(False).to(device)
inner_model.load_state_dict(torch.load(args.checkpoint, map_location='cpu')['model_ema'])
accelerator.print('Parameters:', K.utils.n_params(inner_model))
model = K.Denoiser(inner_model, sigma_data=model_config['sigma_data'])
sigma_min = model_config['sigma_min']
sigma_max = model_config['sigma_max']
```

What sort of weird results? I think that should work; the problem I had was triggered by calling it with a DDP wrapper around the inner model.

So I was getting values higher than expected (~7,500). If this is the log likelihood and not the NLL, it would lead to the following in my case:

```python
ll = K.sampling.log_likelihood(model, x, sigma_min, sigma_max)
torch.exp(ll)  # --> tensor([inf, inf, inf, inf, inf], device='cuda:0')
```

Should I rather provide `inner_model` instead of `model`?

I sum over the dimensions of each batch item when computing log likelihood, so it is the log likelihood of the entire example, not per dimension. It can therefore be very low or very high compared to what you would expect if it were per dimension.

But should I then divide by the number of dimensions?

Yes. If you want per-dimension log likelihoods, you need to divide the returned log likelihoods by the number of dimensions, which for 3×32×32 images is 3 * 32 * 32 = 3072.

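Concretely (a one-line sketch, reusing `ll` from the snippet above):

```python
ll_per_dim = ll / (3 * 32 * 32)  # per-dimension log likelihood in nats
```
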
Even then, I still get probabilities higher than 1. I feel like I am missing something.

Since diffusion models operate on continuous data, the probability of sampling any given data point is not really defined, so for "log likelihood" we are evaluating a log probability density function instead of returning a log probability. The pdf can take on values higher than 1 locally in a small region.

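A quick way to convince yourself that a density can exceed 1 (a generic PyTorch illustration, not specific to k-diffusion):

```python
import torch

# A density must integrate to 1, but a narrow distribution concentrates
# its mass in a tiny region, so the density itself can exceed 1 there.
dist = torch.distributions.Normal(loc=0.0, scale=0.1)
print(dist.log_prob(torch.tensor(0.0)).exp())  # ~3.99, a perfectly valid pdf value
```
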
Maybe the most useful value for you to compute is bits/dim? See, for example, the top answer to this Cross Validated question, which explains how to turn a continuous log likelihood (what k-diffusion gives you) into this value: https://stats.stackexchange.com/questions/423120/what-is-bits-per-dimension-bits-dim-exactly-in-pixel-cnn-papers

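For reference, the core of the conversion is just a change of units (a sketch assuming `ll` is the log likelihood of a whole example in nats; depending on how your data is scaled and dequantized you may need the extra correction terms the linked answer discusses):

```python
import math

def bits_per_dim(ll, num_dims):
    # bits/dim is conventionally reported for the *negative* log likelihood,
    # converted from nats to bits (divide by ln 2) and averaged per dimension.
    return -ll / (num_dims * math.log(2))

# e.g. for 3x32x32 images:
# bpd = bits_per_dim(ll, 3 * 32 * 32)
```
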
I will look into it. Thank you so much!