I just ran a demo program in which I would like to fit these data:
The blue points are the real data (x) and the red points are z. The prior is N(mean of x, 0.01). Note that I set the variance arbitrarily; 0.01 may be larger or smaller than the variance of x. I applied the default flows model from your ipynb, but the loss went from 800000 to -30000 and is still decreasing. My question is: how can the loss become negative?
Also, when I run MAF/IAF, the loss becomes negative as well.
Hi @jlian2, I've been playing around with the code in the repo as well.
I've also observed negative loss values. Since the loss is the negative log-likelihood, a negative loss just means the likelihood is enormous — the fitted density has become very peaked. This may correspond to overfitting; I've found that you can often get good results by stopping the training early.
It's hard to get a sense of the dataset you're working with. Is it essentially a Gaussian with outliers? It wouldn't surprise me if flows with a Gaussian base distribution handled outliers poorly. Maybe this is a direction for future research?
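To see why a negative loss is mathematically unremarkable here: for continuous data the loss is a negative log-*density*, and densities can exceed 1. A Gaussian with variance below 1/(2π) ≈ 0.16 already has a log-density greater than 0 at its mean, so the per-point NLL is negative. A minimal sketch (plain Python, no flow library assumed):

```python
import math

def gaussian_logpdf(x, mu, var):
    """Log-density of N(mu, var) evaluated at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# With the variance 0.01 mentioned above, the log-density at the mean
# is -0.5 * log(2*pi*0.01) ≈ 1.38 > 0, so the NLL there is ≈ -1.38.
logp_at_mean = gaussian_logpdf(0.0, 0.0, 0.01)
nll_at_mean = -logp_at_mean
print(logp_at_mean, nll_at_mean)
```

So as the flow concentrates mass ever more tightly on the training points, the average NLL keeps dropping below zero — which is exactly the overfitting behavior described above.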