About reproducing #11
Hi! Sorry for replying so late. Yes, we did preprocess the dataset (as specified in the Experiments section) with the procedure described in Section 3.3/Appendix C/Algorithm 1. Here is the script we used to preprocess the datasets (we use a threshold of 0.95 for LHQ). Also note that in Table 1 we report results for the 256x256 resolution, not for 1024x1024 (at 1024x1024, our model achieved an FID/∞-FID of 10.11/10.53). It is strange that the initial FID was so high, but perhaps the "unconnectable" images (which our preprocessing removes) really do shift the distribution that much.
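The preprocessing script itself is not included in this thread, so the following is only a minimal sketch of what a threshold-based filtering step could look like. The precomputed `scores.json` file, the direction of the 0.95 cutoff (drop vs. keep above the threshold), and the directory layout are all illustrative assumptions, not the authors' actual procedure; Algorithm 1 in the paper defines the real criterion.

```python
# Hypothetical sketch of threshold-based dataset filtering (NOT the authors' script).
# Assumes some per-image score in [0, 1] has already been computed; which quantity is
# scored and whether images above or below 0.95 are dropped are assumptions here.
import json
import shutil
from pathlib import Path

SRC_DIR = Path("lhq_1024_jpg")        # assumed input directory
DST_DIR = Path("lhq_1024_filtered")   # assumed output directory
THRESHOLD = 0.95                      # value mentioned in the comment above

DST_DIR.mkdir(parents=True, exist_ok=True)

# Assumed format: {"image_name.jpg": score, ...} produced by a separate scoring step.
with open("scores.json") as f:
    scores = json.load(f)

kept, dropped = 0, 0
for img_path in sorted(SRC_DIR.glob("*.jpg")):
    score = scores.get(img_path.name)
    # Assumption: images scoring above the threshold are treated as "unconnectable"
    # and removed; the paper's preprocessing may use a different rule.
    if score is not None and score > THRESHOLD:
        dropped += 1
        continue
    shutil.copy(img_path, DST_DIR / img_path.name)
    kept += 1

print(f"kept {kept} images, dropped {dropped}")
```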
Hmm, the currently released dataset has its images sorted by their InceptionV3 likelihood (from least to most probable), so in the above script one should also shuffle the images. I will update it.
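Because the released images are stored in likelihood order rather than random order, any subset taken without shuffling will be biased toward one end of that ordering. A minimal sketch of shuffling with a fixed seed before taking a subset might look like the following; the directory name, seed, and subset size are illustrative assumptions.

```python
# Hypothetical sketch: shuffle likelihood-ordered images before taking any subset/split.
import random
from pathlib import Path

SRC_DIR = Path("lhq_1024_jpg")  # assumed directory with the released images

paths = sorted(SRC_DIR.glob("*.jpg"))  # start from a deterministic order
random.seed(42)                        # fixed seed so the shuffle is reproducible
random.shuffle(paths)

# From here on, take subsets/splits from `paths` rather than the original sorted order,
# e.g. the first N images for a training subset.
subset = paths[:10000]  # illustrative subset size
```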
Hi @universome, I am a bit confused by your Jupyter notebook for preprocessing.
By the way, if we take a subset of the data, how should it be divided into a training set and a test set? And for the results in your paper, is the FID computed only on the test set of that subset?
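The thread does not contain the authors' answer to this question. Purely as an illustration, one common way to hold out a test split after shuffling, and then compute FID with a standard tool, is sketched below; the split ratio, directory names, and the choice of the pytorch-fid package are assumptions, not the protocol used in the paper.

```python
# Hypothetical train/test split after shuffling (not the authors' protocol).
import random
import shutil
from pathlib import Path

SRC_DIR = Path("lhq_1024_filtered")  # assumed preprocessed images
TRAIN_DIR = Path("lhq_train")
TEST_DIR = Path("lhq_test")
TEST_FRACTION = 0.1                  # illustrative split ratio

paths = sorted(SRC_DIR.glob("*.jpg"))
random.seed(0)
random.shuffle(paths)

n_test = int(len(paths) * TEST_FRACTION)
for d in (TRAIN_DIR, TEST_DIR):
    d.mkdir(parents=True, exist_ok=True)
for p in paths[:n_test]:
    shutil.copy(p, TEST_DIR / p.name)
for p in paths[n_test:]:
    shutil.copy(p, TRAIN_DIR / p.name)

# FID between generated samples and the held-out set could then be computed with,
# for example, the pytorch-fid package (an assumed choice of tool):
#   python -m pytorch_fid generated_samples/ lhq_test/
```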
Hi,
Thank you for your great work! Have you cleaned the LHQ dataset? I used the lhq_1024_jpg dataset to reproduce the results, and the FID only reaches 9. I tried to fine-tune from the open-source model, but the initial FID was as high as 24.
Best,
JiKun