Question about the implementation of the inception score #39
Comments
OK, I know that the Inception model requires the batch size to be 1 if we just run
Without the bias term (as in the repo), I get a 10.954855 ± 0.4320521 inception score on CIFAR-10 (using test images). With the bias term added, I get 11.228305 ± 0.45700935. So I think the bias term should be added. It matters.
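A quick sketch of why this matters: the score is computed from the softmax predictions, and every prediction changes once the bias is dropped from the final layer. The features, weights, and bias below are random stand-ins, not the real Inception parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
feats = rng.standard_normal((256, 2048))      # stand-in for pool3 features
w = rng.standard_normal((2048, 1008)) * 0.01  # stand-in for softmax weights
b = rng.standard_normal(1008) * 0.1           # stand-in for the softmax bias

p_no_bias = softmax(feats @ w)      # bias dropped, as in the repo
p_bias = softmax(feats @ w + b)     # bias term restored

print(np.abs(p_bias - p_no_bias).max() > 0)  # prints True
```

Since the bias is not constant across classes, it does not cancel in the softmax, so the two prediction sets (and any score computed from them) genuinely differ.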
I also found this issue. I actually tested the inception score with and without the bias term across a set of experiments: the basic finding is that the score with the bias term is consistently higher than the one without it, with a relatively fixed gap.
It does matter a lot, but the official implementation does not take it into account, so most other reported inception scores probably do not either. I chose to report the score without the bias term to provide a fair comparison.
I think it is indeed important to know whether the original author deliberately kept or dropped the bias term.
--
Zhiming Zhou
(replying by email to youkaichao's comment above, Thu, Jun 28, 2018 at 4:20 PM)
@ZhimingZhou @TimSalimans Maybe the author just forgot the bias term?
Hello, I am currently working on a GAN-related topic that involves calculating the inception score. When I run the author's source code on the real CIFAR-10 data, the inception score is only (5.5425735, 0.059681736) on the train data and (5.5588408, 0.17018904) on the test data. I also evaluated with a PyTorch version of the code, and the result is above 9.5. This problem has been bothering me for a few days; I have checked a lot of information online, including the issues here, and found no mention of the relevant details, so I am very confused. How do you get the result of 11.24 reported in the paper? Can you share your code or give me a hint? Thank you very much!
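For context, the inception score is exp(E_x[KL(p(y|x) ‖ p(y))]), computed per split and reported as mean ± std. A minimal NumPy sketch of that formula, with synthetic toy predictions (the split count of 10 follows the usual CIFAR-10 protocol):

```python
import numpy as np

def inception_score(preds, splits=10):
    """exp(mean KL(p(y|x) || p(y))) per split; returns (mean, std) over splits.

    preds: (N, num_classes) array of softmax outputs, rows summing to 1.
    """
    scores = []
    for part in np.array_split(preds, splits):
        p_y = part.mean(axis=0, keepdims=True)  # marginal p(y) over the split
        kl = part * (np.log(part + 1e-12) - np.log(p_y + 1e-12))
        scores.append(np.exp(kl.sum(axis=1).mean()))
    return float(np.mean(scores)), float(np.std(scores))

# Toy check: perfectly confident predictions spread uniformly over 10
# classes give a score equal to the number of classes.
preds = np.eye(10)[np.tile(np.arange(10), 100)]
mean, std = inception_score(preds, splits=10)
print(round(mean, 4))  # prints 10.0
```

If a reimplementation gives ~5.5 on real CIFAR-10, a common culprit is the predictions being taken from the wrong tensor or preprocessed differently (e.g. wrong pixel range), rather than the KL formula itself.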
I used your method and got the code from that website, but it still did not work, and I get this error:
These lines puzzle me. I'm wondering, why not just use
sess.graph.get_tensor_by_name('softmax:0')
? Why bother to manually do the matrix multiplication and apply softmax? Also, why not add the bias term?