question about test #14
When I implemented the code, I assumed the input size is 256*256. It is possible that some operation in the contextual attention layer makes it compatible only with 256*256 images. Could you provide more details from the log to help me find the problem quickly? I can't run the code at this time.
It seems to be caused by some different padding in the contextual attention layer. You can compare the feature map shape of each layer with Yu's version to find the layer, or, as you have done, change the input shape to a compatible one. I am sorry that I don't have time to check it at this time.
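One rough way to do that shape comparison is to register forward hooks that print every layer's output shape. This is only a hypothetical sketch, not code from the repo; the helper name and the `netG` variable in the usage note are assumptions:

```python
import torch

def print_output_shapes(model):
    """Register hooks that print each submodule's output shape on forward."""
    handles = []
    for name, module in model.named_modules():
        if not name:  # skip the root module itself
            continue
        def hook(mod, inputs, output, name=name):
            if isinstance(output, torch.Tensor):
                print(f"{name}: {tuple(output.shape)}")
        handles.append(module.register_forward_hook(hook))
    return handles  # call handle.remove() on each when done
```

Usage sketch: call `print_output_shapes(netG)` before a forward pass, run the same input through both implementations, and diff the printed shapes to pinpoint the layer where the padding differs.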
@kinfeparty Thanks for your question. Yes, the discriminators make it compatible only with fixed image sizes, since their last layer is fully connected. You may remove that layer and compute the mean as the output to make it compatible with any input size. By the way, there are two discriminators: a global one that takes the whole image as input and a local one that takes the mask region as input.
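A minimal sketch of that change, assuming a simple conv discriminator; the architecture below is illustrative and does not necessarily match the repo's actual discriminator classes:

```python
import torch.nn as nn

class SizeAgnosticDiscriminator(nn.Module):
    """Illustrative discriminator: conv backbone + spatial mean instead of nn.Linear."""
    def __init__(self, in_channels=3, ndf=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, ndf, 5, stride=2, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 5, stride=2, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 5, stride=2, padding=2),
        )

    def forward(self, x):
        out = self.features(x)       # (N, 1, H', W') for any input H, W
        return out.mean(dim=[2, 3])  # spatial mean replaces the fully-connected layer
```

Because the output is a mean over the spatial dimensions rather than a flattened feature vector, the weight shapes no longer depend on the input resolution.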
@daa233 Oh, sorry, I forgot about the nn.Linear layer! I have just started learning PyTorch. Thanks for your reply.
@daa233 Hello, sorry to bother you again. Thanks to your advice, I can now start training with my dataset.
I have no idea about the problem, but I think you can check the network outputs first, since the results above are copy-pasted from the network outputs. Refer to here.
It's strange. I found that the output mask has a boundary, but the input mask doesn't. When I use cv2 to read the mask, the boundary disappears, but I don't know why the masks in the examples don't have this boundary when I print them.
Adding binarization after mask resizing reduces the "gray edge" artifact...
https://github.com/daa233/generative-inpainting-pytorch/blob/master/test_single.py?plain=1#L64
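A minimal sketch of that binarization step with OpenCV; the filename, target size, and threshold value here are placeholders, not the repo's actual settings:

```python
import cv2

# Read the mask as grayscale; bilinear resizing blurs the mask boundary
# into intermediate gray values.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
mask = cv2.resize(mask, (256, 256))  # default bilinear interpolation

# Binarize after resizing so every pixel is strictly 0 or 255,
# removing the "gray edge" artifact around the mask boundary.
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
```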
Hello, thanks for providing the code for us.
I read the code and have some questions.
Is your code only suitable for 256*256 images?
I changed the image_size in the config, but it broke in the contextual attention layer:
RuntimeError: Given transposed=1, weight of size [7238, 128, 4, 4], expected input[1, 7038, 46, 153] to have 7238 channels, but got 7038 channels instead.
Yu's code can handle images of any size, so could you tell me what the difference is between yours and Yu's?
Thanks!