Hi @Avhirup, thank you very much for your project. I am new to the task of image inpainting, and I noticed that most papers on this task adopt an additional loss, called a perceptual loss, computed as the difference between the reconstructed image and the original image in the feature space of a pre-trained CNN (e.g., VGG16). Could you please show me how to add such a loss to your model, or recommend another implementation you know of for this task? Thank you very much~~
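The perceptual loss described above compares feature activations rather than raw pixels. A minimal numpy sketch of the idea is below; the fixed random projection `toy_features` is only a stand-in for a real feature extractor, since in practice you would take intermediate activations from a frozen pretrained CNN such as VGG16:

```python
import numpy as np

def perceptual_loss(reconstructed, original, feature_fn):
    """Mean squared error between feature representations of two images."""
    f_rec = feature_fn(reconstructed)
    f_org = feature_fn(original)
    return np.mean((f_rec - f_org) ** 2)

# Stand-in feature extractor: a fixed random linear map over flattened pixels.
# A real implementation would pass the images through a frozen pretrained CNN
# (e.g. VGG16) and compare intermediate feature maps instead.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32 * 32 * 3))

def toy_features(img):
    return W @ img.reshape(-1)

gt = rng.random((32, 32, 3))
recon = gt + 0.1 * rng.standard_normal(gt.shape)  # imperfect reconstruction

loss = perceptual_loss(recon, gt, toy_features)
```

During training this term is typically added to the adversarial and reconstruction losses with a weighting coefficient, and the feature extractor's weights are kept frozen.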
Hey @RobinCSIRO, unfortunately I have not read any paper that follows the method you are describing. The current SOTA uses two discriminators, trained to minimize an adversarial loss. A weighted reconstruction loss is also applied to the generator (I guess this is similar to what you are talking about). In its simplest form, the reconstruction loss could just be the mean of the difference between your GT and the generator's output.
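That weighted reconstruction loss can be sketched as follows. This is only an illustration, not the formulation from any particular paper; the binary hole mask and the `hole_weight` hyperparameter are assumptions:

```python
import numpy as np

def weighted_reconstruction_loss(output, target, mask, hole_weight=6.0):
    """Mean absolute difference between generator output and ground truth,
    weighting pixels inside the masked (inpainted) region more heavily.
    hole_weight is an illustrative value, not taken from a published paper."""
    weights = np.where(mask, hole_weight, 1.0)
    return np.mean(weights * np.abs(output - target))

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))                         # ground-truth image
output = target + 0.05 * rng.standard_normal(target.shape)  # generator output
mask = np.zeros((64, 64, 3), dtype=bool)
mask[16:48, 16:48, :] = True  # square "hole" region to be inpainted

loss = weighted_reconstruction_loss(output, target, mask)
```

The unweighted case (`hole_weight=1.0`) reduces to a plain L1 loss over the whole image.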
Try looking at this paper, http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/
Also, it would be great if you could get back to me with the paper you are talking about.