Thank you for sharing this wonderful code first!
I have a small question about the discriminator. I find the adversarial loss of AdaptSegNet very unstable because of the global alignment on the segmentation output. I added non-local attention to the discriminator, but the performance dropped dramatically. For your discriminator, you say "we follow the PatchGAN and deploy the multi-scale discriminator model". What was the consideration behind this strategy, and did you run an experiment to see how much performance improvement this approach brings?
Hi @gong-lei
Thanks for your attention. I did not compare the multi-scale discriminator with a single-scale discriminator.
The main reason is my experience in my previous CVPR'19 work (https://github.com/layumi/DGNet).
The multi-scale discriminator works well in image generation, so I continued to use it here.
Besides, the non-local operation is hard to tune. I also tried it, but the performance was not good.
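For reference, here is a minimal sketch of what a multi-scale PatchGAN-style discriminator could look like, assuming PyTorch. The layer widths, number of scales, and the 19-class Cityscapes input are illustrative assumptions, not the exact configuration used in this repo or in DGNet.

```python
import torch
import torch.nn as nn


class PatchDiscriminator(nn.Module):
    """A single PatchGAN-style discriminator: outputs a map of real/fake scores per patch."""
    def __init__(self, in_channels, ndf=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_channels, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, 1, 4, stride=1, padding=1),  # per-patch score map, no global pooling
        )

    def forward(self, x):
        return self.model(x)


class MultiScaleDiscriminator(nn.Module):
    """Runs identical PatchGAN discriminators on progressively downsampled inputs."""
    def __init__(self, in_channels, num_scales=2):
        super().__init__()
        self.discriminators = nn.ModuleList(
            [PatchDiscriminator(in_channels) for _ in range(num_scales)]
        )
        self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x):
        outputs = []
        for d in self.discriminators:
            outputs.append(d(x))
            x = self.downsample(x)  # the next discriminator sees a coarser view of the same input
        return outputs


# Usage: feed the segmentation probability map (e.g. 19 Cityscapes classes, an assumed setting).
if __name__ == "__main__":
    disc = MultiScaleDiscriminator(in_channels=19, num_scales=2)
    seg_prob = torch.softmax(torch.randn(1, 19, 128, 256), dim=1)
    for out in disc(seg_prob):
        print(out.shape)  # one patch-level score map per scale
```

The intuition behind aligning at multiple scales is that the coarser discriminator focuses on larger layout structure while the finer one focuses on local patches, which tends to give a smoother adversarial signal than a single global discriminator.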