Hi @songbo0925
Thanks for your interest in our paper. The weights are shared between dis_a and dis_b, and likewise between gen_a and gen_b.
Therefore, only one optimizer is needed.
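If the sharing is implemented by binding both names to the same module object (as the assignment around lines 180-181 of trainer.py suggests), then a single optimizer built from gen_a.parameters() already covers gen_b. A minimal PyTorch sketch of that pattern, with a toy generator standing in for the real DG-Net one:

```python
import torch
import torch.nn as nn

# Toy generator as a stand-in for the actual DG-Net generator.
gen_a = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
gen_b = gen_a  # weight sharing: gen_b is the very same module object

# One optimizer is enough; gen_b.parameters() are the same tensors.
opt = torch.optim.SGD(gen_a.parameters(), lr=0.1)

x = torch.randn(4, 8)
loss = (gen_a(x) - x).pow(2).mean() + (gen_b(x) - x).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()

# Both names still refer to the same, updated parameters.
assert gen_b is gen_a
assert all(torch.equal(pa, pb)
           for pa, pb in zip(gen_a.parameters(), gen_b.parameters()))
```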
Hi @layumi
Thanks for your wonderful work and reply.
In lines 180-181 of trainer.py, gen_b is set to gen_a only once. But if only the parameters of gen_a are updated in each iteration, then gen_a and gen_b could end up with different parameters in the next forward pass. So how is the weight sharing actually achieved? The same question applies to dis_a and dis_b. Maybe I overlooked some code; please advise.
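For comparison, here is a small self-contained check (the Trainer class below is made up for illustration): if gen_b is a plain alias of gen_a, the parameters stay identical after an optimizer step, whereas a deep copy would drift apart as described above.

```python
import copy
import torch
import torch.nn as nn

class Trainer(nn.Module):
    """Hypothetical stand-in for DG-Net's trainer, for illustration only."""
    def __init__(self, share):
        super().__init__()
        self.gen_a = nn.Linear(4, 4)
        # Plain assignment shares the module; deepcopy creates independent weights.
        self.gen_b = self.gen_a if share else copy.deepcopy(self.gen_a)

for share in (True, False):
    t = Trainer(share)
    opt = torch.optim.SGD(t.gen_a.parameters(), lr=0.1)  # only gen_a's params
    loss = t.gen_a(torch.randn(2, 4)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    in_sync = torch.equal(t.gen_a.weight, t.gen_b.weight)
    print(f"share={share}: gen_a and gen_b identical after step -> {in_sync}")
# Expected: share=True -> True, share=False -> False
```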
In trainer.py, why do you only update the parameters of dis_a and gen_a, and ignore the parameters of dis_b and gen_b?
DG-Net/trainer.py, lines 242 to 248 in a067be1