
Generative loss stuck #26

Open
tmralmeida opened this issue Jan 12, 2022 · 0 comments


@tmralmeida

Hi,

While playing with your code for the Social GAN model, I found something that I couldn't understand.

For example, when running:

python -m trajnetbaselines.sgan.trainer --k 1

This means we are running a vanilla GAN in which the generator outputs a single sample (the most common GAN setting, without the L2 loss). In this setting, the GAN loss stays at roughly 1.38 throughout training. Thus, the vanilla GAN (with only the adversarial loss) is not capable of modeling the data.
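For reference, 1.38 is what one would expect if the adversarial loss is the standard binary cross-entropy summed over real and fake samples and the discriminator settles at predicting 0.5 for everything (my reading of the value; the exact loss formulation in the trainer may differ):

```python
import math

# If the discriminator predicts p = 0.5 for every trajectory, the summed
# binary cross-entropy over one real and one fake sample is
# -log(0.5) - log(0.5) = 2 * ln(2) ≈ 1.386, i.e. the "stuck" value above.
p = 0.5
loss_real = -math.log(p)       # real sample, target 1
loss_fake = -math.log(1 - p)   # fake sample, target 0
print(loss_real + loss_fake)   # 1.3862943611198906
```

In other words, a constant 1.38 suggests the discriminator never moves away from chance, so the generator receives no useful adversarial signal.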

My question is: to what extent are we actually taking advantage of the GAN framework? Under the conditions above, it seems we are effectively only training an LSTM predictor.
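For context on what `--k` changes, here is a minimal sketch of the best-of-k L2 ("variety") loss described in the Social GAN paper; the function name and tensor shapes are illustrative, not the actual trainer code:

```python
import torch

def variety_l2_loss(pred_samples: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Best-of-k L2 ("variety") loss, sketched with illustrative shapes.

    pred_samples: (k, seq_len, 2) -- k sampled future trajectories
    gt:           (seq_len, 2)    -- ground-truth future trajectory
    """
    # L2 error of each of the k samples against the ground truth ...
    errors = ((pred_samples - gt.unsqueeze(0)) ** 2).sum(dim=(1, 2))  # shape (k,)
    # ... but only the best sample is penalised, which is what encourages
    # diversity across samples. With k = 1 this reduces to a plain L2
    # regression loss on the single generated sample.
    return errors.min()
```

With `--k 1`, this term (if enabled) is just L2 regression, so the only thing distinguishing the model from a plain LSTM predictor would be the adversarial loss, which appears to be stuck as described above.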
