Mapper networks in RadialGAN #305

Open
MargerieHD opened this issue Nov 26, 2024 · 0 comments

Question

RadialGAN mapper functions: is there a difference between the paper and the code?

Further Information

In the paper, the mapper functions F are described as mapping from a domain D_i to the latent space Z. However, in the code (the `_train_epoch_mapper` function), the mapper loss is the MSE between `real_X` and `mappers[domain](generators[other_domain](noise))`. Does this mean that the mapper functions map directly from one domain to the other?

```python
def _train_epoch_mapper(
    self,
    domain: int,
    X: torch.Tensor,
) -> float:
    batch_size = len(X)
    if batch_size == 0:
        return 0

    # Update the M network
    self.mappers[domain].optimizer.zero_grad()

    real_X = X.to(self.device)
    noise = torch.randn(batch_size, self.n_units_latent, device=self.device)

    errs = []
    for other_domain in self.domains:
        if other_domain == domain:
            continue
        fake = self.generators[other_domain](noise)  # generate fake data for <other_domain>
        fake = self.mappers[domain](fake)  # remap data to domain <domain>
        # Calculate M's loss based on this output
        errM = nn.MSELoss()(fake, real_X)
        errs.append(errM)

    # Calculate gradients for M
    errM = 0.1 * torch.sqrt(torch.stack(errs)).mean()
    errM.backward()
```
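For reference, the aggregated loss in the last two lines of the snippet can be transcribed as follows (my notation, not the paper's: $M_i$ is `mappers[domain]`, $G_j$ is `generators[other_domain]`, $x_i$ is `real_X`, and $K$ is the number of domains):

$$\mathcal{L}_{M_i} = 0.1 \cdot \frac{1}{K - 1} \sum_{j \neq i} \sqrt{\mathrm{MSE}\left(M_i(G_j(z)),\, x_i\right)}$$

So each term compares $M_i$ applied to a sample from domain $j$'s output space against real data from domain $i$.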

Image from the article

Here we see that F maps from the data domain to the latent space.

[image: architecture figure from the RadialGAN paper]
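To make the contrast concrete, here is a minimal, self-contained sketch. The `nn.Linear` stand-ins, the names `F_i`, `G_j`, `M_i`, and the dimensions are all illustrative assumptions, not the synthcity implementation:

```python
import torch
import torch.nn as nn

latent_dim, dim_i, dim_j = 8, 5, 7  # illustrative feature sizes

# Paper formulation: F_i maps domain i into the shared latent space Z,
# so a cross-domain translation would be G_i(F_j(x_j)).
F_i = nn.Linear(dim_i, latent_dim)  # X_i -> Z

# Pattern implied by the training loss in _train_epoch_mapper:
# mappers[domain] consumes generators[other_domain](noise), which lives
# in domain j's feature space, and its output is compared (via MSE)
# against real_X from domain i, i.e. a direct X_j -> X_i map.
G_j = nn.Linear(latent_dim, dim_j)  # stand-in for generators[other_domain]
M_i = nn.Linear(dim_j, dim_i)       # stand-in for mappers[domain]

z = torch.randn(4, latent_dim)
fake_j = G_j(z)         # fake sample in domain j's space
remapped = M_i(fake_j)  # what the MSE compares against real data of domain i
print(remapped.shape)   # torch.Size([4, 5]): domain i's space, not Z

x_i = torch.randn(4, dim_i)
print(F_i(x_i).shape)   # torch.Size([4, 8]): the latent space, as in the paper
```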
