About log operator in LMC equation #5

Open
zwxdxcm opened this issue Sep 16, 2024 · 3 comments

Comments

zwxdxcm commented Sep 16, 2024

Hi,

Thanks for your contribution.
I am wondering why there is no log operator in the codebase.
Here is the code in lmc.py:

# in-place: net_grad <- a * net_grad + b * noise;  prev_samples <- prev_samples + net_grad
net_grad.mul_(self.a).add_(self.noise, alpha=self.b)
self.prev_samples.add_(net_grad)

But equation (10) in the paper is:
[equation (10), included as an image in the original issue: the LMC update, which contains a ∇ log Q term]
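For reference (my transcription; the exact notation in the paper may differ), the LMC update in equation (10) has the form

x_{t+1} = x_t + a \nabla_x \log Q(x_t) + b \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I)

whereas the code above applies a and b to net_grad and self.noise with no log anywhere.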

@shakibakh
Collaborator

Hello @zwxdxcm,
Thank you for your interest in this work.
Great question. The gradient of log Q is equal to the gradient of Q divided by Q. I perform this division in line 280 of "examples/train_ngp_nerf_prop.py".
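As a toy sanity check (not code from this repo, just a generic PyTorch example), autograd gives the same thing either way:

import torch

# Toy check that grad(log Q(x)) equals grad(Q(x)) / Q(x).
# Q here is an arbitrary smooth positive function, not the Q used in the paper.
x = torch.randn(8, requires_grad=True)

def Q(x):
    return torch.exp(-(x ** 2)).sum()

grad_log_q, = torch.autograd.grad(torch.log(Q(x)), x)
grad_q, = torch.autograd.grad(Q(x), x)

print(torch.allclose(grad_log_q, grad_q / Q(x).detach()))  # True, up to float precision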


zwxdxcm commented Sep 28, 2024

Thank you! But I still have a question. In line 280, the code is:

net_grad = data['points_2d'].grad.detach()
loss_per_pix = loss_per_pix.detach()
net_grad = net_grad / ((grad_scaler._scale * (correction * loss_per_pix).unsqueeze(1))+ torch.finfo(net_grad.dtype).eps)

Here, ignoring grad_scaler._scale and the eps term: correction = 1/Q(x)^\alpha, and net_grad is the gradient of the total loss. How does this equal grad(Q(x)) / Q(x)? It seems to be grad(L) / [(1/Q(x)^\alpha) * L]. Is there anything I missed? Thank you again!
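Writing out what (as I read it) the division computes, ignoring the gradient scaler and the eps:

\text{net\_grad} \;\leftarrow\; \frac{\nabla_x L(x)}{\mathrm{correction}(x)\, L(x)} \;=\; \frac{\nabla_x L(x)}{Q(x)^{-\alpha}\, L(x)}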

zwxdxcm commented Sep 28, 2024

Here is my understanding:

  1. Approximate correction(x) = Q(x),
     thus ▽log(Q(x)) = ▽Q(x)/Q(x) = ▽correction(x)/correction(x).
     I don't understand why loss_per_pix is multiplied in again, given that loss_per_pix.mul_(correction) is already applied in line 267.
