[🐛 bug report] Vertex Positions Loss Backpropagating Incorrectly #348
Comments
Hi @nauman-tintash,

This is strange indeed. At iteration 1 the loss you are getting is very small, and I am surprised that the first gradient step causes such a difference at iteration 2. Could it be that your learning rate is too high?

It would also be useful to verify that the two renders are indeed identical (e.g. by using the same random numbers); otherwise a noise difference in the primal images would cause non-zero gradients. You could for instance run the following:

```python
scene = load_file('bagel/scene.xml')
image_ref = render(scene, spp=2)
ob_val = ek.hsum(ek.sqr(image_ref - image_ref)) / len(image_ref)
```

to check whether the objective is exactly zero in that case. Another thing that comes to my mind is that the [...]. One last thing: in the provided optimization examples using the [...].
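For completeness, a self-contained version of that sanity check could look roughly like the following. This is only a sketch: the imports, the `gpu_autodiff_rgb` variant selection, and the `bagel/scene.xml` path are assumptions based on the snippet above and the standard Mitsuba 2 Python API.

```python
import enoki as ek
import mitsuba

# Assumption: the differentiable GPU variant is available, as listed in the
# reporter's system configuration.
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core.xml import load_file
from mitsuba.python.autodiff import render

# Load the scene and render a single image.
scene = load_file('bagel/scene.xml')
image_ref = render(scene, spp=2)

# Comparing the image against itself must yield a loss of exactly zero.
# If a second, independently rendered image were used here instead, Monte
# Carlo noise alone would already produce a small non-zero value.
ob_val = ek.hsum(ek.sqr(image_ref - image_ref)) / len(image_ref)
print(ob_val)
```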
I have the same system configuration as @nauman-tintash, but I get this error:
and my code is:
You will most likely need to use the [...].
This is my new code:
Does it work with this new code?
No! :(
The log:
It looks like the rendered image doesn't depend on the parameters that you are trying to optimize at all. Please check that the geometry [...]. Also, you could use [...].
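To make that dependency check concrete, one possible way to verify that gradients actually reach the vertex positions is sketched below. This is only an illustration: the scene path and the `my_mesh.vertex_positions_buf` key are placeholders (the real key depends on the shape's id, which `traverse` prints), and the objective is a dummy one.

```python
import enoki as ek
import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse
from mitsuba.python.autodiff import render

scene = load_file('scene.xml')   # placeholder path
params = traverse(scene)
print(params)                    # lists all differentiable scene parameters

# Keep only the vertex position buffer of the mesh to optimize
# ('my_mesh' is a placeholder id; use one of the keys printed above).
key = 'my_mesh.vertex_positions_buf'
params.keep([key])
ek.set_requires_gradient(params[key])

# Render and backpropagate a dummy objective. A zero (or unattached)
# gradient here means the rendered image does not depend on this parameter.
image = render(scene, spp=2)
loss = ek.hsum(ek.sqr(image)) / len(image)
ek.backward(loss)
print(ek.hsum(ek.abs(ek.gradient(params[key]))))
```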
You mean the [...]?
Yes.
I gave it a try; the result is:
My code is:
Could you provide me with an example of geometry optimization?
Please take a look at this one.
It seems that the folder [...].
Hello @HsiangYangChu, @Speierers, have you solved this problem? I am running into exactly the same issue. Thank you!
Summary
When I use the vertex positions buffer as an optimization parameter, the loss is apparently being backpropagated incorrectly. Even if the reference image and the newly rendered image are exactly the same, the vertex positions keep getting updated over a few iterations and the loss keeps increasing.
System configuration
scalar_rgb
gpu_autodiff_rgb
Description
Even if my source and target scenes are exactly the same, the loss keeps increasing as long as I keep the vertex positions buffer as a differentiable parameter. Below is a sample of the losses and corresponding images for the first few iterations of one example.
Reference Render:
Iteration 1: (Loss = 4.21295e-16)
Iteration 2: (Loss = 0.000416789)
Iteration 3: (Loss = 0.0170517)
Following is my relevant code snippet:
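The actual snippet did not survive in this copy of the issue. As a stand-in, the loop below sketches the kind of setup being described; it follows the general pattern of the Mitsuba 2 inverse-rendering examples rather than the reporter's original code, and the scene path, parameter key, learning rate, and iteration count are all placeholders.

```python
import enoki as ek
import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse
from mitsuba.python.autodiff import render, Adam

scene = load_file('scene.xml')            # placeholder path
key = 'my_mesh.vertex_positions_buf'      # placeholder parameter key

# Reference image rendered from the unmodified scene.
image_ref = render(scene, spp=8)

# Expose only the vertex position buffer as a differentiable parameter.
params = traverse(scene)
params.keep([key])
opt = Adam(params, lr=0.01)               # placeholder learning rate

for it in range(100):
    # Render with the current vertex positions.
    image = render(scene, optimizer=opt, spp=8)

    # L2 image loss against the reference.
    loss = ek.hsum(ek.sqr(image - image_ref)) / len(image)

    # Backpropagate to the vertex positions and take a gradient step.
    ek.backward(loss)
    opt.step()
    print('Iteration %03i: loss = %g' % (it, loss[0]))
```

As the discussion above suggests, if the reference and the optimized scene really are identical and the two renders use the same random numbers, the gradient of this loss should be numerically zero and the vertex positions should not drift; a steadily growing loss as reported points either at noise differences between the two renders or at a learning rate that is too large for the vertex-position parametrization.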