How to split differentiable render results of an "aov" integrator?
I have a scene with an aov integrator defined as follows:
```xml
<integrator type="aov">
    <string name="aovs" value="depth.y:depth"/>
    <integrator type="path">
        <integer name="max_depth" value="$max_depth"/>
    </integrator>
</integrator>
```
Then I render the scene following the tutorial https://mitsuba2.readthedocs.io/en/latest/src/inverse_rendering/diff_render.html:
```python
image = render(scene, optimizer=opt, unbiased=True, spp=1)
```
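For context, my setup roughly follows that tutorial; the variant name, file path, and optimizer settings below are just placeholders:

```python
import enoki as ek
import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')   # placeholder variant name

from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse
from mitsuba.python.autodiff import render, Adam

# Load the scene shown above; $max_depth is supplied as a keyword
# argument (placeholder value)
scene = load_file('scene.xml', max_depth=4)

# Collect the differentiable scene parameters and set up the optimizer
# (optionally restrict them with params.keep([...]) as in the tutorial)
params = traverse(scene)
opt = Adam(params, lr=0.05)

image = render(scene, optimizer=opt, unbiased=True, spp=1)
```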
I want to optimize the depth and RGB channels separately in a gradient-based manner, so I tried:
```python
render_list = image.split()
```
but it gives me the following error:
```
AttributeError: 'enoki.cuda_autodiff.Float32' object has no attribute 'split'
```
So what I want to ask is: what is the appropriate way to split the aov rendering results while keeping them differentiable? Thank you in advance!
Hi @Agent-INF, unfortunately this is not currently supported in the Python render() function, but it will be in the upcoming release of the codebase.
You can take a look at the internals of the render() function and try to intercept the aov Float channels there.
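As a rough illustration of what that could look like downstream: once you have the full multi-channel buffer as a flat, differentiable Float (pixel-interleaved), you can slice channels out with a gather instead of a round-trip through numpy. The `extract_channel` helper, the channel layout, and the exact `ek.gather` / `ek.arange` signatures below are assumptions about this enoki build, not something the current render() guarantees:

```python
import enoki as ek
from mitsuba.core import Float, UInt32

def extract_channel(image, channel, n_channels, n_pixels):
    """Differentiably slice one channel out of a flat, pixel-interleaved buffer.

    Assumes a layout like [R, G, B, depth, R, G, B, depth, ...] and that
    ek.arange(Type, n) / ek.gather(Type, source, index) are available in
    this enoki version. The gather keeps the result attached to the
    autodiff graph.
    """
    idx = ek.arange(UInt32, n_pixels) * n_channels + channel
    return ek.gather(Float, image, idx)

# 'image' is the flat multi-channel buffer, however you end up obtaining it
crop_size  = scene.sensors()[0].film().crop_size()
n_pixels   = crop_size[0] * crop_size[1]
n_channels = 4   # 3 RGB channels + the single depth AOV in this scene

rgb   = [extract_channel(image, c, n_channels, n_pixels) for c in range(3)]
depth = extract_channel(image, 3, n_channels, n_pixels)
```

You could then define separate objectives on `rgb` and `depth` and call ek.backward() on whichever one you want to optimize.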
Note also that differentiating a depth buffer requires visibility discontinuities to be handled properly; see e.g. PR #157.
Good luck!
Thank you very much for your reply! Looking forward to the upcoming release!