Hey! I've been trying to use Mitsuba 3 to train 3D models, starting from the tutorials, in particular: https://mitsuba.readthedocs.io/en/latest/src/inverse_rendering/radiance_field_reconstruction.html#NeRF-like-PRB-integrator

However, the integrator always seems to return 1 as the alpha value when the film is set to "rgba". Is there a way to return some sort of transparency? Typically, I'd like to add a background image behind the render.

I see that `sample` is only supposed to return an `mi.Spectrum`, which doesn't seem to support RGBA, no?
For the reference images:
During rendering, the `SamplingIntegrator.sample` method is used. As stated in the documentation, the second item of its return value is used as the alpha value. So if you want a background image, you can replace the existing envmap in the scene description. The envmap isn't visible in the tutorial because the path integrator is defined with `<boolean name="hide_emitters" value="true"/>`; setting that value to `false` should do the trick.
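To illustrate, here is a sketch of the relevant scene fragments (the `background.exr` filename is a placeholder; adapt it to your own environment map):

```xml
<!-- Make emitters visible so the envmap shows up behind the object -->
<integrator type="path">
    <boolean name="hide_emitters" value="false"/>
</integrator>

<!-- The environment map that will serve as the background image -->
<emitter type="envmap">
    <string name="filename" value="background.exr"/>
</emitter>
```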
For the RadianceFieldPRB rendered images:
The `ADIntegrator.sample` method defines the alpha channel: the second item of its return value (when not in primal rendering), `valid`, fills the alpha channel. In `RadianceFieldPRB` this value is always set to `True`, so you can modify that line to fit your needs.
Actually, I'm trying to return the transmittance (see $T$ in the image below; $c$ is the color and $\sigma$ is the density), which, as I understand it, is what alpha is supposed to be. The advantage of exposing $T$ is that I could differentiate with respect to it, whereas the current alpha isn't differentiable, right?
Should I just build a second integrator to obtain that value, where `eval_emission` always returns 1? I can't seem to build an integrator that returns anything other than an `mi.Spectrum`, which should be an `mi.Color3f` as in the tutorial example.
Hi,
I happened to come across this thread. I think Nicolas was trying to say that the alpha channel in image blocks is quite unrelated to the transmittance in the radiance field model. Actually, the alpha channel is a dummy value in most cases.
Back to your task of "running differentiation on transmittance". Transmittance is a value defined for a ray segment: an integral between two points along the ray. The color returned by the radiance field model is essentially $L = L_{e1}T_1 + L_{e2}T_2 + L_{e3}T_3 + \cdots$, and I am not sure which $T$ we should return here.
If we write a custom integrator that returns your desired transmittance $T$ (say, the transmittance from near to far), you still need to include this $T$ in your loss function so gradients can back-propagate along this new path. If we try to use this $T$ to represent some sort of "depth", that means we also need a reference depth image as input.
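To make the formula above concrete, here is a minimal pure-Python sketch of the discretized volume-rendering quadrature (function and variable names are my own, not the tutorial's): each segment contributes its emitted color weighted by the transmittance accumulated so far, and the $T$ left over after the loop is the transmittance from the ray origin to the far end.

```python
import math

def march_ray(sigmas, colors, dt):
    """Discretized quadrature L = sum_i L_{e,i} * T_i along one ray.

    sigmas: per-segment densities, colors: per-segment emitted radiance
    (one channel for brevity), dt: segment length.
    """
    T = 1.0  # transmittance from the ray origin up to the current segment
    L = 0.0  # accumulated radiance
    for sigma, c in zip(sigmas, colors):
        alpha = 1.0 - math.exp(-sigma * dt)  # opacity of this segment
        L += T * alpha * c                   # emission weighted by transmittance
        T *= 1.0 - alpha                     # equivalently T *= exp(-sigma * dt)
    return L, T  # T is now the transmittance from near to far

# Empty space: no emission, full transmittance
L, T = march_ray([0.0, 0.0], [1.0, 1.0], dt=0.1)
```

Note that there is one $T_i$ per segment, which is why "the" transmittance of a ray is ambiguous until you pick a definition such as the final `T`.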
Yeah, sorry, I was originally unclear: basically I want to return the transmittance from the NeRF tutorial, so that I can composite a random background behind the render for data augmentation.
For some context, I'm trying to build a text-to-3D model, i.e. given a text prompt, generate a 3D structure, somewhat similar in idea to https://arxiv.org/abs/2112.01455. This gives a paradigm where, instead of image annotations, I only need text.
That paper mentions the importance of sparsity. It computes the transmittance of a ray by integrating from the ray origin to infinity along its direction. I think that in the tutorial this corresponds to the last value of $\beta$ once we finish the loop.
> If we try to use this $T$ to represent some sort of "depth", that means we also need a reference depth image as input.
Not really, I would like to use it as a regularization mechanism.
If I were to do this, I would just return $\beta$ as another output of `integrator.sample()` and overload/modify `common.py` to use that information.
In particular, in `render_backward()` we need to set the gradients of the "throughput" using the derivative computed from the regularization term.
This should be feasible, but others may have more elegant solutions :)
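As a sanity check on why exposing the final transmittance makes a regularizer differentiable: for $T = \exp(-\sum_i \sigma_i\,\Delta t)$, the derivative $\partial T / \partial \sigma_j = -\Delta t\, T$ is the quantity a modified `render_backward()` would have to propagate. Below is a stdlib-only sketch (all names, and the `lam`-weighted loss, are illustrative, not Mitsuba API), verified against a finite difference:

```python
import math

def final_transmittance(sigmas, dt):
    # T(far) = exp(-sum_i sigma_i * dt): the beta left after the loop
    return math.exp(-sum(sigmas) * dt)

def d_transmittance_d_sigma(sigmas, dt):
    # Analytic derivative dT/dsigma_j = -dt * T (identical for every j)
    return -dt * final_transmittance(sigmas, dt)

# Sparsity-style regularizer pushing T(far) toward 1:
#   loss = lam * (1 - T)^2  =>  dloss/dsigma_j = -2*lam*(1-T) * dT/dsigma_j
sigmas, dt, lam = [0.5, 1.2, 0.3], 0.1, 0.01
T = final_transmittance(sigmas, dt)
grad = [-2.0 * lam * (1.0 - T) * d_transmittance_d_sigma(sigmas, dt)
        for _ in sigmas]

# Finite-difference check of dT/dsigma_0
eps = 1e-6
fd = (final_transmittance([sigmas[0] + eps] + sigmas[1:], dt)
      - final_transmittance(sigmas, dt)) / eps
```

In the real integrator the densities come from the grid lookup, so the chain rule continues through the interpolation, which the AD layer would handle once $\beta$ is part of the output.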
This discussion was converted from issue #310 on October 07, 2022 08:30.