I am trying to get started with implementing the rendering of spectral line emission, as well as its measurement with a sensor that has a bandpass filter attached. An example would be measuring the spectral emission of hydrogen due to excitation/recombination processes, where the emission is a single spectral line (or a narrow hat, if that fits better into the Mitsuba paradigm), and a camera with a D-alpha line filter. I may also need the line emission to be measured as RGB values to simulate image formation when there is no filter on the camera.
To get started, I looked into the irregular spectrum, and from what I can tell it should be fine for the spectral response of the camera, but it is inadequate for the emission I'm trying to model, since that is spatially varying (volumetric and highly localised in my case). I can easily implement a discrete spectrum plugin, but that does not give me spatial variation.
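For concreteness, the "narrow hat" idea can be written down as a tiny piecewise-linear SPD on an irregular wavelength grid, which is the kind of data an irregularly sampled spectrum expects. This is a plain NumPy sketch of that shape only, not Mitsuba API; the line center and the 1 nm half-width are illustrative assumptions:

```python
import numpy as np

def hat_spd(center_nm, half_width_nm, peak=1.0):
    """Three-point triangular ('hat') approximation of a spectral line.

    Returns (wavelengths, values): zero at center +/- half_width,
    peak at the line center -- an irregularly sampled spectrum.
    """
    wavelengths = np.array([center_nm - half_width_nm,
                            center_nm,
                            center_nm + half_width_nm])
    values = np.array([0.0, peak, 0.0])
    return wavelengths, values

def eval_spd(wavelengths, values, lam):
    """Piecewise-linear evaluation, zero outside the support."""
    return np.interp(lam, wavelengths, values, left=0.0, right=0.0)

# H-alpha-like line at 656.28 nm with a hypothetical 1 nm half-width
wl, vals = hat_spd(656.28, 1.0)
```

As the half-width shrinks, this degenerates toward a delta line, which is where the sampling questions below come from.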
To that end, here is what I'd like to ask (based on your experience with Mitsuba 3's internals):

1. What is the idiomatic way to implement a spectral texture that stores a number of discrete spectra at spatially varying points? (Potentially 3D, but if I can get 2D working, then 3D should simply be an extension.)
2. How best to sample this spectral information? Is it possible to load an arbitrary-length array (a multichannel texture) at runtime and then sample from it? If so, which datatypes are best suited for this purpose, since most of them appear to be specialised to fixed-width arrays?
3. How should I read/write discrete spectral emission values? From Python this is straightforward, since the data can go through the tensor pipeline and be loaded directly into memory, but OpenEXR as a standard supports storage of spectral values; does the current implementation expose these spectral values?
4. How should I handle conversion to sRGB, given that the current sRGB conversion gives negative values for sharp spectra? (This could perhaps be done outside Mitsuba, so I'm not too concerned about it.)
5. What is a good way to use the scene's emission spectra as part of the wavelength sampling process during rendering? I'd like to avoid being entirely at the mercy of NEE when an RGB sensor samples wavelengths that are almost guaranteed not to overlap with the emission spectra of a discrete spectral emitter.
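To make the spatial-lookup and wavelength-sampling questions concrete, here is a self-contained plain-NumPy sketch (all names and the data layout are hypothetical, not Mitsuba or Dr.Jit API): a per-texel array of line strengths over a shared set of discrete line wavelengths, a nearest-neighbour spatial lookup, and inverse-CDF sampling of a line index proportional to the local strengths, so wavelength samples land on the emission lines instead of relying on the sensor's importance sampling:

```python
import numpy as np

# Hypothetical layout: an (H, W, L) array of line strengths over a shared
# set of L discrete line wavelengths -- one discrete spectrum per texel.
LINE_WAVELENGTHS = np.array([486.13, 656.28])   # nm; e.g. H-beta, H-alpha
H, W = 4, 4
rng = np.random.default_rng(0)
line_strengths = rng.random((H, W, len(LINE_WAVELENGTHS)))

def lookup_spectrum(uv):
    """Nearest-neighbour spatial lookup of the per-texel discrete spectrum."""
    x = min(int(uv[0] * W), W - 1)
    y = min(int(uv[1] * H), H - 1)
    return line_strengths[y, x]

def sample_wavelength(strengths, u):
    """Inverse-CDF sample of one line index, proportional to its strength.

    Returns (wavelength, pmf) so an estimator can divide by the discrete
    sampling probability.
    """
    pmf = strengths / strengths.sum()
    cdf = np.cumsum(pmf)
    i = int(np.searchsorted(cdf, u))
    return LINE_WAVELENGTHS[i], pmf[i]
```

Usage would look like `spectrum = lookup_spectrum((0.3, 0.7))` followed by `wl, p = sample_wavelength(spectrum, sampler_value)`. The flat `(H, W, L)` layout is chosen deliberately: it generalises to runtime-sized `L` via a 1D gather with computed offsets, which seems closer to what dynamically sized array types support than fixed-width spectral types.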
I know this is a lot of questions, and I'm absolutely not expecting them all to be answered, but I am a bit stuck on the implementation, especially since I'm not as familiar with the best practices regarding some of the datatypes and what they can and cannot be used for. I'm hoping that with some ideas and input I can get this off the ground and implemented.