Implementation of "Large Steps in Inverse Rendering of Geometry" in Mitsuba3 #600
Replies: 6 comments 32 replies
-
Thanks for the tutorial! I find it works well and is very helpful. It seems that suzanne.ply was not uploaded, so I created a version using Blender and am attaching it here (suzanne.ply.zip) in case anyone is interested. With it I can run the code on my machine.
-
This is awesome, thank you! I was wondering what torch version you were using though, since mine crashes in the optimization loop when executing
-
Thanks for sharing your implementation! We are working on integrating largesteps within Mitsuba directly, in order to avoid the unnecessary dependency on PyTorch. A few notes:
-
Thank you all for the great sharing.
reparam.mp4
I think the problem is produced because invalid vertices appear while optimizing the geometry; maybe holes open up in the mesh? Could that be fixed by repeatedly remeshing and clamping the vertex positions?
-
Hey, I am very new to Mitsuba. I am playing around with this implementation in an attempt to create a 3D mesh of a human face from one or more 2D images. In doing so I have run into the following problems.

I wonder if it is possible to use several images to reconstruct the complete 3D mesh, and not just one side of it. From my understanding the reference is simply a .jpg image of suzanne that is created synthetically by rendering the scene. I therefore tried to exchange this reference with my own .jpg image of a person's face. When running the optimization, however, the background of the image becomes part of the 3D mesh. I then tried removing the background and converting the image to a .png; this seemingly helped, but upon closer inspection of the 3D mesh the optimization started creating a false background shadow.

I am therefore wondering what needs to be done to a .jpg image in order to get the same behaviour as in the suzanne example, where the background is distinct and unaffected by the optimization. Grateful for any replies that help directly or point me to other resources for creating a 3D mesh of a face based on images.
-
Thank you all for the amazing sharing. When running the optimization I get:
nanobind: leaked 4 instances!
reparam.mp4
There may be some memory leak, and the mesh also shows big holes. Have you ever encountered similar problems, or could you give me some tips to solve them?
-
Hi,
As I did not find any examples of how to optimize the geometry of a mesh, I integrated the reparametrization from the paper Large Steps in Inverse Rendering of Geometry in Mitsuba3, and I thought this might be interesting to some.
Setup
As in the tutorials we need to import mitsuba and drjit and set the variant.
In addition, we also need trimesh for creating an icosphere mesh and NumPy for converting between trimesh and Mitsuba.
The largesteps package is also a necessary requirement though we are going to
import it in the helper functions to avoid name conflicts.
Finally, PyTorch is necessary to perform the matrix multiplications in the largesteps package.
Helper Functions for Largesteps
To make the use of the largesteps package easier we can implement some helper functions.
First, the construction of the parameterization matrix (which is calculated
from the Laplacian matrix of the mesh) does not require gradients, and so we
can use a PyTorch tensor directly.
The .torch() method can be used to convert Mitsuba tensors to PyTorch tensors; however, the faces and positions first have to be converted to tensors.
Since the positions and faces are stored in a flat array we can construct the
Mitsuba tensors in the following way.
The helper function for computing the parameterization matrix then looks like
this.
The "from_differential" and "to_differential" functions on the other hand
require gradient backpropagation to be enabled, since the gradients are
backpropagated to the reparameterized coordinates.
Using the wrap_ad decorator we can integrate PyTorch operations into Dr.Jit. Since we want to use PyTorch tensors in the wrapped functions we use "torch" as the target argument and "drjit" as the source argument.
Rendering Reference Images
We can construct a scene from which we can render reference images.
In this case one reference image is sufficient to achieve reconstruction of the front side of the mesh.
I used a model of suzanne (the Blender monkey).
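A minimal scene dictionary might look as follows. Note that the camera placement, resolution, emitter, and integrator choice here are my assumptions, not the original setup; the reparameterized integrator is needed later for visibility-related gradients.

```python
# Sketch of a reference scene. Adjust the file name, camera and
# integrator to your setup; the dict key 'mesh' becomes the shape id,
# so its parameters later appear as 'mesh.vertex_positions' etc.
scene_dict = {
    'type': 'scene',
    'integrator': {'type': 'direct_reparam'},
    'sensor': {
        'type': 'perspective',
        'fov': 45,
        'film': {'type': 'hdrfilm', 'width': 256, 'height': 256},
    },
    'emitter': {'type': 'constant'},
    'mesh': {'type': 'ply', 'filename': 'suzanne.ply'},
}
```

The scene is then loaded with `mi.load_dict(scene_dict)` and the reference rendered with `mi.render(scene, spp=256)`.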
Initialization
In order to perform reconstruction, an initial mesh is required.
I decided to use an icosphere which has vertices distributed more uniformly than a UV sphere.
One limitation at the moment is that texture coordinates are stored at the vertices; therefore, loading an OBJ or PLY file with texture coordinates would result in multiple positions per vertex.
When optimizing, the faces can then disconnect at the vertices and result in non-manifold geometry.
To construct the sphere without texture coordinates I used the
trimesh package.
An alternative would be to use Multi View Geometry (MVG) to generate the initial geometry.
In order to construct a Mitsuba mesh another helper function can be used that
converts a trimesh mesh to a Mitsuba mesh.
The mesh construction is derived from the example in the Mesh I/O and
manipulation
Tutorial.
Now we can construct a scene for optimizing the mesh.
This can be done by setting the mesh to the initial mesh, in this case an
icosphere with 4 subdivisions.
Parameter Optimization
We can extract the parameters of the scene using mi.traverse and print them. Then, the parameterization matrix "M" can be constructed by utilizing the helper function defined above.
As these helper functions handle the conversion to tensors of shape [N, 3] we
can pass the flattened positions and faces directly to them.
The reparameterized coordinates "u" are then constructed using the to_differential function. Then we can construct the optimizer using the latent variable "u" for the reparameterized coordinates.
In this case we use the Adam optimizer with a learning rate of 0.01.
Finally, in the optimization loop we need to calculate the "vertex_positions"
from the reparameterized coordinates "u" before updating the parameters.
The rest of the optimization loop is similar to that of other optimization tasks.
We run the optimization for 200 iterations, which took about 2 minutes on an RTX 3070 laptop graphics card.
Visualizing the Results
Reference image:
Using
ffmpeg -i %d.jpg -vcodec libx264 -acodec mp2 -pix_fmt yuv420p reparam.mp4
in the out directory it is possible to generate a video from the image sequence.
reparam.mp4
With matplotlib it is also possible to plot the loss per iteration.
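For example (with dummy loss values standing in for the list collected during optimization):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this when working interactively
import matplotlib.pyplot as plt

# Stand-in for the per-iteration losses recorded in the optimization loop.
losses = [1.0 / (i + 1) for i in range(200)]

plt.figure()
plt.plot(losses)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Image loss per iteration')
plt.savefig('loss.png')
```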