Summary

I am trying to figure out how to sandwich the rendering process in a neural network, and have a question about how this works.

System configuration

gpu_autodiff_rgb

Description
I have started with some simple testing: a small fully connected network whose output drives the RGB reflectance value of one object's material. However, I cannot figure out from the documentation, other issues, or other code on GitHub how to do this. I would appreciate it if someone could give me some directions.
Anyway, great work on Mitsuba 2! I really appreciate it.
Here is what I have tried so far, which gives the following error:
File "cbox_torch.py", line 48, in<module>
opt = torch.optim.Adam(params_torch.values(), lr=.03)
File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/adam.py", line 74, in __init__
super(Adam, self).__init__(params, defaults)
File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/optimizer.py", line 54, in __init__
self.add_param_group(param_group)
File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/optimizer.py", line 258, in add_param_group
raise ValueError("can't optimize a non-leaf Tensor")
ValueError: can't optimize a non-leaf Tensor
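For reference, the net imported from pynet in the script below is a small fully connected 3 -> 3 -> 3 model, i.e. something along the lines of the following sketch (the exact layers and activations here are illustrative and not important to the question):

import torch

# Small fully connected network mapping a 3-vector to an RGB triple
# (sketch of what pynet.net is assumed to look like).
net = torch.nn.Sequential(
    torch.nn.Linear(3, 3),
    torch.nn.ReLU(),
    torch.nn.Linear(3, 3),
    torch.nn.Sigmoid(),   # keep the predicted reflectance in [0, 1]
)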
import enoki as ek
import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core import Thread, Vector3f
from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse
from mitsuba.python.autodiff import render_torch, write_bitmap
import torch
import time
import numpy as np
from scene import scene
from pynet import net

# ek.cuda_set_log_level(3)

diff_var = 'OBJMesh.bsdf.reflectance.value'

# Find differentiable scene parameters
params = traverse(scene)
print(params)

# Discard all parameters except for the one we want to differentiate
params.keep([diff_var])
params.update()

# Render a reference image (no derivatives used yet)
image_ref = render_torch(scene, spp=8)
crop_size = scene.sensors()[0].film().crop_size()
write_bitmap('out_ref.png', image_ref, crop_size)

# Which parameters should be exposed to the PyTorch optimizer?
tens = torch.Tensor([1, 1, 1]).cuda()
net = net.cuda()  # fully connected 3 -> 3 -> 3
params_torch = params.torch()
param_ref = params_torch[diff_var].clone()  # ground-truth value of the parameter, saved before it is overwritten below
params_torch[diff_var] = net(tens)
params_torch.update()
opt = torch.optim.Adam(params_torch.values(), lr=.03)
objective = torch.nn.MSELoss()

time_a = time.time()

iterations = 100
for it in range(iterations):
    # Zero out gradients before each iteration
    opt.zero_grad()

    # Perform a differentiable rendering of the scene
    image = render_torch(scene, params=params, unbiased=True,
                         spp=1, **params_torch)
    write_bitmap('out_%03i.png' % it, image, crop_size)

    # Objective: MSE between 'image' and 'image_ref'
    ob_val = objective(image, image_ref)

    # Back-propagate errors to input parameters
    ob_val.backward()

    # Optimizer: take a gradient step
    opt.step()

    # Compare iterate against ground-truth value
    err_ref = objective(params_torch[diff_var], param_ref)
    print('Iteration %03i: error=%g' % (it, err_ref), end='\r')
    break  # stop after the first iteration while debugging

time_b = time.time()

print()
print('%f ms per iteration' % (((time_b - time_a) * 1000) / iterations))
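From the traceback, my understanding so far is that torch.optim.Adam only accepts leaf tensors (tensors created directly, such as the ones returned by net.parameters()), whereas net(tens) is the output of a forward pass and therefore a non-leaf tensor. The following PyTorch-only snippet (no Mitsuba involved) reproduces the same error, which suggests the problem is how I hand the parameters to the optimizer rather than anything in render_torch:

import torch

net = torch.nn.Linear(3, 3)
x = torch.ones(3)

# Handing the *output* of the network to the optimizer fails,
# because the result of a forward pass is a non-leaf tensor.
try:
    torch.optim.Adam([net(x)], lr=0.03)
except ValueError as err:
    print(err)   # can't optimize a non-leaf Tensor

# Handing over the network's own parameters (leaf tensors) works.
opt = torch.optim.Adam(net.parameters(), lr=0.03)

So I suspect the optimizer should be given net.parameters() and the output of the network should be written into params_torch[diff_var] inside the loop before each call to render_torch, but I am not sure whether that is the intended way to combine a network with the differentiable renderer.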