[❔ other question] Fully-connected NN giving RGB value as input to diff. rendering #563

Open
olaals opened this issue Jan 21, 2022 · 0 comments

olaals commented Jan 21, 2022

Summary

I am trying to figure out how to sandwich the rendering process inside a neural network, and I have a question about how this works.

System configuration

  • Platform: Ubuntu 20.04, RTX3090
  • Compiler: clang
  • Python version: 3.8.8
  • Mitsuba 2 version: commit 858509e, Date: Wed Sep 8 09:37:16 2021 +0200
  • Compiled variants:
    • gpu_autodiff_rgb

Description

I have started with some simple testing: a small fully-connected network whose output drives the RGB reflectance value of one of the objects' materials. However, I cannot figure out from the documentation, other issues, or other code on GitHub how to do this. I would appreciate it if someone could give me some directions.

Anyway, great work with Mitsuba 2! I really appreciate it.

Here is what I have tried so far. Running cbox_torch.py gives the following error:

  File "cbox_torch.py", line 48, in <module>
    opt = torch.optim.Adam(params_torch.values(), lr=.03)
  File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/adam.py", line 74, in __init__
    super(Adam, self).__init__(params, defaults)
  File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/optimizer.py", line 54, in __init__
    self.add_param_group(param_group)
  File "/home/ola/library/anaconda3/envs/ai/lib/python3.8/site-packages/torch/optim/optimizer.py", line 258, in add_param_group
    raise ValueError("can't optimize a non-leaf Tensor")
ValueError: can't optimize a non-leaf Tensor
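As far as I understand, Adam only accepts leaf tensors, and the output of net(tens) is a non-leaf tensor, so handing params_torch.values() to the optimizer fails. A minimal example (independent of Mitsuba, with made-up tensor names) that reproduces the same ValueError:

import torch

net = torch.nn.Linear(3, 3)
out = net(torch.ones(3))  # module output -> non-leaf tensor (out.is_leaf == False)

# opt = torch.optim.Adam([out], lr=0.03)           # raises: can't optimize a non-leaf Tensor
opt = torch.optim.Adam(net.parameters(), lr=0.03)   # the network's own parameters are leaves, so this is fine

cbox_torch.py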
import enoki as ek
import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core import Thread, Vector3f
from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse
from mitsuba.python.autodiff import render_torch, write_bitmap
import torch
import time

import numpy as np

from scene import scene
from pynet import net 

#ek.cuda_set_log_level(3)

diff_var = 'OBJMesh.bsdf.reflectance.value'


# Find differentiable scene parameters
params = traverse(scene)
print(params)

# Discard all parameters except for one we want to differentiate
params.keep([diff_var])
params.update()


# Render a reference image (no derivatives used yet)
image_ref = render_torch(scene, spp=8)
crop_size = scene.sensors()[0].film().crop_size()
write_bitmap('out_ref.png', image_ref, crop_size)


# Which parameters should be exposed to the PyTorch optimizer?
tens = torch.Tensor([1,1,1]).cuda()
net = net.cuda() # fully connected 3->3->3
params_torch = params.torch()
params_torch[diff_var] = net(tens)
params_torch.update()

opt = torch.optim.Adam(params_torch.values(), lr=.03)

objective = torch.nn.MSELoss()

time_a = time.time()

iterations = 100
for it in range(iterations):
    # Zero out gradients before each iteration
    opt.zero_grad()

    # Perform a differentiable rendering of the scene
    image = render_torch(scene, params=params, unbiased=True,
                         spp=1, **params_torch)

    write_bitmap('out_%03i.png' % it, image, crop_size)

    # Objective: MSE between 'image' and 'image_ref'
    ob_val = objective(image, image_ref)

    # Back-propagate errors to input parameters
    ob_val.backward()

    # Optimizer: take a gradient step
    opt.step()

    # Compare iterate against ground-truth value
    # (param_ref, the reference reflectance value, is not defined in this snippet)
    err_ref = objective(params_torch[diff_var], param_ref)
    print('Iteration %03i: error=%g' % (it, err_ref), end='\r')
    break

time_b = time.time()

print()
print('%f ms per iteration' % (((time_b - time_a) * 1000) / iterations))
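
What I think I need is something like the sketch below (untested, reusing the variables from the script above): the optimizer holds the network's leaf parameters, and the network output is written into params_torch on every iteration before rendering. I am not sure whether render_torch will propagate gradients back through a value produced this way, so any pointers are welcome.

# Untested sketch: optimize the network's leaf parameters instead of the
# non-leaf tensor produced by net(tens).
opt = torch.optim.Adam(net.parameters(), lr=.03)
objective = torch.nn.MSELoss()

for it in range(iterations):
    opt.zero_grad()

    # Recompute the reflectance from the network each iteration so that the
    # rendered value stays connected to net's parameters in the autograd graph
    params_torch[diff_var] = net(tens)

    image = render_torch(scene, params=params, unbiased=True,
                         spp=1, **params_torch)

    ob_val = objective(image, image_ref)
    ob_val.backward()  # intended gradient flow: image -> net(tens) -> net.parameters()
    opt.step()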
pynet.py
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleNet(nn.Module):

    def __init__(self):
        super(SimpleNet, self).__init__()

        self.fc1 = nn.Linear(3, 3) 
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 3)
        # sigmoid keeps the output in (0, 1), matching an RGB reflectance value
        self.sigmoid = nn.Sigmoid()


    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.sigmoid(x)
        return x


net = SimpleNet()
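
One thing I noticed while writing this up: without activations between them, fc1, fc2 and fc3 collapse into a single affine map. If nonlinearity turns out to matter, a variant with ReLU between the layers could look like this (just a sketch, not part of the question):

# Variant with ReLU activations between the layers; the final sigmoid still
# keeps the RGB output in (0, 1).
class SimpleNetReLU(nn.Module):

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 3)
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return torch.sigmoid(self.fc3(x))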