RuntimeError: No grad accumulator for a saved leaf! #16

Open

diyiiyiii opened this issue Oct 21, 2022 · 2 comments

@diyiiyiii

Thanks for your excellent work! I'm running into an error in the backward pass. Does this project support backpropagation?

@aliutkus
Owner

Hi! Normally it should. Have you pinned the problem down? Is there an error message?

@pwangcs

pwangcs commented Oct 24, 2022

Hi! Normally it should. Have you pinned the problem down? Is there an error message?

Hi @aliutkus, I got the same error when using interp1d to train a network. The error message is as follows:

Traceback (most recent call last):
  File "/home/wangping/Codes/DeepOpticsSCI/E2E_train.py", line 296, in <module>
    main(E2Enet, optimizer, args)
  File "/home/wangping/Codes/DeepOpticsSCI/E2E_train.py", line 260, in main
    train(epoch, result_path, model, optimizer, logger, args)
  File "/home/wangping/Codes/DeepOpticsSCI/E2E_train.py", line 131, in train
    Loss_all.backward()
  File "/home/wangping/anaconda3/envs/sci_pytorch/lib/python3.9/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/wangping/anaconda3/envs/sci_pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
  File "/home/wangping/anaconda3/envs/sci_pytorch/lib/python3.9/site-packages/torch/autograd/function.py", line 89, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore
  File "/home/wangping/Codes/DeepOpticsSCI/utils.py", line 159, in backward
    inputs = ctx.saved_tensors[1:]
RuntimeError: No grad accumulator for a saved leaf!

In my utils.py, the code related to interp1d is:

import torch
# interp1d and parse_emor are defined elsewhere in utils.py
# (interp1d is the interpolation routine from this repo).

def crf_3d(x):  # x: a torch.Tensor with shape [batch, H, W]
    batch, H, W = x.size()
    x = x.view(batch * H, W)
    E = torch.linspace(0.0, 1.0, steps=1024, requires_grad=False).to(x.device)
    f0 = parse_emor()  # numpy.ndarray loaded from the local .txt file
    I = torch.from_numpy(f0).to(x.device)
    y = interp1d(E, I, x)
    y = y.view(batch, H, W)
    return y

My code runs under PyTorch 1.8.1 and Python 3.9.4. Could you let me know what's wrong? Thanks in advance.
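
A minimal, standalone sanity check (a sketch, not code from this project or thread) can help isolate whether gradients flow through interp1d at all, outside the full network. The import path below is an assumption; adjust it to wherever interp1d is actually defined (e.g. utils.py). The shapes mirror the call pattern in crf_3d above: 1-D sample positions and values, 2-D query points.

import torch
from utils import interp1d  # assumption: interp1d is defined/imported in utils.py

x = torch.linspace(0.0, 1.0, steps=1024)      # sample positions (no grad)
y = torch.rand(1024)                          # sample values (no grad)
xnew = torch.rand(8, 64, requires_grad=True)  # query points that need grad

out = interp1d(x, y, xnew)  # same call pattern as in crf_3d above
out.sum().backward()
print(xnew.grad.shape)      # expected: torch.Size([8, 64]) if backprop succeeds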
