Hi,
First of all, thanks for this package, which is very useful.
I have a use case in which I would like to optimize over positions on the output grid of `interp1d` (i.e. `xnew`).
Here is a short example to illustrate what I want to do:
```python
import torch
from torch.autograd import grad
from torchinterp1d import interp1d

torch.manual_seed(0)

n = 20
original_grid = torch.linspace(0, torch.pi, n)
x = torch.cos(original_grid)
indices = torch.arange(0, n, step=4)
inputs = x[indices]

grid_to_be_optimized = torch.Tensor([0.1, 0.5, 0.7, 0.9, 1.1])
grid_to_be_optimized.requires_grad_()

# At convergence, we hope to get
# grid_to_be_optimized ~= original_grid[indices]
loss_fn = torch.nn.MSELoss()

n_steps = 100
for _ in range(n_steps):
    outputs = interp1d(original_grid, x, grid_to_be_optimized)
    loss = loss_fn(inputs, outputs)
    grad(loss, [grid_to_be_optimized], allow_unused=True)
```
My goal would be to optimize the positions in `grid_to_be_optimized` via gradient descent, but the computation of the gradient fails with:
```
/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/nn/modules/loss.py:535: UserWarning: Using a target size (torch.Size([1, 5])) that is different to the input size (torch.Size([5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
Traceback (most recent call last):
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/tests.py", line 24, in <module>
    grad(loss, [grid_to_be_optimized], allow_unused=True)
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 412, in grad
    result = _engine_run_backward(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 301, in apply
    return user_fn(self, *args)
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torchinterp1d/interp1d.py", line 155, in backward
    gradients = torch.autograd.grad(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 412, in grad
    result = _engine_run_backward(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
My package versions are:
```
% pip show torch
Name: torch
Version: 2.3.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages
Requires: filelock, fsspec, jinja2, networkx, sympy, typing-extensions
Required-by: torchinterp1d

% pip show torchinterp1d
Name: torchinterp1d
Version: 1.1
Summary: An interp1d implementation for pytorch
Home-page: https://github.com/aliutkus/torchinterp1d
Author: Antoine Liutkus
Author-email: [email protected]
License:
Location: /Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages
Requires: torch
Required-by:
```
I'd be happy to give a hand, but I have no idea where to start, to be honest...
Not sure if it helps, but if I turn the `torch.autograd.Function` into a standard function (and hence remove the code for the backward pass), everything works smoothly.
@rtavenar: I have a similar problem. Can you explain what you actually mean by turning `torch.autograd.Function` into a standard function? What exactly did you change, and in which code?
@jduerholt You can create a function called `interp1d` and copy the contents of the forward pass into it. PyTorch autograd then handles the backward pass automatically. This fixes the error.
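For anyone who wants a concrete starting point, here is a minimal sketch of what such a plain-autograd interpolation function could look like. It is not the actual torchinterp1d forward code: the function name `interp1d_autograd` and its assumptions (a sorted 1-D grid `x`, values `y` on that grid, 1-D query points `xnew`) are hypothetical.

```python
import torch

def interp1d_autograd(x, y, xnew):
    # For each query point, find the index of its right neighbour in the
    # sorted grid x (index lookup is not differentiated, hence the detach()).
    idx = torch.searchsorted(x.detach(), xnew.detach().contiguous())
    idx = idx.clamp(1, x.numel() - 1)
    x_lo, x_hi = x[idx - 1], x[idx]
    y_lo, y_hi = y[idx - 1], y[idx]
    # Plain differentiable ops: autograd builds the backward pass itself,
    # so gradients w.r.t. xnew (and y) come for free.
    slope = (y_hi - y_lo) / (x_hi - x_lo)
    return y_lo + slope * (xnew - x_lo)
```

With a function like this, `grad(loss, [grid_to_be_optimized])` should no longer raise the "appears to not have been used in the graph" error, since `xnew` enters the graph through ordinary differentiable operations; in the example above it would be called as `interp1d_autograd(original_grid, x, grid_to_be_optimized)`.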