Issue with tensor.numpy() for wrapped tensors #626

Open
vfdev-5 opened this issue Mar 28, 2022 · 0 comments · May be fixed by #627
vfdev-5 commented Mar 28, 2022

Calling .numpy() on wrapped tensors (e.g. GradTrackingTensor, BatchedTensor) raises:

RuntimeError: Cannot access data pointer of Tensor that doesn't have storage

How to reproduce

import torch
import functorch as ft

def foo(t):
    tt = t.detach()
    n = tt.numpy()  # raises RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
    return t

x = torch.rand(4, 3)
out = ft.grad(foo)(x)
# or
# out = ft.vmap(foo)(x)

Context: this was discovered while benchmarking functorch transforms on DETR: https://github.com/pytorch/pytorch/blob/58f78ff4e08a6d6a1fc0844dd19bb92fb139bbac/benchmarks/functional_autograd_benchmark/torchvision_models.py#L802-L803

EDIT:

Monkey-patching .numpy() as below could fix the problem, similarly to how repr is already handled for wrapped tensors:

import functools

import torch
import functorch._C as _C  # functorch's compiled extension (assumed import path)

# Monkeypatch .numpy() to fetch the underlying tensor and call .numpy() on it
_old_numpy = torch.Tensor.numpy


@functools.wraps(_old_numpy)
def _numpy(tensor):
    level = _C.maybe_get_level(tensor)
    if level == -1:
        # Not a functorch-wrapped tensor: fall back to the original implementation
        return _old_numpy(tensor)

    if _C.is_functionaltensor(tensor):
        # Since we're unwrapping the FunctionalTensorWrapper, we need to make sure
        # that it's up to date first
        torch._sync(tensor)

    value = _C.get_unwrapped(tensor)
    dl_enabled = _C.tls_set_is_included()
    try:
        # Temporarily disable kDynamicLayerFrontModeKey/kDynamicLayerBackModeKey as included dispatch keys
        if dl_enabled:
            _C._set_dynamic_layer_keys_included(False)
        return value.numpy()
    finally:
        # Re-enable kDynamicLayerFrontModeKey/kDynamicLayerBackModeKey as included dispatch keys
        if dl_enabled:
            _C._set_dynamic_layer_keys_included(True)


setattr(torch.Tensor, 'numpy', _numpy)
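
For reference, a minimal usage sketch assuming the patch above has been applied (foo is adapted here to return a scalar, since ft.grad expects a scalar output):

import torch
import functorch as ft

def foo(t):
    tt = t.detach()
    n = tt.numpy()        # with the patch, this unwraps the GradTrackingTensor and succeeds
    return (t * t).sum()

x = torch.rand(4, 3)
out = ft.grad(foo)(x)     # no RuntimeError; out equals 2 * x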

In the case of vmap, the obtained ndarray is the full batched array, not a per-sample slice without the batch dimension:

import torch
import functorch as ft

def foo(t):
    n = t.numpy()
    assert n.shape == (4, 3)   # full batched shape, including the batch dimension
    assert n.shape != (3,)     # not the per-sample slice
    return t

x = torch.rand(4, 3)
out = ft.vmap(foo)(x)
vfdev-5 added the bug (Something isn't working) and actionable (It is clear what should be done for this issue) labels on Mar 28, 2022
vfdev-5 self-assigned this on Mar 29, 2022
vfdev-5 added a commit to vfdev-5/functorch that referenced this issue on Mar 29, 2022:
Fixes pytorch#626

Description:
- Fixing tensor.numpy on wrapped tensors
vfdev-5 linked a pull request (#627) on Mar 29, 2022 that will close this issue