Raise when in-place operations occur on leaves requiring grad #1458

Open · wants to merge 11 commits into base: main
19 changes: 7 additions & 12 deletions thunder/executors/torchex.py
@@ -1,32 +1,24 @@
from __future__ import annotations
import operator
import importlib
from dataclasses import replace
from contextlib import ContextDecorator
from functools import wraps, partial
from inspect import signature
from itertools import groupby
from functools import partial, wraps
from numbers import Number
from typing import TYPE_CHECKING
from collections.abc import Callable
from collections.abc import Hashable, Sequence
from collections.abc import Sequence
from types import ModuleType
from enum import Enum, auto

import torch
import math
from looseversion import LooseVersion

from thunder.core.compile_data import get_compile_data
import thunder.core.dtypes as dtypes
from thunder.core.dtypes import to_torch_dtype, to_dtype
import thunder.core.devices as devices
from thunder.core.devices import to_torch_device, to_device
import thunder.core.prims as prims
from thunder.core.trace import TraceCtx, set_tracectx, reset_tracectx, from_trace
from thunder.core.proxies import NumberProxy, TensorProxy, FutureTensorProxy, variableify, pytype
from thunder.core.pytree import tree_flatten, tree_unflatten
from thunder.core.symbol import Symbol, BoundSymbol
from thunder.core.proxies import NumberProxy, TensorProxy, FutureTensorProxy, pytype
from thunder.core.symbol import Symbol
from thunder.distributed.prims import DistributedReduceOps
import thunder.distributed.prims as dist_prims
import thunder.core.utils as utils
@@ -2190,6 +2182,9 @@ def is_float_type(self, input):


def _copy__impl(copy_from, copy_to):
    cd = get_compile_data()
    if cd is not None and cd.is_grad_enabled and copy_to.is_leaf and copy_to.requires_grad:
        raise RuntimeError("a leaf Variable that requires grad is being used in an in-place operation.")
Collaborator:
I am wondering if the Symbol copy_ in thunder/torch/__init__.py is a more appropriate location for the check.

@torchsymbol(torch.Tensor.copy_, is_method=True) # , tags=(prims.OpTags.IN_PLACE,))
def copy_(a, b, /):
    return prims.copy_(b, a)

Collaborator (author):

a and b are proxies, and it is not clear to me if a proxy knows that it is a leaf.

Collaborator:

They do not. It's only a PyTorch concept that's available at runtime inside _copy__impl.
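
For reference, a minimal plain-PyTorch illustration of the leaf concept, which is only observable on concrete tensors at runtime:

import torch

x = torch.randn(3, requires_grad=True)  # created directly by the user: a leaf
y = x * 2                                # produced by an operation: not a leaf
print(x.is_leaf, y.is_leaf)              # True False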

Collaborator @kshitij12345 (Nov 22, 2024):

Right, previously I missed that the fix was in _copy__impl. And since it is happening at runtime, I am wondering if compile_data is actually available.

A quick test (see below) shows that it wouldn't be. So we probably need a way to check whether this copy was called under no_grad in the user's code (PyTorch supports in-place updates of leaf tensors under no_grad, see comment).
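
For reference, the plain-PyTorch behavior being referred to:

import torch

x = torch.ones(3, requires_grad=True)  # leaf requiring grad
# x.add_(1)  # RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
with torch.no_grad():
    x.add_(1)  # allowed: in-place updates of a leaf are fine under no_grad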

Snippet to check if compile_data is available -

import torch
import thunder
from thunder.extend import OperatorExecutor
from thunder.core.compile_data import get_compile_data
from thunder.core.proxies import TensorProxy

ex = OperatorExecutor("ex")

def clone_impl(x):
    cd = get_compile_data()
    print(cd)  # None
    return x

clone = ex.register_operator("clone", meta=lambda x: TensorProxy(like=x), fn=clone_impl)

def fn(x):
    return clone(x)

x = torch.ones(3)

jfn = thunder.jit(fn)

jfn(x)
exec_trace = thunder.last_traces(jfn)[-1]
# print(exec_trace)

Collaborator (author):
Indeed, compile_data was not available, but now it should be with the added context manager in thunder/__init__.py.
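
A minimal sketch of that mechanism, assuming compile data is stored in a ContextVar; the names _cd_var and with_compile_data are illustrative placeholders, not Thunder's actual API:

from contextlib import contextmanager
from contextvars import ContextVar

_cd_var = ContextVar("compile_data", default=None)  # illustrative stand-in

@contextmanager
def with_compile_data(cd):
    # Make `cd` visible to executor impls (such as _copy__impl) while the
    # jitted function runs, and restore the previous value afterwards.
    token = _cd_var.set(cd)
    try:
        yield
    finally:
        _cd_var.reset(token)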

Collaborator:

I think this is still incorrect because, as discussed in #1486, the value of compile_data.is_grad_enabled here would be that of the last updated state, which can lead to incorrectness when used outside of the tracing context.

We can see the discrepancy here.

import torch
import thunder

x = torch.randn(3, 3, requires_grad=True)

@torch.no_grad
def fn(x):
  return x.add_(1)

fn(x)  # This works

thunder.jit(fn)(x)  # This raises error

So whether the copy is in a no_grad region needs to be captured at tracing time.

Collaborator (author):

Right, this is why I created the other issue. This PR fixes the leaf/grad issue when there is no annotation. When there is an annotation, another approach is required, and that approach may or may not involve using compile data in _copy__impl.

As far as I understand, compile data is the medium for passing around data such as whether grad is enabled. But as the other issue points out, compile data reflects the end state of a function call and not the "live" state, at least by the time it reaches _copy__impl. So I'm left with these questions:

- Are there other mechanisms for passing around whether grad is enabled?
- Where else in the execution is it simultaneously knowable that a (1) leaf tensor (2) requiring grad is being (3) copied while (4) grad is enabled?
- Is it feasible/desirable to make the compile data more dynamic?
- Is there a way to context-manage the tensors so that their requires_grad flags are set to False when the interpreter sees torch._C._set_grad_enabled(False), and later restored, thereby obviating the need for compile data in this check? (A rough sketch of this idea follows below.)

Do you have suggestions for a fix that addresses both issues? Or can we close out this issue and move the discussion to the more involved issue?
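
A rough illustration of that last idea; requires_grad_cleared is a hypothetical helper, not an existing Thunder mechanism, and it only applies to leaf inputs (PyTorch only allows flipping requires_grad on leaves):

import torch
from contextlib import contextmanager

@contextmanager
def requires_grad_cleared(tensors):
    # Hypothetical helper: temporarily clear requires_grad on leaf inputs
    # while a no_grad region is interpreted, then restore the original flags.
    saved = [t.requires_grad for t in tensors]
    for t in tensors:
        t.requires_grad_(False)
    try:
        yield
    finally:
        for t, flag in zip(tensors, saved):
            t.requires_grad_(flag)

x = torch.ones(3, requires_grad=True)
with requires_grad_cleared([x]):
    x.add_(1)  # no error: x temporarily does not require grad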

Collaborator:

So, to tackle the case of a leaf tensor requiring grad being copied into while grad is enabled, I think that, similar to a previous commit, we can update prims.copy_ to take an is_grad_enabled argument. With this, ltorch.copy_ will query cd.is_grad_enabled and pass that value along when calling prims.copy_.

@torchsymbol(torch.Tensor.copy_, is_method=True) # , tags=(prims.OpTags.IN_PLACE,))
def copy_(a, b, /):
    return prims.copy_(b, a)

With these changes, _copy__impl's signature will also change to accept is_grad_enabled, and at runtime it will be called with a concrete tensor that we can query to see whether it is a leaf, along with whether grad was enabled when that particular copy was called. Wdyt @beverlylytle?
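
A minimal sketch of that proposal, reusing the names from the snippets above; the is_grad_enabled parameter is the proposed addition, not the current signature:

# Sketch only: is_grad_enabled is the proposed new argument discussed above.
@torchsymbol(torch.Tensor.copy_, is_method=True)
def copy_(a, b, /):
    # Capture the grad mode at tracing time and bake it into the prim call.
    cd = get_compile_data()
    return prims.copy_(b, a, is_grad_enabled=cd.is_grad_enabled)


def _copy__impl(copy_from, copy_to, is_grad_enabled):
    # At runtime copy_to is a concrete torch.Tensor, so is_leaf is meaningful here.
    if is_grad_enabled and copy_to.is_leaf and copy_to.requires_grad:
        raise RuntimeError("a leaf Variable that requires grad is being used in an in-place operation.")
    copy_to.copy_(copy_from)
    return copy_to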

Though, I am curious if there is another approach to this - cc: @IvanYashchuk

Collaborator (author):
Let's see what the CI thinks.

    copy_to.copy_(copy_from)
    return copy_to

8 changes: 8 additions & 0 deletions thunder/tests/test_inplace_copy.py
@@ -178,3 +178,11 @@ def func(T0):
    assert_close(a_ref, a)
    for o, o_ref in zip(o_thunder, o_eager):
        assert_close(o, o_ref)


@instantiate(dtypes=datatypes.float_math_dtypes)
def test_inplace_copy_of_leaf_requiring_grad_fails(executor, device, dtype):
    tdtype = ttorch.to_torch_dtype(dtype)
    a = make_tensor((4, 4), device=device, dtype=tdtype, requires_grad=True)
    with pytest.raises(RuntimeError):
        a.copy_(a)
39 changes: 18 additions & 21 deletions thunder/tests/test_inplace_functionalization.py
@@ -476,31 +476,27 @@ def f(xs, ys, z):
dtypes=NOTHING,
)
def test_inplace_to_tensors_with_grad(executor, device, _):
@torch.no_grad
def add_y(x, y):
x.add_(y, alpha=0.1)
# inplace operations requiring grad on leafs are illegal, trick to make z a non-leaf
z = torch.abs(x) * torch.sgn(x)
z.add_(y, alpha=0.1)

@torch.no_grad
def add_grad(x, y):
x.add_(x.grad, alpha=0.1)
jitted_f = executor.make_callable(add_y)
x = make_tensor((2, 2), device=device, dtype=torch.float32, requires_grad=True)
x.grad = make_tensor((2, 2), device=device, dtype=torch.float32)
y = make_tensor((2, 2), device=device, dtype=torch.float32)

for f in (add_y, add_grad):
jitted_f = executor.make_callable(f)
x = make_tensor((2, 2), device=device, dtype=torch.float32, requires_grad=True)
x.grad = make_tensor((2, 2), device=device, dtype=torch.float32)
y = make_tensor((2, 2), device=device, dtype=torch.float32)
x_ref = x.clone().detach().requires_grad_(True)
x_ref.grad = x.grad.clone().detach()
y_ref = y.clone().detach()

x_ref = x.clone().detach().requires_grad_(True)
x_ref.grad = x.grad.clone().detach()
y_ref = y.clone().detach()
res = jitted_f(x, y)
res_ref = add_y(x_ref, y_ref)

res = jitted_f(x, y)
res_ref = f(x_ref, y_ref)

torch.testing.assert_close(x, x_ref)
torch.testing.assert_close(x.grad, x_ref.grad)
torch.testing.assert_close(y, y_ref)
torch.testing.assert_close(res, res_ref)
torch.testing.assert_close(x, x_ref)
torch.testing.assert_close(x.grad, x_ref.grad)
torch.testing.assert_close(y, y_ref)
torch.testing.assert_close(res, res_ref)


@instantiate(
@@ -549,7 +545,8 @@ def single_tensor_adam(
ref_state_steps = [torch.tensor(1, device=device) for _ in range(2)]
single_tensor_adam(*ref_tensors, state_steps=ref_state_steps)

jitted = executor.make_callable(single_tensor_adam)
# torch.compile does not support accessing the ContextVariable compile data used in _copy__impl_
jitted = executor.make_callable(single_tensor_adam, torch_compile_fullgraph=False)
Collaborator:
Interesting that torch.compile creates a graph break when calling .get() on a ContextVar.

import torch
from contextvars import ContextVar

_compile_data = ContextVar("compile_data", default=(None, None))

def fn(x):
    _compile_data.get()
    return x + 1

torch.compile(fn, fullgraph=False)(torch.randn(3, 3))  # Works with GraphBreak at _compile_data.get()
torch.compile(fn, fullgraph=True)(torch.randn(3, 3))  # Fails

Collaborator:

What does Thunder's Interpreter do? It probably fails

Collaborator @kshitij12345 (Nov 22, 2024):

Thunder just burns the value into the computation trace (if used) without a corresponding check in the prologue. (Will file an issue for this.)

E.g.:

import torch
import thunder
from contextvars import ContextVar

_compile_data = ContextVar("compile_data", default=1)

def fn(x):
    v = _compile_data.get()
    return x + v

jfn = thunder.jit(fn)
o = jfn(torch.ones(3,))
print(o)  # tensor([2., 2., 2.])

_compile_data.set((2,))
o = jfn(torch.ones(3,))
print(o)  # tensor([2., 2., 2.])

print(thunder.last_prologue_traces(jfn)[-1])
# @torch.no_grad()
# @no_autocast
# def prologue(*args, **kwargs):
#   # args: "Any"
#   check_len(args, 1)
#     # prims.check_len(args, 1)
#   # kwargs: "Any"
#   check_len(kwargs, 0)
#     # prims.check_len(kwargs, 0)
#   x: "cpu f32[3]" = args[0]
#   check_tensor_metadata(x, (3,), 'cpu', torch.float32, False)
#     # prims.check_tensor_shape_and_metadata(x, (3,), 'cpu', torch.float32, False)
#   cache_info: "Any" = thunder._get_cache_info()
#   cache_info_default_dtype: "<class 'torch.dtype'>" = cache_info['default_dtype']
#   check_literal_like(cache_info_default_dtype, torch.float32)
#     # prims.check_literal_like(cache_info_default_dtype, torch.float32)
#   cache_info_default_device: "<class 'torch.device'>" = cache_info['default_device']
#   check_literal_like(cache_info_default_device, torch.device("cpu"))
#     # prims.check_literal_like(cache_info_default_device, torch.device("cpu"))
#   cache_info_is_autocast_enabled: "bool False" = cache_info['is_autocast_enabled']
#   check_number_type_and_value(cache_info_is_autocast_enabled, False)
#     # prims.check_number_type_and_value(cache_info_is_autocast_enabled, False)
#   cache_info_no_grad_sync: "bool False" = cache_info['no_grad_sync']
#   check_number_type_and_value(cache_info_no_grad_sync, False)
#     # prims.check_number_type_and_value(cache_info_no_grad_sync, False)
#   cache_info_alias_tensor_indices: "str" = cache_info['alias_tensor_indices']
#   check_string_value(cache_info_alias_tensor_indices, '')
#     # prims.check_string_value(cache_info_alias_tensor_indices, '')
#   cache_info_is_grad_enabled: "bool True" = cache_info['is_grad_enabled']
#   check_number_type_and_value(cache_info_is_grad_enabled, True)
#     # prims.check_number_type_and_value(cache_info_is_grad_enabled, True)
#   return ((x,), ())

print(thunder.last_traces(jfn)[-1])
# @torch.no_grad()
# @no_autocast
# def computation(x):
#   # x: "cpu f32[3]"
#   t0 = torch.add(x, 1, alpha=1)  # t0: "cpu f32[3]"
#     # t0 = ltorch.add(x, 1, alpha=1)  # t0: "cpu f32[3]"
#       # _ = prims.convert_element_type(1, float)
#       # t0 = prims.add(x, 1.0)  # t0: "cpu f32[3]"
#   return t0

Collaborator:

Issue filed at #1464

params, grads, exp_avgs, exp_avg_sqs = tensors

jitted(params, grads, exp_avgs, exp_avg_sqs, state_steps)