floats with integer tensors not working with __r<op>__s #154

Closed
Mgluhovskoi opened this issue Aug 28, 2024 · 1 comment
Labels
tripy Pull request for the tripy project

Comments

@Mgluhovskoi
Collaborator

I came across this bug after rebasing recently and running the test cases from my PR #25. It prevents allclose from accepting any integer tensors: performing any __r<op>__ between a float scalar and an integer tensor produces the following error:

>>> b = tp.Tensor(1,tp.int32)
>>> b+1.1
==== Trace IR ====
t75 = storage(data=1, shape=(), dtype=int32, device=gpu:0)
t76 = storage(data=1.1, shape=(), dtype=int32, device=gpu:0)
t77 = t75 + t76
outputs:
    t77: [rank=(0), dtype=(int32), loc=(gpu:0)]

==== Flat IR ====
t75: [rank=(0), shape=(()), dtype=(int32), loc=(gpu:0)] = ConstantOp(data=1)
t76: [rank=(0), shape=(()), dtype=(int32), loc=(gpu:0)] = ConstantOp(data=1.1)
t_inter6: [rank=(1), shape=(()), dtype=(int32), loc=(gpu:0)] = ConstantOp(data=<mlir_tensorrt.runtime._mlir_libs._api.MemRefValue object at 0x74a500161c30>)
t_inter7: [rank=(1), shape=([0]), dtype=(int32), loc=(<class 'tripy.common.device.device'>)] = ConstantOp(data=<mlir_tensorrt.runtime._mlir_libs._api.MemRefValue object at 0x74a500160530>)
t_inter5: [rank=(1), shape=([0]), dtype=(bool), loc=(gpu:0)] = CompareOp.EQ(t_inter6, t_inter7, compare_direction='EQ')
t_inter8: [rank=(1), shape=(()), dtype=(int32), loc=(gpu:0)] = ConstantOp(data=<mlir_tensorrt.runtime._mlir_libs._api.MemRefValue object at 0x74a5001613b0>)
t_inter4: [rank=(1), shape=([0]), dtype=(int32), loc=(gpu:0)] = SelectOp(t_inter5, t_inter8, t_inter6)
t_inter3: [rank=(0), dtype=(int32), loc=(gpu:0)] = DynamicBroadcastOp(t75, t_inter4, broadcast_dim=[])
t_inter9: [rank=(0), dtype=(int32), loc=(gpu:0)] = DynamicBroadcastOp(t76, t_inter4, broadcast_dim=[])
t77: [rank=(0), dtype=(int32), loc=(gpu:0)] = AddOp(t_inter3, t_inter9)
outputs:
    t77: [rank=(0), dtype=(int32), loc=(gpu:0)]

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/tripy/tripy/frontend/tensor.py", line 219, in __repr__
    arr = self.eval()
  File "/tripy/tripy/frontend/tensor.py", line 183, in eval
    mlir = flat_ir.to_mlir()
  File "/tripy/tripy/flat_ir/flat_ir.py", line 153, in to_mlir
    map_error_to_user_code_and_raise(self, exc, stderr.decode())
  File "/tripy/tripy/backend/mlir/utils.py", line 458, in map_error_to_user_code_and_raise
    raise_error(
  File "/tripy/tripy/common/exception.py", line 195, in raise_error
    raise TripyException(msg) from None
tripy.common.exception.TripyException: 

--> <stdin>:1 in <module>()

TypeError: get(): incompatible function arguments. The following argument types are supported:
    1. (type: mlir_tensorrt.compiler._mlir_libs._mlir.ir.Type, value: int) -> mlir_tensorrt.compiler._mlir_libs._mlir.ir.IntegerAttr

Invoked with: IntegerType(i32), 1.1

Additional context:
Traceback (most recent call last):
  File "/tripy/tripy/flat_ir/flat_ir.py", line 145, in to_mlir
    mlir = to_mlir_impl()
  File "/tripy/tripy/flat_ir/flat_ir.py", line 84, in to_mlir_impl
    layer_outputs = op.to_mlir(layer_inputs)
  File "/tripy/tripy/flat_ir/ops/constant.py", line 89, in to_mlir
    attr = ir.DenseElementsAttr.get(attrs=to_attrs(self.data), type=self.outputs[0].to_mlir())
  File "/tripy/tripy/flat_ir/ops/constant.py", line 83, in to_attrs
    return [mlir_utils.get_mlir_scalar_attr(out_dtype, data)]
  File "/tripy/tripy/backend/mlir/utils.py", line 89, in get_mlir_scalar_attr
    return attr_func(get_mlir_dtype(dtype), value)
TypeError: get(): incompatible function arguments. The following argument types are supported:
    1. (type: mlir_tensorrt.compiler._mlir_libs._mlir.ir.Type, value: int) -> mlir_tensorrt.compiler._mlir_libs._mlir.ir.IntegerAttr

Invoked with: IntegerType(i32), 1.1

This occurs because of the following Trace IR: t76 = storage(data=1.1, shape=(), dtype=int32, device=gpu:0). That IR is generated because each __r<op>__ carries the decorator @frontend_utils.convert_inputs_to_tensors(sync_arg_types=[("self", "other")]), which automatically creates an int32 tensor from the float scalar, eventually leading to the error above.
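To illustrate the failure mode, here is a minimal pure-Python sketch (the function names are hypothetical stand-ins, not tripy's actual code): the dtype sync adopts the tensor's int32 dtype for the scalar operand but keeps the raw Python float as the constant data, so building an integer attribute from 1.1 later fails.

```python
def make_integer_attr(value):
    # Stand-in for mlir IntegerAttr.get(IntegerType(i32), value),
    # which only accepts a Python int.
    if not isinstance(value, int):
        raise TypeError(
            f"get(): incompatible function arguments. Invoked with: IntegerType(i32), {value!r}"
        )
    return ("i32", value)

def sync_to_tensor_dtype(scalar, tensor_dtype):
    # Naive sync: adopt the tensor's dtype without converting the data,
    # mirroring what convert_inputs_to_tensors effectively does here.
    return {"data": scalar, "dtype": tensor_dtype}

# Syncing the float scalar 1.1 to an int32 tensor's dtype...
operand = sync_to_tensor_dtype(1.1, "int32")
# ...then lowering it as an integer constant reproduces the TypeError.
try:
    make_integer_attr(operand["data"])
except TypeError as e:
    print("reproduces the failure:", e)
```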

Possible solutions:

  1. If this type of operation is desired, upcast within __r<op>__ to float when the scalar is a float and the tensor is an integer.
  2. If it is not desired, add checks that throw helpful exceptions from __r<op>__; allclose would then have to upcast internally, since without that behavior it cannot handle integer tensors at all.
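Solution 1 could be sketched as a small dtype-promotion rule applied before the operands are synced (a minimal sketch with illustrative dtype names, not tripy's actual promotion logic):

```python
INT_DTYPES = {"int8", "int32", "int64"}

def promote_dtype(tensor_dtype, scalar):
    """Pick the result dtype for tensor <op> scalar.

    If the scalar is a Python float and the tensor holds integers,
    upcast to float32 so the scalar's value is representable; otherwise
    keep the tensor's dtype.
    """
    if isinstance(scalar, float) and tensor_dtype in INT_DTYPES:
        return "float32"
    return tensor_dtype

# int32 tensor + float scalar -> promoted to float32
assert promote_dtype("int32", 1.1) == "float32"
# int32 tensor + int scalar -> stays int32
assert promote_dtype("int32", 2) == "int32"
```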
@parthchadha
Collaborator

The error is now self-explanatory:

Refusing to automatically cast 'float32' argument to 'int32'.
    Hint: You may want to manually cast other operands in this expression to compatible types.
    Note: argument was: 

    --> <stdin>:1 in <module>()
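This corresponds to solution 2 above: refuse the implicit cast and point the user at an explicit one. A minimal sketch of such a check (hypothetical names, not tripy's actual implementation):

```python
FLOAT_DTYPES = {"float16", "float32"}
INT_DTYPES = {"int8", "int32", "int64"}

def check_implicit_cast(from_dtype, to_dtype):
    # Refuse a lossy implicit float -> int cast; the user should cast
    # the other operands explicitly instead.
    if from_dtype in FLOAT_DTYPES and to_dtype in INT_DTYPES:
        raise TypeError(
            f"Refusing to automatically cast '{from_dtype}' argument to '{to_dtype}'. "
            "Hint: You may want to manually cast other operands in this "
            "expression to compatible types."
        )
```

With this check in place, `b + 1.1` on an int32 tensor fails early with an actionable message instead of crashing deep inside the MLIR lowering.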
