This repository has been archived by the owner on Nov 7, 2024. It is now read-only.

Added power function to backend to new backends #889

Open · wants to merge 16 commits into base: master
Changes from 15 commits
14 changes: 14 additions & 0 deletions tensornetwork/backends/pytorch/pytorch_backend.py
@@ -475,3 +475,17 @@ def sign(self, tensor: Tensor) -> Tensor:

  def item(self, tensor):
    return tensor.item()

  def power(self, a: Tensor, b: Tensor) -> Tensor:
    """
    Returns the power of tensor a to the value of b.
Contributor:
The wording of this docstring seems somewhat imprecise. Can you fix this?

Contributor Author:
"""
Returns the exponentiation of tensor a raised to b.
In the case b is a tensor, then the power is by element
with a as the base and b as the exponent.

Args:
  a: The tensor that contains the base.
  b: The tensor that contains a single scalar.
"""

This is the change I made to it; once we resolve the other request, I will push the changes.

    In the case b is a tensor, then the power is by element
    with a as the base and b as the exponent.
    In the case b is a scalar, then the power of each value in a
    is raised to the exponent of b.

    Args:
      a: The tensor that contains the base.
      b: The tensor that contains the exponent or a single scalar.
    """
    return a ** b
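
For context, a minimal usage sketch of the new method (not part of the diff); it assumes only the standard TensorNetwork import path and the public backend API:

import numpy as np
from tensornetwork.backends.pytorch import pytorch_backend

backend = pytorch_backend.PyTorchBackend()
base = backend.convert_to_tensor(np.array([[1.0, 2.0], [3.0, 4.0]]))
exponent = backend.convert_to_tensor(np.array([[2.0, 0.5], [1.0, 3.0]]))

elementwise = backend.power(base, exponent)  # result[i, j] = base[i, j] ** exponent[i, j]
squared = backend.power(base, 2.0)           # scalar exponent broadcasts to every entry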
12 changes: 12 additions & 0 deletions tensornetwork/backends/pytorch/pytorch_backend_test.py
@@ -566,6 +566,18 @@ def test_matmul():
  np.testing.assert_allclose(expected, actual)


def test_power():
  np.random.seed(10)
  backend = pytorch_backend.PyTorchBackend()
  t1 = np.random.rand(10, 2, 3)
  t2 = np.random.rand(10, 2, 3)
  a = backend.convert_to_tensor(t1)
  b = backend.convert_to_tensor(t2)
  actual = backend.power(a, b)
  expected = np.power(t1, t2)
  np.testing.assert_allclose(expected, actual)
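
A hypothetical companion test (not in this diff) could cover the scalar-exponent case the docstring promises; torch broadcasts the scalar automatically:

def test_power_scalar():
  np.random.seed(10)
  backend = pytorch_backend.PyTorchBackend()
  t1 = np.random.rand(10, 2, 3)
  a = backend.convert_to_tensor(t1)
  actual = backend.power(a, 2.0)
  expected = np.power(t1, 2.0)
  np.testing.assert_allclose(expected, actual)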


@pytest.mark.parametrize("dtype", torch_randn_dtypes)
@pytest.mark.parametrize("offset", range(-2, 2))
@pytest.mark.parametrize("axis1", [-2, 0])
14 changes: 14 additions & 0 deletions tensornetwork/backends/symmetric/symmetric_backend.py
@@ -689,3 +689,17 @@ def matmul(self, tensor1: Tensor, tensor2: Tensor):
    if (tensor1.ndim != 2) or (tensor2.ndim != 2):
      raise ValueError("inputs to `matmul` have to be matrices")
    return tensor1 @ tensor2

  def power(self, a: Tensor, b: Tensor) -> Tensor:
    """
    Returns the power of tensor a to the value of b.
Contributor:
The __pow__ method of BlockSparseTensor currently does not support broadcasting of any type, so b has to be a scalar. Please modify the docstring accordingly.

Contributor Author (@LuiGiovanni, Dec 29, 2020):
I'm a bit confused by this request, since the test works fine for me with no failures. Could you elaborate on what the problem is, please?

Contributor:
The reason the test passes is due to a bug in the test (I had actually missed that in the earlier review, see below).
In numpy, a**b below does different things depending on what a and b look like. Generally, numpy applies broadcasting rules (https://numpy.org/doc/stable/user/basics.broadcasting.html) when computing this expression. This approach is widely used by other libraries as well (e.g. tensorflow or pytorch). Supporting broadcasting for BlockSparseTensor is, however, more complicated, because this class has some non-trivial internal structure that needs to be respected by such operations. Currently we just don't support it. The only case supported by BlockSparseTensor is when b is a scalar-type object.
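
To illustrate the broadcasting behavior the comment refers to, a small numpy sketch (illustrative only, not code from this PR):

import numpy as np

a = np.arange(6.0).reshape(2, 3)  # shape (2, 3)
b = np.array([2.0, 3.0, 0.5])     # shape (3,)

# numpy stretches b across the rows of a: result[i, j] = a[i, j] ** b[j].
broadcast_power = a ** b

# A scalar exponent is the one case BlockSparseTensor's __pow__ supports:
# every entry is raised to the same power.
scalar_power = a ** 2.0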

Contributor:
Please add some test ensuring that a and b are BlockSparseTensors.
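
A hedged sketch of such a test (hypothetical: it assumes a type guard raising TypeError is added to backend.power, which this PR does not yet contain):

def test_power_raises_on_non_blocksparse():
  np.random.seed(10)
  backend = symmetric_backend.SymmetricBackend()
  a = get_tensor(4, 1, np.float64)
  with pytest.raises(TypeError):
    backend.power(a.data, a.data)  # plain numpy arrays should be rejected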

    In the case b is a tensor, then the power is by element
    with a as the base and b as the exponent.
    In the case b is a scalar, then the power of each value in a
    is raised to the exponent of b.

    Args:
      a: The tensor that contains the base.
      b: The tensor that contains the exponent or a single scalar.
    """
    return a ** b
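
Given the two review comments, a minimal scalar-only sketch, written here as a free function; the guard, error type, and message are assumptions rather than code from this PR:

import numbers

def power(a, b):
  # BlockSparseTensor.__pow__ supports only scalar exponents, so reject
  # anything else up front.
  if not isinstance(b, numbers.Number):
    raise TypeError("the symmetric backend only supports scalar exponents")
  return a ** b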
13 changes: 13 additions & 0 deletions tensornetwork/backends/symmetric/symmetric_backend_test.py
@@ -609,6 +609,19 @@ def test_addition_raises(R, dtype, num_charges):
  backend.addition(a, c)


@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_power(dtype, num_charges):
  np.random.seed(10)
  R = 4
  backend = symmetric_backend.SymmetricBackend()
  a = get_tensor(R, num_charges, dtype)
  b = BlockSparseTensor.random(a.sparse_shape)
Contributor:
Please modify the test so that b is a numpy scalar.

Contributor:
backend.power should only take BlockSparseTensor objects, please fix.

  expected = np.power(a.data, b.data)
  actual = backend.power(a.data, b.data)
  np.testing.assert_allclose(expected, actual)
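
For reference, a hedged sketch of how the test might look once both comments above are addressed, passing the BlockSparseTensor itself plus a numpy scalar exponent (the final form merged into the repository may differ):

@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_power_scalar_exponent(dtype, num_charges):
  np.random.seed(10)
  backend = symmetric_backend.SymmetricBackend()
  a = get_tensor(4, num_charges, dtype)
  b = np.float64(2.0)  # numpy scalar, the only exponent type supported
  actual = backend.power(a, b)
  expected = np.power(a.data, b)
  np.testing.assert_allclose(expected, actual.data)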


@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])