Added power function to new backends #889
base: master
Conversation
…key decomposition.
This reverts commit 81b4d8b.
Hello @alewis @mganahl! Running the tests through pylint seems to cause an issue which I cannot resolve: it states that Python does not exist in module tensorflow. I assume the issue is in `__init__.py`, though updating it with a small change such as adding a … did not resolve it. In my search I found that this is not an isolated issue; other people have raised it elsewhere as well. Here are some examples:
I will continue searching for a solution. My main problem is that we can't resolve the tensordot issue without resolving this first, so I would love some help or feedback. Thank you so much in advance for all the help and support you have given me!
```python
def power(self, a: Tensor, b: Tensor) -> Tensor:
  """
  Returns the power of tensor a to the value of b.
```
the wording of this docstring seems somewhat imprecise. Can you fix this?
"""
Returns the exponentiation of tensor a raised to b.
In the case b is a tensor, then the power is by element
with a as the base and b as the exponent.
Args: a: The tensor that contains the base. b: The tensor that contains a single scalar. """
This is the change I made to it. Once we resolve the other request, I will push the changes.
```python
def power(self, a: Tensor, b: Tensor) -> Tensor:
  """
  Returns the power of tensor a to the value of b.
```
The `__pow__` method of `BlockSparseTensor` currently does not support broadcasting of any type, hence `b` has to be a scalar. Pls modify the docstring accordingly.
I'm a bit confused by this request, since this test works fine for me with no failures. Could you elaborate on what the problem is, please?
The reason the test passes is due to a bug in the test (I had actually missed that in the earlier review, see below). In numpy, `a**b` below does different things depending on what `a` and `b` look like. Generally, numpy applies broadcasting rules (https://numpy.org/doc/stable/user/basics.broadcasting.html) when computing this expression. This approach is widely used by other libraries as well (e.g. tensorflow or pytorch). Supporting broadcasting for `BlockSparseTensor` is, however, more complicated, because this class has some non-trivial internal structure that needs to be respected by such operations. Currently we just don't support it. The only case `BlockSparseTensor` supports is when `b` is a scalar-type object.
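For illustration, the numpy `**` behavior described above can be sketched as follows (the arrays here are made up for the example):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# Scalar exponent: every element of a is raised to the same power.
print(a ** 2.0)                    # [[1, 4], [9, 16]]

# Same-shape exponent: elementwise a[i, j] ** b[i, j].
b = np.array([[1.0, 2.0], [3.0, 0.0]])
print(a ** b)                      # [[1, 4], [27, 1]]

# Broadcasting: a (2,)-shaped exponent is stretched across the rows of a.
print(a ** np.array([2.0, 1.0]))   # [[1, 2], [9, 4]]
```

`BlockSparseTensor`, by contrast, only supports the first (scalar-exponent) case.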
Pls add some test ensuring that `a` and `b` are `BlockSparseTensor`s.
```python
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = BlockSparseTensor.random(a.sparse_shape)
```
Pls modify the test so that `b` is a numpy scalar.
`backend.power` should only take `BlockSparseTensor` objects, pls fix.
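Taken together, these review requests amount to a check along the following lines (a plain-numpy sketch, not the actual backend code; `np.ndarray` stands in for `BlockSparseTensor`, and the function body and error messages are assumptions):

```python
import numpy as np

def power(a, b):
  # Hypothetical sketch: the base must be a tensor, and since broadcasting
  # is not supported, the exponent must be a scalar-type object.
  if not isinstance(a, np.ndarray):
    raise TypeError(f"expected a tensor base, got {type(a)}")
  if isinstance(b, np.ndarray) and b.ndim > 0:
    raise TypeError("broadcasting is not supported; b must be a scalar")
  return a ** b

print(power(np.array([2.0, 3.0]), np.float64(2.0)))  # [4. 9.]
```

Passing a non-scalar exponent would then raise a `TypeError` instead of silently relying on broadcasting.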
Thanks again for the PR! I left some comments to be fixed. Thanks!
This PR implements the power function for the symmetric and PyTorch backends, with their respective tests.
The changes were implemented in the files:
This is in regard to issue #840.