lint, ci: pylint setup codecov
king-p3nguin committed Aug 23, 2024
1 parent b82c933 commit e1e1398
Showing 58 changed files with 518 additions and 599 deletions.
22 changes: 13 additions & 9 deletions .github/workflows/ci-build.yml
@@ -5,14 +5,16 @@ on:
branches: [ master ]
pull_request:
branches: [ master ]
schedule:
- cron: '0 0 * * 0'

jobs:
build:

runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.11']
python-version: ["3.9", "3.10", "3.11"]

steps:
- uses: actions/checkout@v4
@@ -25,13 +27,15 @@ jobs:
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install pylint pytest-cov codecov
# - name: Linting with pylint
# run: |
# pylint tensornetwork
- name: Linting with pylint
run: |
pylint tensornetwork
- name: Test with pytest
run: |
pytest --cov=./
# - name: Uploading coverage report
# run: |
# codecov
pytest --cov=./ --cov-report xml
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ./coverage.xml
fail_ci_if_error: true
22 changes: 15 additions & 7 deletions .pylintrc
@@ -84,15 +84,23 @@ disable=
wrong-import-order,
wrong-import-position,
consider-using-f-string,
consider-using-enumerate,
too-many-branches,
unnecessary-dunder-call,
no-value-for-parameter,
implicit-str-concat,
redefined-builtin,
implicit-str-concat,
abstract-method,
arguments-differ,
possibly-used-before-assignment,
unnecessary-lambda-assignment,
unnecessary-list-index-lookup,
consider-using-generator,
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html
output-format=text
# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no
# Tells whether to display a full report or only the messages
# CHANGED:
reports=no
@@ -142,7 +150,7 @@ max-module-lines=1000
indent-string=" "
[BASIC]
# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter,apply,input
; bad-functions=map,filter,apply,input
# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression which should only match correct module level names
@@ -180,7 +188,7 @@ max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branchs=12
max-branches=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
@@ -211,4 +219,4 @@ int-import-graph=
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception
overgeneral-exceptions=builtins.Exception
54 changes: 35 additions & 19 deletions README.md
@@ -1,15 +1,14 @@
<img src="https://user-images.githubusercontent.com/8702042/67589472-5a1d0e80-f70d-11e9-8812-64647814ae96.png" width="60%" height="60%">

[![Build Status](https://travis-ci.org/google/TensorNetwork.svg?branch=master)](https://travis-ci.org/google/TensorNetwork)

[![Continuous Integration](https://github.com/king-p3nguin/TensorNetwork2/actions/workflows/ci-build.yml/badge.svg?branch=master)](https://github.com/king-p3nguin/TensorNetwork2/actions/workflows/ci-build.yml)
[![codecov](https://codecov.io/gh/king-p3nguin/TensorNetwork2/graph/badge.svg?token=gayPQVD9Y0)](https://codecov.io/gh/king-p3nguin/TensorNetwork2)

A tensor network wrapper for TensorFlow, JAX, PyTorch, and Numpy.

For an overview of tensor networks please see the following:
For an overview of tensor networks please see the following:

- [Matrices as Tensor Network Diagrams](https://www.math3ma.com/blog/matrices-as-tensor-network-diagrams)


- [Crash Course in Tensor Networks (video)](https://www.youtube.com/watch?v=YN2YBB0viKo)

- [Hand-waving and interpretive dance: an introductory course on tensor networks](https://iopscience.iop.org/article/10.1088/1751-8121/aa6dc3)
@@ -28,8 +27,8 @@ More information can be found in our TensorNetwork papers:

- [TensorNetwork for Machine Learning](https://arxiv.org/abs/1906.06329)


## Installation

```
pip3 install tensornetwork
```
@@ -38,30 +37,32 @@ pip3 install tensornetwork

For details about the TensorNetwork API, see the [reference documentation.](https://tensornetwork.readthedocs.io)


## Tutorials

[Basic API tutorial](https://colab.research.google.com/drive/1Fp9DolkPT-P_Dkg_s9PLbTOKSq64EVSu)

[Tensor Networks inside Neural Networks using Keras](https://colab.research.google.com/github/google/TensorNetwork/blob/master/colabs/Tensor_Networks_in_Neural_Networks.ipynb)

## Basic Example

Here, we build a simple 2 node contraction.

```python
import numpy as np
import tensornetwork as tn

# Create the nodes
a = tn.Node(np.ones((10,)))
a = tn.Node(np.ones((10,)))
b = tn.Node(np.ones((10,)))
edge = a[0] ^ b[0] # Equal to tn.connect(a[0], b[0])
final_node = tn.contract(edge)
print(final_node.tensor) # Should print 10.0
```

## Optimized Contractions.
## Optimized Contractions

Usually, it is more computationally effective to flatten parallel edges before contracting them in order to avoid trace edges.
We have `contract_between` and `contract_parallel` that do this automatically for your convenience.
We have `contract_between` and `contract_parallel` that do this automatically for your convenience.

```python
# Contract all of the edges between a and b
@@ -70,13 +71,15 @@ c = tn.contract_between(a, b)
# This is the same as above, but much shorter.
c = a @ b

# Contract all of the edges that are parallel to edge
# Contract all of the edges that are parallel to edge
# (parallel means connected to the same nodes).
c = tn.contract_parallel(edge)
```

## Split Node
You can split a node by doing a singular value decomposition.

You can split a node by doing a singular value decomposition.

```python
# This will return two nodes and a tensor of the truncation error.
# The two nodes are the unitary matrices multiplied by the square root of the
@@ -88,9 +91,11 @@ u_s, vh_s, trun_error = tn.split_node(node, left_edges, right_edges)
u, s, vh, trun_error = tn.split_node_full_svd(node, left_edges, right_edges)
```

## Node and Edge names.
You can optionally name your nodes/edges. This can be useful for debugging,
## Node and Edge names

You can optionally name your nodes/edges. This can be useful for debugging,
as all error messages will print the name of the broken edge/node.

```python
node = tn.Node(np.eye(2), name="Identity Matrix")
print("Name of node: {}".format(node.name))
@@ -101,15 +106,19 @@ final_result = tn.contract(edge, name="Trace Of Identity")
print("Name of new node after contraction: {}".format(final_result.name))
```

## Named axes.
## Named axes

To make remembering what an axis does easier, you can optionally name a node's axes.

```python
a = tn.Node(np.zeros((2, 2)), axis_names=["alpha", "beta"])
edge = a["beta"] ^ a["alpha"]
```

## Edge reordering.
## Edge reordering

To assert that your result's axes are in the correct order, you can reorder a node at any time during computation.

```python
a = tn.Node(np.zeros((1, 2, 3)))
e1 = a[0]
@@ -121,8 +130,10 @@ a.reorder_edges([e3, e1, e2])
print(a.tensor.shape) # Should print (3, 1, 2)
```

## NCON interface.
## NCON interface

For a more compact specification of a tensor network and its contraction, there is `ncon()`. For example:

```python
from tensornetwork import ncon
a = np.ones((2, 2))
@@ -131,38 +142,43 @@ c = ncon([a, b], [(-1, 1), (1, -2)])
print(c)
```

## Different backend support.
## Different backend support

Currently, we support JAX, TensorFlow, PyTorch and NumPy as TensorNetwork backends.
We also support tensors with Abelian symmetries via a `symmetric` backend, see the [reference
documentation](https://tensornetwork.readthedocs.io/en/latest/block_sparse_tutorial.html) for more details.

To change the default global backend, you can do:

```python
tn.set_default_backend("jax") # tensorflow, pytorch, numpy, symmetric
```

Or, if you only want to change the backend for a single `Node`, you can do:

```python
tn.Node(tensor, backend="jax")
```

If you want to run your contractions on a GPU, we highly recommend using JAX, as it has the closest API to NumPy.
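
The snippet below is a minimal sketch of that setup, reusing only the calls shown above (`tn.set_default_backend`, `tn.Node`, `^`, and `tn.contract`) and assuming a GPU-enabled `jaxlib` build is installed; without one, JAX simply falls back to the CPU.

```python
import numpy as np
import tensornetwork as tn

# Assumes a GPU-enabled jaxlib install; otherwise JAX runs on the CPU.
tn.set_default_backend("jax")

a = tn.Node(np.ones((1000, 1000)))
b = tn.Node(np.ones((1000, 1000)))
edge = a[1] ^ b[0]          # Same as tn.connect(a[1], b[0])
result = tn.contract(edge)  # The contraction is dispatched to the JAX backend
print(result.tensor.shape)  # (1000, 1000)
```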

## Disclaimer

This library is in *alpha* and will be going through a lot of breaking changes. While releases will be stable enough for research, we do not recommend using this in any production environment yet.

TensorNetwork is not an official Google product. Copyright 2019 The TensorNetwork Developers.

## Citation

If you are using TensorNetwork for your research please cite this work using the following bibtex entry:

```
@misc{roberts2019tensornetwork,
title={TensorNetwork: A Library for Physics and Machine Learning},
title={TensorNetwork: A Library for Physics and Machine Learning},
author={Chase Roberts and Ashley Milsted and Martin Ganahl and Adam Zalcman and Bruce Fontaine and Yijian Zou and Jack Hidary and Guifre Vidal and Stefan Leichenauer},
year={2019},
eprint={1905.01330},
archivePrefix={arXiv},
primaryClass={physics.comp-ph}
}
```

3 changes: 2 additions & 1 deletion requirements-dev.txt
@@ -1,7 +1,8 @@
tensorflow
pytest
pytest-xdist
pytest-cov
torch
jax
jaxlib
pylint
black
21 changes: 10 additions & 11 deletions tensornetwork/backends/abstract_backend.py
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Callable, List, Optional, Sequence, Tuple, Type, Union
from typing import Any, Callable, Optional, Sequence, Tuple, Type, Union

import numpy as np

@@ -363,7 +363,7 @@ def eigh(self, matrix: Tensor):
def eigs(
self,
A: Callable,
args: Optional[List[Tensor]] = None,
args: Optional[list[Tensor]] = None,
initial_state: Optional[Tensor] = None,
shape: Optional[Tuple[int, ...]] = None,
dtype: Optional[Type[np.number]] = None, # pylint: disable=no-member
@@ -372,7 +372,7 @@ def eigs(
tol: float = 1e-8,
which: str = "LR",
maxiter: Optional[int] = None,
) -> Tuple[Tensor, List]:
) -> Tuple[Tensor, list]:
"""Arnoldi method for finding the lowest eigenvector-eigenvalue pairs
of a linear operator `A`. `A` is a callable implementing the
matrix-vector product. If no `initial_state` is provided then
@@ -413,7 +413,7 @@ def eigs(
def eigsh(
self,
A: Callable,
args: Optional[List[Tensor]] = None,
args: Optional[list[Tensor]] = None,
initial_state: Optional[Tensor] = None,
shape: Optional[Tuple[int, ...]] = None,
dtype: Optional[Type[np.number]] = None, # pylint: disable=no-member
@@ -422,7 +422,7 @@ def eigsh(
tol: float = 1e-8,
which: str = "LR",
maxiter: Optional[int] = None,
) -> Tuple[Tensor, List]:
) -> Tuple[Tensor, list]:
"""Lanczos method for finding the lowest eigenvector-eigenvalue pairs
of a symmetric (hermitian) linear operator `A`. `A` is a callable
implementing the matrix-vector product. If no `initial_state` is provided
@@ -463,7 +463,7 @@ def eigsh(
def eigsh_lanczos(
self,
A: Callable,
args: Optional[List[Tensor]] = None,
args: Optional[list[Tensor]] = None,
initial_state: Optional[Tensor] = None,
shape: Optional[Tuple[int, ...]] = None,
dtype: Optional[Type[np.number]] = None, # pylint: disable=no-member
@@ -473,7 +473,7 @@ def eigsh_lanczos(
delta: float = 1e-8,
ndiag: int = 20,
reorthogonalize: bool = False,
) -> Tuple[Tensor, List]:
) -> Tuple[Tensor, list]:
"""
Lanczos method for finding the lowest eigenvector-eigenvalue pairs
of `A`.
@@ -517,7 +517,7 @@ def gmres(
self,
A_mv: Callable,
b: Tensor,
A_args: Optional[List] = None,
A_args: Optional[list] = None,
A_kwargs: Optional[dict] = None,
x0: Optional[Tensor] = None,
tol: float = 1e-05,
@@ -639,8 +639,7 @@ def gmres(

x0 = self.reshape(x0, (N,))

if num_krylov_vectors > N:
num_krylov_vectors = N
num_krylov_vectors = min(num_krylov_vectors, N)

if tol < 0:
raise ValueError(f"tol = {tol} must be positive.")
@@ -668,7 +667,7 @@ def _gmres(
self,
A_mv: Callable,
b: Tensor,
A_args: List,
A_args: list,
A_kwargs: dict,
x0: Tensor,
tol: float,
4 changes: 2 additions & 2 deletions tensornetwork/backends/backend_factory.py
@@ -30,7 +30,7 @@
}

# we instantiate each backend only once and store it here
_INSTANTIATED_BACKENDS = dict()
_INSTANTIATED_BACKENDS = {}


def get_backend(
@@ -39,7 +39,7 @@ def get_backend(
if isinstance(backend, abstract_backend.AbstractBackend):
return backend
if backend not in _BACKENDS:
raise ValueError("Backend '{}' does not exist".format(backend))
raise ValueError(f"Backend '{backend}' does not exist")

if backend in _INSTANTIATED_BACKENDS:
return _INSTANTIATED_BACKENDS[backend]