
Commit

Merge branch 'master' of github.com:rsokl/MyGrad
davidmascharka committed Feb 6, 2022
2 parents 8bb311e + a781700 commit c83a1a2
Showing 46 changed files with 1,053 additions and 212 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/nightly.yml
@@ -0,0 +1,22 @@
name: Nightly

on:
schedule:
- cron: '0 2 * * *' # run at 2 AM UTC

jobs:
test-against-pre-releases-of-dependencies:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v1
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install tox tox-gh-actions
- name: Test with tox
run: tox -e pre-release
10 changes: 5 additions & 5 deletions .github/workflows/tox_run.yml
@@ -19,7 +19,7 @@ jobs:
strategy:
max-parallel: 3
matrix:
python-version: [3.7, 3.8]
python-version: [3.7, 3.8, 3.9]
fail-fast: false

steps:
@@ -57,11 +57,11 @@ jobs:
run: tox -e coverage


py39:
py310:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9"]
python-version: ["3.10"]
fail-fast: false

steps:
@@ -74,8 +74,8 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install tox
- name: Python 3.9
run: tox -e py39
- name: Python 3.10
run: tox -e py310

minimum_numpy:
runs-on: ubuntu-latest
17 changes: 12 additions & 5 deletions README.md
@@ -3,7 +3,7 @@
[![Documentation Status](https://readthedocs.org/projects/mygrad/badge/?version=latest)](https://mygrad.readthedocs.io/en/latest/?badge=latest)
[![Automated tests status](https://github.com/rsokl/MyGrad/workflows/Tests/badge.svg)](https://github.com/rsokl/MyGrad/actions?query=workflow%3ATests+branch%3Amaster)
[![PyPi version](https://img.shields.io/pypi/v/mygrad.svg)](https://pypi.python.org/pypi/mygrad)
![Python version support](https://img.shields.io/badge/python-3.7%20‐%203.9-blue.svg)
![Python version support](https://img.shields.io/badge/python-3.7%20‐%203.10-blue.svg)

# [MyGrad's Documentation](https://mygrad.readthedocs.io/en/latest/)

@@ -22,8 +22,15 @@ MyGrad is a lightweight library that adds automatic differentiation to NumPy –
array([2., 4., 6.])
```

MyGrad's primary goal is to make automatic differentiation an accessible and easy to use across the Python/NumPy ecosystem.
As such, it strives to behave and feel exactly like NumPy so that users need not learn yet another array-based math library.
MyGrad's primary goal is to make automatic differentiation accessible and easy to use across the Python/NumPy ecosystem.
As such, it strives to behave and feel exactly like NumPy so that users need not learn yet another array-based math library.
Of the various modes and flavors of auto-diff, MyGrad supports backpropagation from a scalar quantity.

Installing MyGrad:

```shell script
pip install mygrad
```

NumPy's ufuncs are richly supported; e.g. we can autodiff through in-place targets and boolean masks:

@@ -102,9 +109,9 @@ array([-1., 0., 10.])
The following is an example of using `mygrad` to compute the [hinge loss](https://en.wikipedia.org/wiki/Hinge_loss) of classification scores and to "backpropagate" through (compute the gradient of) this loss. This example demonstrates some of mygrad's ability to perform backpropagation through broadcasted operations, basic indexing, advanced indexing, and in-place assignments.

```python
>>> from mygrad import Tensor
>>> import mygrad as mg
>>> import numpy as np
>>> class_scores = Tensor(10 * np.random.rand(100, 10)) # 100 samples, 10 possible classes for each
>>> class_scores = 10 * mg.random.rand(100, 10) # 100 samples, 10 possible classes for each
>>> class_labels = np.random.randint(low=0, high=10, size=100) # correct label for each datum
>>> class_labels = (range(len(class_labels)), class_labels)
>>> correct_class_scores = class_scores[class_labels]
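The in-place and boolean-mask ufunc support mentioned above can be sketched as follows. This is an illustrative example (the mask values and the expected gradient are chosen for illustration), not the snippet elided from the README:

```python
import mygrad as mg
import numpy as np

x = mg.tensor([1.0, 2.0, 3.0])
y = mg.zeros_like(x)

# compute x * x, writing the result in-place into `y`, only where the mask is True
np.multiply(x, x, where=[True, False, True], out=y)

y.backward()  # back-propagate; masked-out entries contribute no gradient
x.grad        # expected: array([2., 0., 6.])
```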
6 changes: 3 additions & 3 deletions docs/requirements.txt
@@ -1,7 +1,7 @@
numpy==1.18.1
numba==0.51.2
llvmlite==0.34.0
sphinx==3.0.4
numpydoc>=1.0.0
sphinx-rtd-theme==0.5.0
sphinx==3.5.4
numpydoc==1.1.0
sphinx-rtd-theme==0.5.2
matplotlib>=3.0.0
42 changes: 42 additions & 0 deletions docs/source/changes.rst
@@ -6,6 +6,48 @@ This is a record of all past mygrad releases and what went into them,
in reverse chronological order. All previous releases should still be available
on pip.

.. _v2.1.0:

------------------
2.1.0 - 2022-01-01
------------------

New Functions and Utilities
---------------------------

The following differentiable functions are now supported by MyGrad, and "drop-in" overrides for their NumPy counterparts are supported as well.

- :func:`~mygrad.atleast_1d`
- :func:`~mygrad.atleast_2d`
- :func:`~mygrad.atleast_3d`

Basic tensor save/load functionality has been added (thanks to @kw-0).

- :func:`~mygrad.save`
- :func:`~mygrad.load`

Improvements
------------

- :func:`~mygrad.clip` and ``Tensor.clip`` now accept an ``out`` target, permitting in-place operations.
- The method ``Tensor.__index__()`` is now implemented, which permits scalar integer-valued tensors to be used to index into Python sequences.
- Added Python 3.10 to our automated test matrix.

Compatibility-Breaking Changes
------------------------------

- In accordance with `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_ we are dropping support for NumPy versions below 1.19. However, MyGrad will not drop support for Python 3.7; to remain as lightweight and flexible as possible we will support minor versions of Python up until their EOL or until our minimal NumPy dependency drops support -- whichever occurs first.
- The interface to :func:`~mygrad.arange` was changed from ``arange(start, stop=None, step=None, ...)`` to ``arange([start,] stop[, step,], ...)``. This provides exact parity with NumPy's arange function.
- The derivatives of :func:`~mygrad.absolute` and :func:`~mygrad.linalg.norm` have been revised such that in cases where the derivatives used to be ``nan``, those entries will now be ``0``. Both functions can now be passed ``nan_to_num=False`` to enable the previous, more rigorous behavior. See `PR #379 <https://github.com/rsokl/MyGrad/pull/379>`_ for more details.

.. _v2.0.2:

------------------
2.0.2 - 2021-04-10
------------------

Exposes :func:`~mygrad.execute_op` at top-level namespace

.. _v2.0.1:

------------------
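A hedged sketch of how the 2.1.0 additions listed above might be exercised; the call signatures are assumed to mirror the NumPy counterparts they override:

```python
import mygrad as mg
import numpy as np

# "drop-in" override: passing a Tensor to np.atleast_1d yields a Tensor
x = mg.tensor(3.0)
np.atleast_1d(x)             # assumed equivalent to mg.atleast_1d(x)

# clip now accepts an `out` target, permitting in-place operation
y = mg.tensor([-1.0, 0.5, 2.0])
mg.clip(y, 0.0, 1.0, out=y)  # y is clipped to [0, 1] in-place

# scalar integer-valued tensors can index into Python sequences via __index__
idx = mg.tensor(2)
"abc"[idx]                   # -> 'c'
```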
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -88,7 +88,7 @@


def setup(app):
app.add_stylesheet("my_theme.css")
app.add_css_file("my_theme.css")
# app.add_javascript("https://www.googletagmanager.com/gtag/js?id=UA-115029372-1")
# app.add_javascript("gtag.js")

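For context on this one-line change: `Sphinx.add_stylesheet` was deprecated in Sphinx 1.8 in favor of `Sphinx.add_css_file` and removed in Sphinx 4.0, so the updated hook is simply:

```python
def setup(app):
    # add_css_file supersedes the deprecated add_stylesheet
    app.add_css_file("my_theme.css")
```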
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.atleast_1d.rst
@@ -0,0 +1,6 @@
mygrad.atleast_1d
=================

.. currentmodule:: mygrad

.. autofunction:: atleast_1d
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.atleast_2d.rst
@@ -0,0 +1,6 @@
mygrad.atleast_2d
=================

.. currentmodule:: mygrad

.. autofunction:: atleast_2d
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.atleast_3d.rst
@@ -0,0 +1,6 @@
mygrad.atleast_3d
=================

.. currentmodule:: mygrad

.. autofunction:: atleast_3d
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.load.rst
@@ -0,0 +1,6 @@
mygrad.load
===========

.. currentmodule:: mygrad

.. autofunction:: load
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.save.rst
@@ -0,0 +1,6 @@
mygrad.save
===========

.. currentmodule:: mygrad

.. autofunction:: save
20 changes: 12 additions & 8 deletions docs/source/index.rst
@@ -5,27 +5,30 @@
MyGrad
======
MyGrad is a lightweight library that adds automatic differentiation to NumPy – its only dependency is NumPy!
MyGrad is a lightweight library that adds automatic differentiation to NumPy – its only
dependency is NumPy. Simply "drop in" a MyGrad tensor into your NumPy-based code, and
start differentiating!

.. code:: python
.. code-block:: pycon
>>> import mygrad as mg
>>> import numpy as np
>>> x = mg.tensor([1., 2., 3.]) # like numpy.array, but supports backprop!
>>> f = np.sum(x * x) # tensors work with numpy functions!
>>> x = mg.tensor([1., 2., 3.]) # like numpy.array, but supports backprop
>>> f = np.sum(x * x) # tensors can be passed directly to native numpy functions!
>>> f.backward() # triggers automatic differentiation
>>> x.grad # stores [df/dx0, df/dx1, df/dx2]
array([2., 4., 6.])
MyGrad's primary goal is to make automatic differentiation an accessible and easy to use across the Python/NumPy ecosystem.
As such, it strives to behave and feel exactly like NumPy so that users need not learn yet another array-based math library.
Of the various modes and flavors of auto-diff, MyGrad supports backpropagation from a scalar quantity.

NumPy's ufuncs are richly supported. We can even differentiate through an operation that occurs in-place on a tensor and applies a boolean mask to
the results:

.. code:: python
.. code-block:: pycon
>>> x = mg.tensor([1., 2., 3.])
>>> y = mg.zeros_like(x)
@@ -39,7 +42,7 @@ NumPy's `view semantics <https://www.pythonlikeyoumeanit.com/Module3_Introducing
indexing and similar operations on tensors will produce a "view" of that tensor's data, thus a tensor and its view share memory.
This relationship will also manifest between the derivatives stored by a tensor and its views!

.. code:: python
.. code-block:: pycon
>>> x = mg.arange(9.).reshape(3, 3)
>>> diag_view = np.einsum("ii->i", x) # returns a view of the diagonal elements of `x`
@@ -74,7 +77,7 @@ This relationship will also manifest between the derivatives stored by a tensor
Basic and advanced indexing is fully supported

.. code:: python
.. code-block:: pycon
>>> (x[x < 4] ** 2).backward()
>>> x.grad
@@ -86,7 +89,7 @@ Basic and advanced indexing is fully supported
NumPy arrays and other array-likes play nicely with MyGrad's tensor. These behave like constants
during automatic differentiation

.. code:: python
.. code-block:: pycon
>>> x = mg.tensor([1., 2., 3.])
>>> constant = [-1., 0., 10] # can be a numpy array, list, or any other array-like
@@ -113,5 +116,6 @@ during automatic differentiation
math
indexing
nnet
io
graph_viz
changes
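An illustrative sketch of the "constants" behavior described in this file -- array-likes participate in the arithmetic but accrue no gradient (the particular numbers are chosen for illustration):

```python
import mygrad as mg

x = mg.tensor([2.0, 4.0, 8.0])
constant = [-1.0, 0.0, 10.0]  # a list/ndarray behaves as a constant

ℒ = (x * constant).sum()      # broadcast multiply, then reduce to a scalar
ℒ.backward()

x.grad                        # dℒ/dx is simply `constant`
# array([-1.,  0., 10.])
# no gradient is tracked for `constant` itself -- it is not a Tensor
```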
11 changes: 8 additions & 3 deletions docs/source/install.rst
@@ -24,6 +24,11 @@ navigate to the MyGrad directory, then run:
Support for Python and NumPy
----------------------------
MyGrad abides by the `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_ recommendation, and adopts
a common “time window-based” policy for support of Python and NumPy versions.

Accordingly, MyGrad's drop schedule for Python and NumPy can be found `here <https://numpy.org/neps/nep-0029-deprecation_policy.html#drop-schedule>`_.
a common “time window-based” policy for support of NumPy versions. Accordingly, MyGrad's drop schedule for NumPy versions can be found `here <https://numpy.org/neps/nep-0029-deprecation_policy.html#drop-schedule>`_.

Note, however, that MyGrad will maintain a wider window of support for minor Python
versions than is specified by NEP 29. Because our only dependency is NumPy, and because
we strive to remain an exceptionally lightweight and flexible dependency to our users,
we will support minor versions of Python until their end of life, *or* until our lowest
supported version of NumPy drops support for that version of Python -- whichever occurs
first.
1 change: 1 addition & 0 deletions docs/source/intro.rst
@@ -19,6 +19,7 @@ MyGrad is a lightweight library that adds automatic differentiation to NumPy –
Its primary goal is to make automatic differentiation accessible and easy to use across the Python/NumPy ecosystem.
As such, it strives to behave and feel exactly like NumPy so that users need not learn yet another array-based math library.
You can pass MyGrad's :class:`~mygrad.Tensor` to NumPy's functions in order to make them differentiable!
Of the various modes and flavors of auto-diff, MyGrad supports backpropagation from a scalar quantity.


A Simple Application
12 changes: 12 additions & 0 deletions docs/source/io.rst
@@ -0,0 +1,12 @@
Input and Output
****************

.. currentmodule:: mygrad

NumPy binary files (NPY, NPZ)
-----------------------------
.. autosummary::
:toctree: generated/

save
load
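A brief sketch of the new save/load utilities; their signatures are assumed to mirror `numpy.save` / `numpy.load`, per the changelog entry in this commit:

```python
import numpy as np
import mygrad as mg

x = mg.tensor([1.0, 2.0, 3.0])

mg.save("x.npy", x)          # write the tensor to a NumPy binary file
loaded = mg.load("x.npy")    # read it back as a mygrad Tensor

np.allclose(loaded, x)       # expected: True
```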
31 changes: 18 additions & 13 deletions docs/source/operation.rst
@@ -1,14 +1,16 @@
Writing Your Own Operations
***************************

Let's write our own "multiply" operation.
Let's write our own "multiply" operation. There are two components to doing this:
- Defining an operation class (a subclass of :class:`~mygrad.operation_base.Operation`)
- Writing a function that ultimately calls ``mygrad.execute_op(YourOp, ...)``

.. code:: python
import numpy as np
import mygrad as mg
from mygrad import prepare_op
from mygrad import execute_op
from mygrad.operation_base import Operation
from mygrad.typing import ArrayLike
@@ -59,6 +61,9 @@ Let's write our own "multiply" operation.
x_arr = x.data
y_arr = y.data
# The operation need not incorporate specialized logic for
# broadcasting. The appropriate sum-reductions will be performed
# by MyGrad's autodiff system.
if index == 0: # backprop through a
return grad * y.data # ∂ℒ/∂x = (∂ℒ/∂f)(∂f/∂x)
elif index == 1: # backprop through b
@@ -67,22 +72,22 @@ Let's write our own "multiply" operation.
# Our function stitches together our operation class with the
# operation arguments via `mygrad.prepare_op`
def custom_multiply(x: ArrayLike, y: ArrayLike) -> mg.Tensor:
# `prepare_op` will take care of casting `x` and `y` to tensors if
# they are not already tensors.
return prepare_op(CustomMultiply, x, y)
def custom_multiply(x: ArrayLike, y: ArrayLike, constant=None) -> mg.Tensor:
# `execute_op` will take care of:
# - casting `x` and `y` to tensors if they are instead array-likes
# - propagating 'constant' status to the resulting output based on the inputs
# - handling in-place operations (specified via the `out` parameter)
return execute_op(CustomMultiply, x, y, constant=constant)
We can now use our differentiable function! It will automatically be compatible
with broadcasting; out operation need not account for broadcasting in either the
forward pass or the backward pass.
We can now use our differentiable function!

.. code:: pycon
>> x = mg.tensor(2.0)
>> y = mg.tensor([1.0, 2.0, 3.0])
>>> x = mg.tensor(2.0)
>>> y = mg.tensor([1.0, 2.0, 3.0])
>> custom_multiply(x, y).backward()
>> x.grad, y.grad
>>> custom_multiply(x, y).backward()
>>> x.grad, y.grad
(array(6.), array([2., 2., 2.]))
Documentation for mygrad.Operation
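Assembling the pieces shown in this hunk into one self-contained sketch. The `__call__`/`backward_var` hooks follow the pattern from MyGrad's operation documentation, but parts of the class body are truncated in this diff, so treat the details as illustrative rather than definitive:

```python
import numpy as np

import mygrad as mg
from mygrad import execute_op
from mygrad.operation_base import Operation
from mygrad.typing import ArrayLike


class CustomMultiply(Operation):
    """Performs f(x, y) = x * y elementwise."""

    def __call__(self, x, y) -> np.ndarray:
        # stash the input tensors for use during the backward pass
        self.variables = (x, y)
        return x.data * y.data

    def backward_var(self, grad, index, **kwargs):
        x, y = self.variables
        # no specialized broadcasting logic is needed here; MyGrad performs
        # the appropriate sum-reductions automatically
        if index == 0:    # ∂ℒ/∂x = (∂ℒ/∂f)(∂f/∂x)
            return grad * y.data
        elif index == 1:  # ∂ℒ/∂y = (∂ℒ/∂f)(∂f/∂y)
            return grad * x.data


def custom_multiply(x: ArrayLike, y: ArrayLike, constant=None) -> mg.Tensor:
    # `execute_op` casts array-likes to tensors, propagates `constant`,
    # and stitches CustomMultiply into the computational graph
    return execute_op(CustomMultiply, x, y, constant=constant)


x = mg.tensor(2.0)
y = mg.tensor([1.0, 2.0, 3.0])
custom_multiply(x, y).backward()
x.grad, y.grad   # expected: (array(6.), array([2., 2., 2.]))
```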
10 changes: 5 additions & 5 deletions docs/source/tensor.rst
@@ -61,17 +61,17 @@ graph - the graph is constructed as we carry out the forward-pass computation.
>>> ℒ = 2 * x + y ** 2

Invoking ``ℒ.backward()`` signals the computational graph to
compute the total-derivative of ``f`` with respect to each one of its dependent
compute the total-derivative of ``ℒ`` with respect to each one of its dependent
variables. I.e. ``x.grad`` will store ``dℒ/dx`` and ``y.grad`` will store
``dℒ/dy``. Thus we have back-propagated a gradient from ``f`` through our graph.
``dℒ/dy``. Thus we have back-propagated a gradient from ``ℒ`` through our graph.

Each tensor of derivatives is computed elementwise. That is, if ``x = Tensor(x0, x1, x2)``,
then ``dℒ/dx`` represents ``[dℒ/d(x0), dℒ/d(x1), dℒ/d(x2)]``

>>> ℒ.backward() # computes df/dx and df/dy
>>> x.grad # df/dx
>>> ℒ.backward() # computes dℒ/dx and dℒ/dy
>>> x.grad # dℒ/dx
array(6.0)
>>> y.grad # df/dy
>>> y.grad # dℒ/dy
array(4.0)
>>> ℒ.grad
array(1.0) # dℒ/dℒ
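An illustrative sketch of the elementwise derivatives described above (not taken from the file itself):

```python
import mygrad as mg

x = mg.tensor([1.0, 2.0, 3.0])
ℒ = (x ** 2).sum()   # ℒ = x0² + x1² + x2²
ℒ.backward()

x.grad               # [dℒ/d(x0), dℒ/d(x1), dℒ/d(x2)] = 2 * x
# array([2., 4., 6.])
```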