Commit ab147a0

Fix doc build errors
shi-eric committed Sep 30, 2024
1 parent 7f5acd7 commit ab147a0
Showing 2 changed files with 2 additions and 3 deletions.
1 change: 0 additions & 1 deletion docs/conf.py
@@ -68,7 +68,6 @@
     "numpy": ("https://numpy.org/doc/stable", None),
     "jax": ("https://jax.readthedocs.io/en/latest", None),
     "pytorch": ("https://pytorch.org/docs/stable", None),
-    "paddle": ("https://www.paddlepaddle.org.cn/", None),
 }
 
 extlinks = {
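
The dictionary touched by this hunk is presumably Sphinx's ``intersphinx_mapping`` (the assignment line sits above the visible context, so the name is an assumption). Intersphinx downloads an ``objects.inv`` inventory from each mapped URL at build time, so a URL that does not serve one is a plausible cause of the build errors this commit fixes. A sketch of the mapping after the change::

    # docs/conf.py -- assumed surrounding context; only the entries visible in
    # the hunk above are certain, and the variable name is inferred.
    intersphinx_mapping = {
        "numpy": ("https://numpy.org/doc/stable", None),
        "jax": ("https://jax.readthedocs.io/en/latest", None),
        "pytorch": ("https://pytorch.org/docs/stable", None),
    }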
4 changes: 2 additions & 2 deletions docs/modules/interoperability.rst
@@ -768,7 +768,7 @@ To convert a Paddle CUDA stream to a Warp CUDA stream and vice versa, Warp provi
 .. autofunction:: warp.stream_from_paddle
 
 Example: Optimization using ``warp.from_paddle()``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 An example usage of minimizing a loss function over an array of 2D points written in Warp via Paddle's Adam optimizer
 using :func:`warp.from_paddle` is as follows::
@@ -812,7 +812,7 @@
     print(f"{i}\tloss: {l.item()}")
 
 Example: Optimization using ``warp.to_paddle``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Less code is needed when we declare the optimization variables directly in Warp and use :func:`warp.to_paddle` to convert them to Paddle tensors.
 Here, we revisit the same example from above where now only a single conversion to a paddle tensor is needed to supply Adam with the optimization variables::
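
The example bodies referenced in both hunks are collapsed in this diff. As a rough illustration of the pattern the surrounding prose describes, gradients computed by a Warp kernel and parameter updates applied by Paddle's Adam, a minimal sketch using :func:`warp.from_paddle` and ``wp.Tape`` might look like the following (the kernel body, sizes, and hyperparameters here are illustrative, not the code hidden behind the fold)::

    import warp as wp
    import paddle

    @wp.kernel
    def loss_kernel(xs: wp.array(dtype=float, ndim=2), l: wp.array(dtype=float)):
        # accumulate the squared distance of each 2D point from the origin
        tid = wp.tid()
        wp.atomic_add(l, 0, xs[tid, 0] * xs[tid, 0] + xs[tid, 1] * xs[tid, 1])

    # optimization variables live on the Paddle side
    xs = paddle.randn([100, 2], dtype="float32")
    xs.stop_gradient = False      # let Adam update xs
    l = paddle.zeros([1], dtype="float32")
    l.stop_gradient = False       # the loss needs a gradient buffer to seed

    opt = paddle.optimizer.Adam(learning_rate=0.1, parameters=[xs])

    # Warp views of the Paddle tensors (gradient buffers are shared as well)
    wp_xs = wp.from_paddle(xs)
    wp_l = wp.from_paddle(l)

    tape = wp.Tape()
    with tape:
        # record the loss kernel launch once; replay its adjoint each iteration
        wp.launch(loss_kernel, dim=xs.shape[0], inputs=[wp_xs], outputs=[wp_l], device=wp_xs.device)

    for i in range(100):
        tape.zero()               # clear the shared gradient buffers
        tape.backward(loss=wp_l)  # Warp writes d(loss)/d(xs) into the shared grads
        opt.step()                # Paddle's Adam updates xs in place

        # re-run the kernel only to report the current loss value
        wp_l.zero_()
        wp.launch(loss_kernel, dim=xs.shape[0], inputs=[wp_xs], outputs=[wp_l], device=wp_xs.device)
        print(f"{i}\tloss: {l.item()}")

The ``warp.to_paddle`` example in the second hunk goes the other way: the optimization variables are allocated once as a ``wp.array`` with ``requires_grad=True`` and ``wp.to_paddle`` hands that same storage to Adam, so only a single conversion is needed.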
