Add links to the paper (#63)
* Add link to the paper in UPGrad's docstring
* Add link to the paper in README.md
* Add citation section at the end of README.md
* Add hint admonition with link to paper in iwrm.rst
* Add link to the paper in the main documentation page
* Update link to paper in CHANGELOG.md
* Add hint with link to the paper in aggregation index
ValerianRey authored Jun 25, 2024
1 parent 5b9790c commit fe46d3d
Showing 6 changed files with 27 additions and 4 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -38,7 +38,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `Sum` to sum the rows of the matrix.
- `TrimmedMean` from [Byzantine-Robust Distributed Learning: Towards
Optimal Statistical Rates](https://proceedings.mlr.press/v80/yin18a/yin18a.pdf).
-- `UPGrad` from [Jacobian Descent for Multi-Objective Optimization](https://arxiv.org/search/?query=jacobian+descent+for+multi-objective+optimization&searchtype=all&source=header).
+- `UPGrad` from [Jacobian Descent for Multi-Objective Optimization](https://arxiv.org/pdf/2406.16232).
- `backward` function to perform a step of Jacobian descent.
- Documentation of the public API and of some usage examples.
- Tests:
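The `backward` function and the aggregators listed above are the core of the API. As a rough sketch (not the library's own `backward` implementation), the snippet below builds the Jacobian of a made-up two-objective problem by hand with `torch.autograd.grad` and reduces it with the `UPGrad` aggregator, assuming the callable aggregator interface from the docs; the toy losses and the step size are illustrative only.

```python
import torch
from torchjd.aggregation import UPGrad

# Made-up two-objective problem: one shared parameter vector, two scalar losses.
params = torch.nn.Parameter(torch.randn(10))
losses = [(params ** 2).sum(), (params - 1.0).abs().sum()]

# Jacobian of the loss vector w.r.t. the parameters, one row per objective.
rows = [torch.autograd.grad(loss, params, retain_graph=True)[0] for loss in losses]
jacobian = torch.stack(rows)   # shape: (num_objectives, num_params)

# Reduce the rows to a single update direction with an aggregator.
update = UPGrad()(jacobian)    # shape: (num_params,)

# One manual step of Jacobian descent (illustrative step size).
with torch.no_grad():
    params -= 0.01 * update
```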
13 changes: 12 additions & 1 deletion README.md
@@ -1,6 +1,6 @@
# ![image](docs/source/icons/favicon-32x32.png) TorchJD

-TorchJD is a library enabling Jacobian descent with PyTorch, for optimization of neural networks
+TorchJD is a library enabling [Jacobian descent](https://arxiv.org/pdf/2406.16232) with PyTorch, for optimization of neural networks
with multiple objectives.

> [!IMPORTANT]
@@ -25,3 +25,14 @@ TorchJD requires python 3.10, 3.11 or 3.12. It is only compatible with recent ve
## Contribution

Please read the [Contribution page](CONTRIBUTING.md).

## Citation
If you use TorchJD for your research, please cite:
```
@article{jacobian_descent,
title={Jacobian Descent For Multi-Objective Optimization},
author={Quinton, Pierre and Rey, Valérian},
journal={arXiv preprint arXiv:2406.16232},
year={2024}
}
```
4 changes: 4 additions & 0 deletions docs/source/docs/aggregation/index.rst
@@ -27,3 +27,7 @@ This package provides several aggregators from the literature:
sum.rst
trimmed_mean.rst
upgrad.rst

.. hint::
Most of these aggregators are analyzed theoretically in `Jacobian Descent For Multi-Objective
Optimization <https://arxiv.org/pdf/2406.16232>`_.
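For a quick feel of the shared interface, here is a small sketch assuming the callable aggregator interface from the docs, with `Sum` and `UPGrad` taken from the changelog entry above: each aggregator maps a matrix with one gradient row per objective to a single vector.

```python
import torch
from torchjd.aggregation import Sum, UPGrad

# A made-up 2x3 "Jacobian": two objectives, three parameters.
jacobian = torch.tensor([[-4.0, 1.0, 1.0],
                         [ 6.0, 1.0, 1.0]])

for aggregator in (Sum(), UPGrad()):
    print(aggregator, aggregator(jacobian))  # each returns a vector of size 3
```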
5 changes: 5 additions & 0 deletions docs/source/examples/iwrm.rst
@@ -5,6 +5,11 @@ This example shows how to use TorchJD to minimize the vector of per-instance los
paradigm, called IWRM, is multi-objective, as opposed to the usual empirical risk minimization
(ERM), which seeks to minimize the average loss.

.. hint::
A proper definition of IWRM and its empirical results on some deep learning tasks are
available in `Jacobian Descent For Multi-Objective Optimization
<https://arxiv.org/pdf/2406.16232>`_.

For the sake of the example, we generate a fake dataset consisting of 8 batches of 16 random input
vectors of dimension 10, and their corresponding scalar labels. We train a very simple regression
model to retrieve the label from the corresponding input. To minimize the average loss, we use
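To make the IWRM/ERM distinction concrete, here is a rough sketch with made-up data of the shape described in the example (one batch of 16 inputs of dimension 10): ERM descends on `mean_loss`, while IWRM treats each entry of `losses` as its own objective and aggregates their gradients, for instance with `UPGrad`.

```python
import torch
from torch.nn import Linear, MSELoss

# Made-up toy batch: 16 random inputs of dimension 10 and scalar labels.
x = torch.randn(16, 10)
y = torch.randn(16)

model = Linear(10, 1)
loss_fn = MSELoss(reduction="none")        # keep one loss per instance, not the mean

losses = loss_fn(model(x).squeeze(-1), y)  # shape (16,): the IWRM objective vector
mean_loss = losses.mean()                  # the single ERM objective, for comparison
```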
4 changes: 3 additions & 1 deletion docs/source/index.rst
@@ -17,7 +17,9 @@
|
TorchJD is a library enabling Jacobian descent with PyTorch, for optimization of neural networks
-with multiple objectives.
+with multiple objectives. It is based on the theory from `Jacobian Descent For Multi-Objective
+Optimization <https://arxiv.org/pdf/2406.16232>`_ and it contains algorithms from many other related
+papers.

.. important::
This library is currently in an early development stage. The API is subject to significant changes
3 changes: 2 additions & 1 deletion src/torchjd/aggregation/upgrad.py
@@ -14,7 +14,8 @@
class UPGrad(_WeightedAggregator):
"""
:class:`~torchjd.aggregation.bases.Aggregator` that projects each row of the input matrix onto
-the dual cone of all rows of this matrix, and that combines the result.
+the dual cone of all rows of this matrix, and that combines the result, as proposed in
+`Jacobian Descent For Multi-Objective Optimization <https://arxiv.org/pdf/2406.16232>`_.
:param pref_vector: The preference vector used to combine the projected rows. If not provided,
defaults to the simple averaging of the projected rows.
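A small sanity-check sketch of the docstring's claim (the example matrix is made up): since each row is projected onto the dual cone of all rows before being combined, the aggregated update should have a non-negative inner product with every row, whereas plain averaging need not.

```python
import torch
from torchjd.aggregation import UPGrad

# Two conflicting gradient rows: plain averaging would move against the first one.
jacobian = torch.tensor([[-4.0, 1.0, 1.0],
                         [ 6.0, 1.0, 1.0]])

update = UPGrad()(jacobian)

print(jacobian @ update)               # both entries should be >= 0 (up to solver tolerance)
print(jacobian @ jacobian.mean(dim=0)) # the mean direction gives a negative first entry here
```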
