
Add MIMO, NotMNIST, improve coverage, and Misc #42

Merged Aug 29, 2023 · 43 commits

Commits
d83a3d1
:lipstick: Change two title levels
o-laurent Aug 14, 2023
ddd1622
:hammer: Load weights on cpu & refine ReadMe
o-laurent Aug 15, 2023
66b9995
:ok_hand: Fix flake8 error
o-laurent Aug 15, 2023
214203e
:bug: Fix deep ens model & improve docs
o-laurent Aug 16, 2023
e3f2f56
:white_check_mark: Add tests for de
o-laurent Aug 16, 2023
7e514e5
:heavy_check_mark: Switch to u20.04 to fix tests internet access
o-laurent Aug 16, 2023
9c4241f
Revert ":heavy_check_mark: Switch to u20.04 to fix tests internet acc…
o-laurent Aug 16, 2023
d56062e
:heavy_check_mark: Improve test coverage
o-laurent Aug 16, 2023
3063f14
:white_check_mark: Continue improving test coverage
o-laurent Aug 16, 2023
2e76869
:white_check_mark: Improve classification cov. & fix logits
o-laurent Aug 16, 2023
2af2201
:book: Format tutorials
o-laurent Aug 16, 2023
c3dcfdd
:book: Slightly improve the contribution page
o-laurent Aug 16, 2023
dec88f7
:shirt: Fix title line lengths
o-laurent Aug 16, 2023
d1865d9
:zap: Update packs. & switch to sph-gal 0.14
o-laurent Aug 20, 2023
0c44a40
:hammer: Refactor calibration methods dir
o-laurent Aug 22, 2023
f252ac1
:shirt: Improve losses and add somes tests
o-laurent Aug 22, 2023
3f797f2
:sparkles: Add notMNIST dataset
o-laurent Aug 22, 2023
cdff4d0
:sparkles: Add notMNIST to MNIST's oods
o-laurent Aug 23, 2023
de31452
:sparkles: Add dropout to Resnet std
o-laurent Aug 23, 2023
ace691a
:book: Update README.md
o-laurent Aug 24, 2023
2855a70
:hammer: Switch to explicit binary metrics in cls routine
o-laurent Aug 24, 2023
e00f34a
Merge branch 'dev' of github.com:ENSTA-U2IS/torch-uncertainty into dev
o-laurent Aug 24, 2023
2386247
:fire: Remove the Gen. Jensen-Shannon div.
o-laurent Aug 24, 2023
f6051c8
:white_check_mark: Improve tests
o-laurent Aug 24, 2023
41773ee
:white_check_mark: Improve tests
o-laurent Aug 24, 2023
126fcd2
:white_check_mark: Improve WRN tests
o-laurent Aug 24, 2023
2a85366
:sparkles: Add Mixup and Cutmix to cls routine
o-laurent Aug 25, 2023
d3d499c
:bug: Fix Stochastic model
o-laurent Aug 25, 2023
98aa65b
:white_check_mark: Improve coverage
o-laurent Aug 25, 2023
c25a61f
:ok_hand: Fix E261
o-laurent Aug 25, 2023
368c1e9
:white_check_mark: Finish bayesian test coverage
o-laurent Aug 25, 2023
009b642
:sparkles: Include MIMO version for ResNet and WideResNet
alafage Aug 25, 2023
c6e9333
:books: Update README.md
alafage Aug 25, 2023
61eb18e
:bug: Attempt at fixing test error on GitHub
alafage Aug 25, 2023
f8829c6
:bug: Another attempt
alafage Aug 25, 2023
a0f1348
Merge pull request #41 from ENSTA-U2IS/add-mimo
o-laurent Aug 25, 2023
588a1a0
:white_check_mark: Fix tests
o-laurent Aug 25, 2023
a9ad38e
:wrench: Switch PR event workflow ; closes #29
o-laurent Aug 25, 2023
f98d2c8
:shirt: Remove warning for format_batch_fn double save
o-laurent Aug 25, 2023
54ffef3
:wrench: Properly exclude tests in coverage
o-laurent Aug 25, 2023
3f49f11
:wrench: Properly exclude tests from cov.
o-laurent Aug 25, 2023
13a6439
:white_check_mark: Add tests for new transforms
alafage Aug 26, 2023
9ed8c4c
:sparkles: Add a CLI arg to enable resuming training
o-laurent Aug 26, 2023
2 changes: 1 addition & 1 deletion .coveragerc
@@ -1,7 +1,7 @@
[run]
branch = True
include = */torch-uncertainty/*
omit = *tests*, */datasets/*, setup.py
omit = */tests/*, */datasets/*, setup.py

[report]
exclude_lines =
4 changes: 2 additions & 2 deletions .github/workflows/run-tests.yml
@@ -5,7 +5,7 @@ on:
branches:
- main
- dev
pull_request_target:
pull_request:
schedule:
- cron: "42 7 * * 0"
workflow_dispatch:
@@ -76,7 +76,7 @@ jobs:

- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
if: ${{ github.event_name != 'pull_request_target' }}
if: ${{ github.event_name != 'pull_request' }}
continue-on-error: true
with:
token: ${{ secrets.CODECOV_TOKEN }}
14 changes: 7 additions & 7 deletions CONTRIBUTING.md
@@ -5,7 +5,7 @@ contributors to help us build a comprehensive library for uncertainty
quantification in PyTorch.

We are particularly open to any comment that you would have on this project.
In particular, we are open to changing these guidelines as the project evolves.
Specifically, we are open to changing these guidelines as the project evolves.

## The scope of TorchUncertainty

@@ -21,7 +21,7 @@ Monte Carlo dropout, ensemble methods, etc.

If you are interested in contributing to torch_uncertainty, we first advise you
to follow the following steps to reproduce a clean development environment
ensuring continuous integration does not break.
ensuring that continuous integration does not break.

1. Install poetry on your workstation.
2. Clone the repository.
@@ -37,21 +37,21 @@ poetry install --with dev
pre-commit install
```

We are using black for code formatting, flake8 for linting, and isort for the
imports. The pre-commit hooks will ensure that your code is properly formatted
We are using `black` for code formatting, `flake8` for linting, and `isort` for the
imports. The `pre-commit` hooks will ensure that your code is properly formatted
and linted before committing.

Before submitting a final pull request, that we will review, please try your
best not to reduce the code coverage and do document your code.
best not to reduce the code coverage and document your code.

If you implement a method, please add a reference to the corresponding paper in the ["References" page](https://torch-uncertainty.github.io/references.html).

### Post-processing methods

For now, we intend to follow scikit-learn style API for post-processing
methods (except that we use a validation dataloader for now). You can get
methods (except that we use a validation dataset for now). You can get
inspiration from the already implemented
[temperature-scaling](https://github.com/ENSTA-U2IS/torch-uncertainty/blob/dev/torch_uncertainty/post_processing/temperature_scaler.py).
[temperature-scaling](https://github.com/ENSTA-U2IS/torch-uncertainty/blob/dev/torch_uncertainty/post_processing/calibration/temperature_scaler.py).
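The scikit-learn-style fit/predict split described above can be sketched in plain PyTorch. This is a minimal illustration of temperature scaling under that API style, not the library's actual `TemperatureScaler` implementation; the class name and toy logits/labels here are purely illustrative:

```python
import torch
from torch import nn, optim


class TemperatureScalerSketch(nn.Module):
    """Minimal temperature-scaling sketch (hypothetical, for illustration only).

    `fit` tunes the single temperature parameter on held-out validation
    logits by minimizing the NLL; `forward` rescales logits at test time.
    """

    def __init__(self) -> None:
        super().__init__()
        self.temperature = nn.Parameter(torch.ones(1))

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Dividing logits by T > 1 flattens the softmax distribution.
        return logits / self.temperature

    def fit(self, logits: torch.Tensor, labels: torch.Tensor) -> "TemperatureScalerSketch":
        nll = nn.CrossEntropyLoss()
        optimizer = optim.LBFGS([self.temperature], lr=0.1, max_iter=50)

        def closure():
            optimizer.zero_grad()
            loss = nll(self(logits), labels)
            loss.backward()
            return loss

        optimizer.step(closure)
        return self


# Toy validation set: confident logits, but half the predictions are wrong,
# so the fitted temperature should grow above 1 to soften the confidence.
val_logits = torch.tensor([[4.0, 0.0], [0.0, 4.0], [3.0, 0.0], [0.0, 2.0]])
val_labels = torch.tensor([0, 1, 1, 0])
scaler = TemperatureScalerSketch().fit(val_logits, val_labels)
```

Note that only the temperature is optimized; the underlying network stays frozen, which is why a validation dataloader is enough.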

## License

15 changes: 9 additions & 6 deletions README.md
@@ -9,16 +9,16 @@
[![Code style: black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/psf/black)
</div>

_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your neural networks more reliable. It aims at being collaborative and including as many methods as possible, so reach out to add yours!
_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims at being collaborative and including as many methods as possible, so reach out to add yours!

:construction: _TorchUncertainty_ is in early development :construction: - expect massive changes but reach out and contribute if you are interested by the project!
:construction: _TorchUncertainty_ is in early development :construction: - expect massive changes, but reach out and contribute if you are interested in the project! **Please raise an issue if you have any bugs or difficulties.**

---

This package provides a multi-level API, including:

- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- baselines available for training on your datasets
- deep learning baselines available for training on your datasets
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (work in progress 🚧).
- layers available for use in your networks
- scikit-learn style post-processing methods such as Temperature Scaling
@@ -27,12 +27,14 @@ See the [Reference page](https://torch-uncertainty.github.io/references.html) or

## Installation

Install the desired pytorch version in your environment. Then, the package can be installed from PyPI:
The package can be installed from PyPI:

```sh
pip install torch-uncertainty
```

Then, install the desired PyTorch version in your environment.

If you aim to contribute (thank you!), have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).

## Getting Started and Documentation
@@ -45,17 +47,18 @@ A quickstart is available at [torch-uncertainty.github.io/quickstart](https://to

### Baselines

To date, the following baselines are implemented:
To date, the following deep learning baselines have been implemented:

- Deep Ensembles
- BatchEnsemble
- Masksembles
- MIMO
- Packed-Ensembles (see [blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873))
- Bayesian Neural Networks
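As a rough illustration of the first baseline above: Deep Ensembles averages the softmax outputs of several independently initialized (and, in practice, independently trained) networks. This sketch uses hypothetical toy models rather than the library's ResNet/WideResNet baselines:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical ensemble members; in practice each would be trained separately
# on the full training set from its own random initialization.
members = [
    nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    for _ in range(5)
]


def deep_ensemble_predict(models, x):
    """Average per-member softmax probabilities (the Deep Ensembles rule)."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)


x = torch.randn(4, 8)
probs = deep_ensemble_predict(members, x)
# Each row of `probs` is a valid predictive distribution over the 3 classes.
```

Averaging in probability space (rather than logit space) is the standard choice here, since it corresponds to a uniform mixture of the member predictive distributions.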

### Post-processing methods

To date, the following post-processing methods are implemented:
To date, the following post-processing methods have been implemented:

- Temperature, Vector, & Matrix scaling

25 changes: 14 additions & 11 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -1,5 +1,4 @@
# -*- coding: utf-8 -*-
# fmt: off
# flake: noqa
"""
Train a Bayesian Neural Network in Three Minutes
@@ -41,7 +40,7 @@
from torch_uncertainty.routines.classification import ClassificationSingle

# %%
# We will also need to define an optimizer using torch.optim as well as the
# We will also need to define an optimizer using torch.optim as well as the
# neural network utils within torch.nn, as well as the partial util to provide
# the modified default arguments for the ELBO loss.
#
@@ -61,13 +60,15 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# We will use the Adam optimizer with the default learning rate of 0.001.


def optim_lenet(model: nn.Module) -> dict:
optimizer = optim.Adam(
model.parameters(),
lr=1e-3,
)
return {"optimizer": optimizer}


# %%
# 3. Creating the necessary variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -76,15 +77,18 @@ def optim_lenet(model: nn.Module) -> dict:
# logs, and to fake-parse the arguments needed for using the PyTorch Lightning
# Trainer. We also create the datamodule that handles the MNIST dataset,
# dataloaders and transforms. Finally, we create the model using the
# blueprint from torch_uncertainty.models.
# blueprint from torch_uncertainty.models.

root = Path(os.path.abspath(""))

with ArgvContext("--max_epochs 1"):
# We mock the arguments for the trainer
with ArgvContext(
"file.py",
"--max_epochs 1",
"--enable_progress_bar=False",
"--verbose=False",
):
args = init_args(datamodule=MNISTDataModule)
args.enable_progress_bar = False
args.verbose = False
args.max_epochs = 1

net_name = "bayesian-lenet-mnist"

@@ -156,21 +160,20 @@ def imshow(img):
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()


dataiter = iter(dm.val_dataloader())
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images[:4, ...]))
print('Ground truth: ', ' '.join(f'{labels[j]}' for j in range(4)))
print("Ground truth: ", " ".join(f"{labels[j]}" for j in range(4)))

logits = model(images)
probs = torch.nn.functional.softmax(logits, dim=-1)

_, predicted = torch.max(probs, 1)

print(
'Predicted digits: ', ' '.join(f'{predicted[j]}' for j in range(4))
)
print("Predicted digits: ", " ".join(f"{predicted[j]}" for j in range(4)))

# %%
# References