Merge branch 'main' into anonymous
meghshukla authored Jun 3, 2024
2 parents 6635984 + 801833f commit 33f8a48
Showing 5 changed files with 33 additions and 18 deletions.
1 change: 0 additions & 1 deletion HumanPose/code/loss.py
@@ -1,7 +1,6 @@
import torch
from typing import Union


from utils.tic import get_positive_definite_matrix, get_tic_covariance
from models.vit_pose.ViTPose import ViTPose
from models.stacked_hourglass.StackedHourglass import PoseNet as Hourglass
3 changes: 2 additions & 1 deletion HumanPose/code/utils/tic.py
@@ -110,6 +110,7 @@ def calculate_ll_per_sample(y_pred: torch.Tensor, precision_hat: torch.Tensor,

def _predictions_hg(hg_level_6: Hourglass, hg_feat: Hourglass,
hg_out: Hourglass) -> Callable[[torch.Tensor], torch.Tensor]:

"""
Obtains the model's target predictions for an input.
This function is used in conjunction with vmap.
@@ -134,7 +135,7 @@ def pred(x: torch.Tensor) -> torch.Tensor:
        return soft_argmax(x).view(x.shape[0], -1)
    return pred


def _predictions_vitpose(x: torch.Tensor, imgs: torch.Tensor, vitpose: ViTPose) -> torch.Tensor:
"""
Obtains the model's target predictions for an input.
39 changes: 27 additions & 12 deletions README.md
@@ -1,38 +1,36 @@
-![TIC-TAC](https://github.com/meghshukla/TIC-TAC/blob/anonymous/TIC-TAC_gif.gif)
+![TIC-TAC](https://github.com/vita-epfl/TIC-TAC/blob/main/TIC-TAC_gif.gif)


# TIC-TAC: A Framework For Improved Covariance Estimation In Deep Heteroscedastic Regression

<a href="https://arxiv.org/abs/2310.18953"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-2310.18953-%23B31B1B?logo=arxiv&logoColor=white"></a>
<a href="https://www.epfl.ch/labs/vita/heteroscedastic-regression/"><img alt="Project" src="https://img.shields.io/badge/-Project%20Page-lightgrey?logo=Google%20Chrome&color=informational&logoColor=white"></a>
<a href="https://openreview.net/forum?id=zdNTiTs5gU"><img alt="OpenReview" src="https://img.shields.io/badge/ICML%202024-OpenReview-%236DA252"></a>
<a href="https://hub.docker.com/repository/docker/meghshukla/tictac/"><img alt="Docker" src="https://img.shields.io/badge/Image-tictac-%232496ED?logo=docker&logoColor=white"></a>
<br>




<br>

Code repository for "TIC-TAC: A Framework For Improved Covariance Estimation In Deep Heteroscedastic Regression". We address the problem of sub-optimal covariance estimation in deep heteroscedastic regression by proposing a new parameterisation (TIC) and metric (TAC). We derive a new expression, the _Taylor Induced Covariance (TIC)_, which expresses the randomness of the prediction through its gradient and curvature. The _Task Agnostic Correlations (TAC)_ metric leverages the conditioning property of the normal distribution to evaluate the covariance quantitatively.
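The conditioning property that TAC leverages is the standard Gaussian identity mu_{1|2} = mu_1 + S_12 S_22^{-1} (y_2 - mu_2). A minimal sketch of that identity (illustrative only: the function name is ours and this is not the repository's TAC implementation):

```python
import numpy as np

def conditional_mean(mu, cov, obs_idx, obs_val):
    """Mean of the unobserved dims of a Gaussian, given observed dims.

    Implements mu_hid + S_ho @ inv(S_oo) @ (obs_val - mu_obs).
    """
    mu, cov = np.asarray(mu, float), np.asarray(cov, float)
    hid_idx = [i for i in range(len(mu)) if i not in obs_idx]
    S_ho = cov[np.ix_(hid_idx, obs_idx)]   # cross-covariance block
    S_oo = cov[np.ix_(obs_idx, obs_idx)]   # observed-observed block
    return mu[hid_idx] + S_ho @ np.linalg.solve(S_oo, obs_val - mu[obs_idx])

# Toy 2-D Gaussian with correlation 0.8: observing y1 = 1.0 shifts the
# expectation of y0 to 0.8 * 1.0 = 0.8.
print(conditional_mean([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]],
                       obs_idx=[1], obs_val=np.array([1.0])))
```

A well-estimated covariance should make such conditional predictions of held-out target dimensions accurate, which is the intuition TAC builds on.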


## Table of contents
1. [Installation: Docker (recommended) or PIP](#installation)
-1. [Organization](#organization)
-1. [Code Execution](#execution)
-1. [Citation](#citation)
+2. [Organization](#organization)
+3. [Code Execution](#execution)
+4. [Acknowledgement](#acknowledgement)
+5. [Citation](#citation)


## Installation: Docker (recommended) or PIP <a name="installation"></a>

-**Docker**: We provide a Docker image which is pre-installed with all required packages. We recommend using this image to ensure reproducibility of our results. Using this image requires setting up Docker on Ubuntu: [Docker](https://docs.docker.com/engine/install/ubuntu/#installation-methods). Once installed, we can use the provided `docker-compose.yaml` file to start our environment with the following command: `docker-compose run --rm tictac` <br>
+**Docker <a href="https://hub.docker.com/repository/docker/meghshukla/tictac/"><img alt="Docker" src="https://img.shields.io/badge/Image-tictac-%232496ED?logo=docker&logoColor=white"></a>**: We provide a Docker image which is pre-installed with all required packages. We recommend using this image to ensure reproducibility of our results. Using this image requires setting up Docker on Ubuntu: [Docker](https://docs.docker.com/engine/install/ubuntu/#installation-methods). Once installed, we can use the provided `docker-compose.yaml` file to start our environment with the following command: `docker-compose run --rm tictac` <br>
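For reference, a `docker-compose.yaml` of the shape this command expects might look as follows. This is a hedged sketch: the service name `tictac` comes from the command above, while the image tag, volume mapping, and working directory are assumptions, not the repository's actual file.

```yaml
services:
  tictac:
    image: meghshukla/tictac   # image referenced by the Docker Hub badge; tag assumed
    volumes:
      - .:/workspace           # assumption: mount the repository into the container
    working_dir: /workspace
    stdin_open: true
    tty: true
```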

**PIP**: If using Docker is not possible, we provide a `requirements.txt` file listing all required packages, which can be installed with `pip`. We recommend setting up a new virtual environment ([link](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/)) and installing the packages using: `pip install -r requirements.txt`


## Organization <a name="organization"></a>

The repository contains four main folders corresponding to the four experiments: `Univariate`, `Multivariate`, `UCI` and `HumanPose`. While the first three are self-contained, `HumanPose` requires downloading the images for the [MPII Dataset](https://datasets.d2.mpi-inf.mpg.de/andriluka14cvpr/mpii_human_pose_v1.tar.gz), [LSP Dataset](http://sam.johnson.io/research/lsp.html) and [LSPET Dataset](http://sam.johnson.io/research/lspet.html). For each of these datasets, copy all the `*.jpg` images into `HumanPose/data/{mpii OR lsp OR lspet}/images/`. Within `HumanPose`, a separate folder `cached` holds the generated file `mpii_cache_imgs_{True/False}.npy`, which stores the post-processed MPII dataset to avoid redundant preprocessing every time the code is run.
Running `python main.py` in the `code` folder executes the code, with configurations specified in `configuration.yml`.
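The expected layout can be sketched as follows; the paths are taken from the README, while `$REPO` and the download location in the comment are placeholders:

```shell
# Create the image folders the code expects (run from the repository root).
REPO="${REPO:-$(pwd)}"
for ds in mpii lsp lspet; do
  mkdir -p "$REPO/HumanPose/data/$ds/images"
done

# After downloading each dataset, copy its images in, e.g. (source path is a placeholder):
# cp /path/to/mpii_human_pose_v1/images/*.jpg "$REPO/HumanPose/data/mpii/images/"

ls "$REPO/HumanPose/data"
```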


## Code Execution <a name="execution"></a>
@@ -44,6 +42,23 @@ Stopping a container once the code execution is complete can be done using:
1. `docker ps`: List running containers
2. `docker stop <container id>`

-## Acknowledgement
+## Acknowledgement <a name="acknowledgement"></a>

We thank [ViTPose_pytorch](https://github.com/jaehyunnn/ViTPose_pytorch) for their easily customizable implementation of ViTPose.
We also borrow [code](https://github.com/meghshukla/ActiveLearningForHumanPose) from the Active Learning for Human Pose library.

## Citation <a name="citation"></a>

If you find this work useful, please consider starring this repository and citing this work!

-https://github.com/jaehyunnn/ViTPose_pytorch
```
@InProceedings{shukla2024tictac,
  title     = {TIC-TAC: A Framework for Improved Covariance Estimation in Deep Heteroscedastic Regression},
  author    = {Shukla, Megh and Salzmann, Mathieu and Alahi, Alexandre},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  year      = {2024},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR}
}
```
2 changes: 1 addition & 1 deletion UCI/UCI.py
@@ -30,7 +30,7 @@

# Configuration
trials = 10
-dataset_uci = 'concrete'
+dataset_uci = 'naval'
# Possible options: red_wine, white_wine, energy, concrete, power, air, naval, electrical
# abalone, gas_turbine, appliances, parkinson

6 changes: 3 additions & 3 deletions Univariate/univariate.py
@@ -36,12 +36,12 @@

###### Configuration ######
# Sinusoidal Configuration
-varying_amplitude = False
-invert_varying = None
+varying_amplitude = True
+invert_varying = False
frequency = 2 * np.pi * 1

# Create experiment folder
-experiment_name = 'Results/VaryingAmplitude_{}_and_InvertVarying_{}'.format(
+experiment_name = 'ICML_Results/VaryingAmplitude_{}_and_InvertVarying_{}'.format(
varying_amplitude, invert_varying)


