This repository contains the code and best model for the paper "Improving Re-Identification by Estimating and Utilizing Diverse Uncertainty Types for Embeddings", published in Algorithms.
Below, you will find instructions for evaluating our model and reproducing our results.
If you use our work, please cite it as follows:
@article{eisenbach2024improving,
title={Improving Re-Identification by Estimating and Utilizing Diverse Uncertainty Types for Embeddings},
author={Eisenbach, Markus and Gebhardt, Andreas and Aganian, Dustin and Gross, Horst-Michael},
journal={Algorithms},
volume={17},
number={10},
pages={1--40},
year={2024},
publisher={MDPI}
}
The pipeline to reproduce our results consists of three steps:
- Train the model.
- Generate a file containing the model outputs for each query and gallery image.
- Compute performance metrics based on that file.
Depending on how deep you want to get into the reproduction process, you can enter at any step using the material we provide.
First, you will have to set up the conda environment.
We used conda version 23.7.4.
Run the following commands:
conda env create --file conda_env/UE.yml
conda activate UE
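Before moving on, you can optionally verify the environment with a quick check like the one below. This is only a minimal sketch that assumes the `UE` environment provides PyTorch (which FastReID builds on); it is not part of the provided tooling.

```python
# Optional sanity check for the activated "UE" conda environment.
# Assumes only that PyTorch is installed, since FastReID is PyTorch-based.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```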
Please note that we also provide other conda environments that are described here.
Next, make sure you have downloaded the Market-1501 dataset and have the path ready.
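If you want to double-check the download, the following sketch verifies that the partitions referenced below exist; `MARKET_ROOT` is a placeholder for your local dataset path, not something our scripts require.

```python
# Sketch: verify that a Market-1501 download contains the expected partitions.
# MARKET_ROOT is a placeholder; replace it with your local dataset path.
from pathlib import Path

MARKET_ROOT = Path("/path/to/Market-1501")

for partition in ("bounding_box_train", "bounding_box_test", "query"):
    folder = MARKET_ROOT / partition
    n_images = len(list(folder.glob("*.jpg"))) if folder.is_dir() else 0
    status = "found" if folder.is_dir() else "MISSING"
    print(f"{partition}: {status} ({n_images} images)")
```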
Should you wish to train a model from scratch, follow these steps:
- Update the paths in `fastreid-UE/tools/publication/training.py` as needed.
- Run the following commands:

cd fastreid-UE
python tools/publication/training.py
By default, the model just trains and performs no evaluation until training is finished. Should you wish to change this behaviour, edit `volatile_config()` as needed.
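As a purely hypothetical illustration of such an edit (the actual `volatile_config()` in the repository may be structured differently), FastReID's standard `cfg.TEST.EVAL_PERIOD` option controls how often evaluation runs during training:

```python
# Hypothetical sketch only; the real volatile_config() in
# fastreid-UE/tools/publication/training.py may look different.
def volatile_config(cfg):
    # FastReID's standard option for periodic evaluation during training.
    cfg.TEST.EVAL_PERIOD = 10
    return cfg
```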
If you trained the model from scratch, the model outputs file has already been generated, so you can skip this step and continue with the evaluation.
In case you prefer to skip training the model yourself, we provide the checkpoint file of a trained model, which can be downloaded and stored as `trained_model/model_best.pth`.
This is the best of our trained models when using the embedding refinement method that produces the best result on average.
If you want to generate the model outputs file based on an existing checkpoint file (either from your training or downloaded), follow these steps:
- Update the paths in `fastreid-UE/tools/publication/training.py` as needed.
- Run the following commands:

cd fastreid-UE
python tools/publication/training.py --eval-only
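If you prefer to drive this step from Python instead of the shell, a small wrapper like the sketch below works. The checkpoint path and the command line come from the steps above; the wrapper itself is an assumption of this sketch and expects to be run from the repository root.

```python
# Sketch: check that the checkpoint exists, then run the eval-only pass
# that produces the model outputs file. Run from the repository root.
import subprocess
from pathlib import Path

checkpoint = Path("trained_model/model_best.pth")
if not checkpoint.is_file():
    raise FileNotFoundError(f"Expected a checkpoint at {checkpoint}")

subprocess.run(
    ["python", "tools/publication/training.py", "--eval-only"],
    cwd="fastreid-UE",
    check=True,
)
```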
With the model outputs file, you can start the evaluation.
In case you prefer to skip the previous steps, we also provide a model outputs file that can be downloaded and stored as `trained_model/raw_model_outputs.json`.
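The exact schema of this file is not documented here, so the sketch below only loads it and reports its top-level structure as a starting point for your own inspection.

```python
# Sketch: inspect the model outputs file without assuming its internal schema.
import json

with open("trained_model/raw_model_outputs.json") as f:
    outputs = json.load(f)

if isinstance(outputs, dict):
    print("Top-level keys:", list(outputs)[:10])
else:
    print(f"Top-level type: {type(outputs).__name__} with {len(outputs)} entries")
```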
If you are interested only in using a trained model, please follow step 2.
In order to evaluate the model, follow these steps:
- Update the paths in `fastreid-UE/tools/publication/evaluation.py` as needed.
- Run the following commands:

cd fastreid-UE
python tools/publication/evaluation.py
The results are shown in the console. For the provided model, we get the following outputs:
| Variant | mAP [%] | rank-1 [%] |
|---|---|---|
| UAL | 86.9965 | 94.5962 |
| UBER: | 87.4999 | 94.6259 |
| UBER: | 87.4747 | 94.5962 |
| UBER: | 87.5022 | 94.6853 |
| UBER: | 87.5006 | 94.5071 |
We have selected a few variants of our approach that are evaluated by default. Should you wish to examine other variants, edit the code as needed.
We provide the sets of distractor images we have labeled for our experiments.
The file `distractor_sets.json` contains a JSON dict that maps the set IDs also used in the paper (D1-D4) to the filenames of the corresponding images in the `bounding_box_test` partition of Market-1501.
The sets represent increasing degrees of out-of-distribution-ness compared to the training data.
NOTE: The annotations are bound to contain labeling errors. The distractor set also contains ambiguous images.
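Because the file is a plain JSON dict as described above, a short sketch can summarize it; `MARKET_ROOT` is again a placeholder for your local Market-1501 path, and the assumed structure (set ID mapped to a list of filenames) follows the description above.

```python
# Sketch: summarize the distractor sets and resolve them to image paths.
# MARKET_ROOT is a placeholder for your local Market-1501 directory.
import json
from pathlib import Path

MARKET_ROOT = Path("/path/to/Market-1501")

with open("distractor_sets.json") as f:
    distractor_sets = json.load(f)  # assumed: {"D1": [filenames], ..., "D4": [filenames]}

for set_id, filenames in sorted(distractor_sets.items()):
    paths = [MARKET_ROOT / "bounding_box_test" / name for name in filenames]
    missing = sum(not p.is_file() for p in paths)
    print(f"{set_id}: {len(filenames)} distractor images ({missing} not found locally)")
```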
For further information useful for development based on this repo, see Further Details.
This repository is based on FastReID.
Adaptations of UAL, DistributionNet, and PFE are contained here.
This software is licensed under the Apache 2.0 license.