# LCFCN - ECCV 2018 (Try it in a Colab)

## Where are the Blobs: Counting by Localization with Point Supervision

[Paper] [Video]

Make a segmentation model learn to count and localize objects by adding a single line of code: instead of applying the cross-entropy loss on dense per-pixel labels, apply the LCFCN loss on point-level annotations.

## Usage

```bash
pip install git+https://github.com/ElementAI/LCFCN
```

```python
from lcfcn import lcfcn_loss

# compute a CxHxW logits mask using any segmentation model
logits = seg_model.forward(images)

# compute the loss given 'points' as an HxW mask (1 labeled pixel per object)
loss = lcfcn_loss.compute_loss(points=points, probs=logits.sigmoid())

loss.backward()
```
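For reference, the `points` argument above is an HxW mask with a single labeled pixel per annotated object. A minimal numpy sketch of building one (the coordinates below are made up for illustration, and a binary single-class setting is assumed):

```python
import numpy as np

# hypothetical point annotations: one (row, col) per object
point_coords = [(10, 12), (40, 55), (63, 7)]

H, W = 100, 100
points = np.zeros((H, W), dtype=np.int64)  # 0 = background
for r, c in point_coords:
    points[r, c] = 1  # exactly one labeled pixel per object

# the annotated object count is the number of nonzero pixels
assert points.sum() == len(point_coords)
```

Collecting such point masks is far cheaper than dense per-pixel segmentation masks, which is the motivation for the point-supervision setup.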

## Predicted Object Locations
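At inference time, predicted locations and counts come from the connected components ("blobs") of the thresholded probability map. A minimal sketch with `scipy` (the 0.5 threshold is an assumed default, not taken from the repo):

```python
import numpy as np
from scipy import ndimage

def count_blobs(probs, thresh=0.5):
    """Count objects as connected components of the thresholded map.

    probs: HxW array of foreground probabilities.
    thresh: probability cutoff (0.5 here is an assumption).
    Returns (count, labeled_map), where labeled_map assigns an
    integer id to each blob and 0 to the background.
    """
    mask = probs > thresh
    labeled, n_blobs = ndimage.label(mask)
    return n_blobs, labeled

# toy probability map with two separate high-probability regions
probs = np.zeros((8, 8))
probs[1:3, 1:3] = 0.9
probs[5:7, 5:7] = 0.8
n, labeled = count_blobs(probs)
print(n)  # 2 blobs -> predicted count of 2
```

Each blob marks one predicted object location, so counting and localization fall out of the same output.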

## Experiments

### 1. Install dependencies

```bash
pip install -r requirements.txt
```

This command installs `pydicom` and the Haven library, which helps manage the experiments.

### 2. Download Datasets

### 3. Train and Validate

```bash
python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
```

- `<datadir>` is where the dataset is located.
- `<savedir_base>` is where the experiment weights and results will be saved.
- `-e trancos` selects the TRANCOS training hyper-parameters defined in `exp_configs.py`.
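The `-e` flag looks up an experiment group by name. A hypothetical sketch of how such a group might be laid out in `exp_configs.py` (the field names and values below are illustrative assumptions, not the repo's actual schema):

```python
# hypothetical Haven-style experiment groups: a dict mapping a
# group name (passed via -e) to a list of hyper-parameter dicts,
# one per experiment to run
EXP_GROUPS = {
    "trancos": [
        {
            "dataset": "trancos",  # dataset loader to use
            "model": "lcfcn",      # segmentation model trained with the LCFCN loss
            "batch_size": 1,       # illustrative values only
            "max_epoch": 100,
            "optimizer": "adam",
            "lr": 1e-5,
        }
    ],
}
```

Under this layout, `trainval.py -e trancos` would iterate over the list of dicts under `"trancos"` and launch one run per configuration.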

### 4. View Results

#### 4.1 Launch Jupyter from the terminal

```bash
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter notebook
```

#### 4.2 Run the following from a Jupyter cell

```python
from haven import haven_jupyter as hj
from haven import haven_results as hr

try:
    %load_ext google.colab.data_table
except:
    pass

# path to where the experiments were saved
savedir_base = '<savedir_base>'

# filter experiments
filterby_list = None

# get experiments
rm = hr.ResultManager(savedir_base=savedir_base,
                      filterby_list=filterby_list,
                      verbose=0)

# dashboard variables
title_list = ['dataset', 'model']
y_metrics = ['val_mae']

# launch dashboard
hj.get_dashboard(rm, vars(), wide_display=True)
```

This script outputs the results dashboard.

## Citation

If you find the code useful for your research, please cite:

@inproceedings{laradji2018blobs,
  title={Where are the blobs: Counting by localization with point supervision},
  author={Laradji, Issam H and Rostamzadeh, Negar and Pinheiro, Pedro O and Vazquez, David and Schmidt, Mark},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={547--562},
  year={2018}
}
