
Mean Average Precision #2130

Draft · wants to merge 40 commits into master
Conversation

@gucifer (Contributor) commented Aug 1, 2021

Fixes #520

Description:
Created a COCO-style implementation of the mAP metric. mAP and mAR are calculated at once since they require similar calculations and have similar use cases. The overall implementation is complicated, so any advice on simplifying it is appreciated.
Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

@gucifer gucifer marked this pull request as draft August 1, 2021 17:14
@github-actions bot added the module: metrics label Aug 1, 2021
@sdesrozis (Contributor)

We already discussed a plan one week ago:

(1) remove mAR, use one max_det
(2) do a comparison with the original pycocotools implementation (see the sketch below)
(3) build an API based on per-image preds/gts, which mostly means moving pieces of code around
(4) optimize the code to save memory/CPU, which means changing the data structures

The first thing you had to do was add a test, yet I don't see any tests.
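A minimal sketch of the pycocotools side of step (2) (the annotation/results file names are placeholders; the COCOeval calls are standard pycocotools usage, not code from this PR):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


def pycocotools_reference_map(ann_file: str, res_file: str) -> float:
    """Reference mAP@[.5:.95] from the original pycocotools implementation."""
    coco_gt = COCO(ann_file)             # COCO-format ground-truth annotations
    coco_dt = coco_gt.loadRes(res_file)  # detections in COCO results format
    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    return float(coco_eval.stats[0])     # stats[0] = AP @ IoU=0.50:0.95, all areas, maxDets=100
```

A test could then feed the same ground truths and detections to the draft metric and compare the two values within a small tolerance.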

@sdesrozis (Contributor)

@gucifer Tests are missing, from unit tests to DDP.

ignite/metrics/mAP.py: several outdated review threads (resolved)
@gucifer gucifer requested a review from sdesrozis August 15, 2021 05:47
__all__ = ["MeanAveragePrecision"]


def _iou(y: torch.Tensor, y_pred: torch.Tensor, crowd: List) -> torch.Tensor:
Contributor:

Is this function tested?

Contributor Author:

Compare it against the pycocotools iou function in a test?
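For example, something along these lines (the [x, y, w, h] box layout, the argument order, and the output orientation of _iou are assumptions taken from this thread, not verified against the code):

```python
import numpy as np
import torch
from pycocotools import mask as maskUtils

from ignite.metrics.mAP import _iou  # module path as in this draft


def test_iou_against_pycocotools():
    torch.manual_seed(0)
    # random [x, y, w, h] boxes: 5 ground-truth boxes, 7 predictions
    y = torch.rand(5, 4, dtype=torch.float64) * 50
    y_pred = torch.rand(7, 4, dtype=torch.float64) * 50
    crowd = [0] * len(y)  # no crowd annotations

    # pycocotools returns a (num_dt, num_gt) matrix
    expected = maskUtils.iou(y_pred.numpy(), y.numpy(), crowd)
    # _iou is assumed here to return a (num_gt, num_pred) matrix, hence the transpose
    actual = _iou(y, y_pred, crowd)

    np.testing.assert_allclose(actual.numpy(), np.asarray(expected).T, rtol=1e-5)
```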

@sdesrozis sdesrozis marked this pull request as draft August 21, 2021 14:41
y_ignore = y_img["ignore"][y_ind] if "ignore" in y_img else torch.zeros(len(y_ind))
y_area = y_img["area"][y_ind] if "area" in y_img else (y_img["bbox"][:, 2] * y_img["bbox"][:, 3])[y_ind]

ious = _iou(y_bbox, sorted_y_pred_bbox, crowd).to(self._device)
Contributor:
I don’t see why y_pred’s boxes should be sorted by score. Moreover, that is not mentioned in the iou method.

Contributor Author:
The iou method is generic: for 'a' y_bbox and 'b' y_pred_bbox boxes it returns a matrix of size 'a x b' containing all the pairwise IoUs. We pass it the sorted list because in the eval_image function we also sort y_preds by confidence for greedy matching.
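A minimal sketch of that idea, ignoring the crowd handling of the actual _iou (boxes assumed to be [x, y, w, h] as in COCO; the names here are illustrative only):

```python
import torch


def pairwise_iou_xywh(y_bbox: torch.Tensor, y_pred_bbox: torch.Tensor) -> torch.Tensor:
    """Return an (a x b) matrix of IoUs for a ground-truth and b predicted [x, y, w, h] boxes."""
    # convert to corner format [x1, y1, x2, y2]
    gt = torch.cat([y_bbox[:, :2], y_bbox[:, :2] + y_bbox[:, 2:]], dim=1)
    dt = torch.cat([y_pred_bbox[:, :2], y_pred_bbox[:, :2] + y_pred_bbox[:, 2:]], dim=1)

    lt = torch.max(gt[:, None, :2], dt[None, :, :2])  # top-left corners of intersections
    rb = torch.min(gt[:, None, 2:], dt[None, :, 2:])  # bottom-right corners of intersections
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]

    area_gt = y_bbox[:, 2] * y_bbox[:, 3]
    area_dt = y_pred_bbox[:, 2] * y_pred_bbox[:, 3]
    union = area_gt[:, None] + area_dt[None, :] - inter
    return inter / union.clamp(min=1e-12)


# Predictions are sorted by descending confidence before matching, e.g.:
#   order = torch.argsort(scores, descending=True)        # `scores` is illustrative
#   ious = pairwise_iou_xywh(y_bbox, y_pred_bbox[order])   # shape (num_gt, num_pred)
```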

@sdesrozis (Contributor) commented Aug 23, 2021:

Maybe the sorting should be done in the iou method instead of here?

@sdesrozis (Contributor)

I moved this back to draft instead of a ready PR. A lot of work remains. It might be worth discussing, implementing, and testing on our side to create a new, clean PR. We will see after GSoC.

@reinit__is_reduced
def update(self, outputs: Tuple[Dict, Dict]) -> None:

for output in outputs:
Contributor:

outputs is defined as Tuple[Dict, Dict] but seems to be treated as a list of (prediction, ground-truth) pairs.
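One way to make the annotation match the loop body (a suggestion only, not this PR's final API):

```python
from typing import Dict, Sequence, Tuple

from ignite.metrics.metric import Metric, reinit__is_reduced


class MeanAveragePrecision(Metric):  # excerpt, other methods omitted
    @reinit__is_reduced
    def update(self, outputs: Sequence[Tuple[Dict, Dict]]) -> None:
        # each element of `outputs` is a (predictions, ground truth) pair for one image
        for y_pred_img, y_img in outputs:
            ...
```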


y_pred_img, y_img = output

if y_img["image_id"] != y_pred_img["image_id"]:
Contributor:

A test is required.
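A minimal sketch of such a test (the constructor and the exact dict keys are assumptions based on this draft; the error message is matched against the one raised just below):

```python
import pytest
import torch

from ignite.metrics.mAP import MeanAveragePrecision  # module path as in this draft


def test_update_rejects_mismatched_image_ids():
    metric = MeanAveragePrecision()  # hypothetical default construction

    y_pred_img = {"image_id": 1, "bbox": torch.zeros(0, 4), "scores": torch.zeros(0)}  # "scores" key assumed
    y_img = {"image_id": 2, "bbox": torch.zeros(0, 4)}

    with pytest.raises(ValueError, match="same image"):
        metric.update([(y_pred_img, y_img)])
```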

if y_img["image_id"] != y_pred_img["image_id"]:
raise ValueError("Ground Truth and Predictions should be for the same image.")

if y_img["image_id"] in self.image_ids:
Contributor:

A test is required.

y_id, y_area, y_ignore, y_crowd = y
y_pred_id, y_pred_area, y_pred_score = y_pred

if len(y_id) == 0 and len(y_pred_id) == 0:
Contributor:

A test is required.

Labels: module: metrics
Projects: none yet
2 participants