
Is there a bug in task/sseg/func.py metrics? #8

Open
HHuiwen opened this issue Aug 4, 2021 · 1 comment

Comments

HHuiwen commented Aug 4, 2021

Hi ZHKKKe, thank you for your excellent code.

I found a suspected bug in task/sseg/func.py.

In the function metrics, you reset all meters named acc_str/acc_class_str/mIoU_str/fwIoU_str:

```python
if meters.has_key(acc_str): meters.reset(acc_str)
if meters.has_key(acc_class_str): meters.reset(acc_class_str)
if meters.has_key(mIoU_str): meters.reset(mIoU_str)
if meters.has_key(fwIoU_str): meters.reset(fwIoU_str)
```
When I tested your pre-trained model deeplabv2_pascalvoc_1-8_suponly.ckpt, I found that the validation metrics are computed from the whole confusion matrix. Shouldn't we compute the acc/mIoU of each single image independently?

I'm not sure whether my speculation is right. Could you help me?

ZHKKKe (Owner) commented Aug 11, 2021

Hi, thanks for your attention, and sorry for the late response.

The metric calculation in semantic segmentation can be a bit difficult to understand.
If you dive into the code, you will find that the historical information is stored in confusion_matrix, which accumulates over all validation images. Therefore, we should reset the meters and recalculate the metrics from confusion_matrix in each validation iteration, so the logged acc/mIoU reflect the whole validation set seen so far rather than a single image.
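For reference, here is a minimal sketch of this confusion-matrix pattern. The function names and NumPy code below are illustrative assumptions, not the actual implementation in task/sseg/func.py: each image is accumulated into the matrix, and the dataset-level metrics are re-derived from it after every update.

```python
import numpy as np

def update_confusion_matrix(conf_mat, pred, label, num_classes):
    # Accumulate one image into the running (num_classes x num_classes) matrix.
    mask = (label >= 0) & (label < num_classes)
    hist = np.bincount(
        num_classes * label[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    return conf_mat + hist

def metrics_from_confusion_matrix(conf_mat):
    # Metrics derived from the accumulated matrix, i.e. over all images seen so far.
    eps = 1e-12
    diag = np.diag(conf_mat)
    acc = diag.sum() / (conf_mat.sum() + eps)                    # overall pixel accuracy
    acc_class = np.mean(diag / (conf_mat.sum(axis=1) + eps))     # mean per-class accuracy
    iou = diag / (conf_mat.sum(axis=1) + conf_mat.sum(axis=0) - diag + eps)
    miou = np.mean(iou)                                          # mean IoU
    freq = conf_mat.sum(axis=1) / (conf_mat.sum() + eps)
    fwiou = (freq * iou).sum()                                    # frequency-weighted IoU
    return acc, acc_class, miou, fwiou

# During validation: conf_mat keeps its history, so any meters holding
# acc / acc_class / mIoU / fwIoU are reset and refilled from the
# up-to-date matrix in every iteration.
conf_mat = np.zeros((21, 21))  # e.g. 21 classes for PASCAL VOC
# for pred, label in validation_batches:
#     conf_mat = update_confusion_matrix(conf_mat, pred, label, 21)
#     acc, acc_class, miou, fwiou = metrics_from_confusion_matrix(conf_mat)
```

Under these assumptions, the values logged after the last validation image are the metrics over the entire validation set, which is why the meters are reset rather than averaged per image.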
