Hi,
Thank you for your incredible work. I have a question regarding the "R" metric in the performance table. Does it refer to Recall, and if so, does R@0.5 indicate Recall at 50% of the data or at an IoU threshold of 0.5? These terms aren't explicitly defined in the paper, and I'm a bit confused. Could you please clarify?
Thanks for your question! The "R" metric stands for Recall. Specifically, R@0.5 is the Recall at a false-positive rate of 0.5 per image. This is a standard operating point on the FROC (Free-Response Receiver Operating Characteristic) curve, which is widely used in medical imaging detection tasks.
To clarify:

- Recall (R) measures how many of the true positives the model actually detects.
- R@0.5 is the Recall at the score threshold where the model averages 0.5 false positives per image.
This is different from the IoU (Intersection over Union) thresholds used to define matches in generic object detection. FROC analysis trades off true-positive detection against an acceptable rate of false positives per image, which is particularly useful in fields like medical imaging, where detecting as many true lesions as possible is critical.
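For concreteness, here is a minimal Python/NumPy sketch of how one FROC operating point like R@0.5 could be computed. This is not the repository's actual evaluation code; the function name `recall_at_fp_rate` and its inputs are illustrative. The idea is to pool all detections across the test set, sort them by confidence, and read off the recall at the last threshold whose average false-positive count per image is still at or below the target rate.

```python
import numpy as np

def recall_at_fp_rate(scores, is_tp, num_images, num_gt, target_fp_per_image=0.5):
    """Recall at a fixed false-positive rate per image (one FROC operating point).

    Hypothetical helper, for illustration only.
    scores:     confidence of every detection pooled over the whole test set
    is_tp:      per-detection flag, True if it matched a ground-truth object
    num_images: number of images evaluated
    num_gt:     total number of ground-truth objects in the test set
    """
    order = np.argsort(-np.asarray(scores, dtype=float))  # descending confidence
    hits = np.asarray(is_tp, dtype=bool)[order]

    tps = np.cumsum(hits)              # cumulative true positives per threshold
    fps = np.cumsum(~hits)             # cumulative false positives per threshold
    fp_per_image = fps / num_images    # average FPs per image at each threshold

    # last threshold whose FP rate still satisfies the target (e.g. 0.5 FPs/image)
    valid = np.where(fp_per_image <= target_fp_per_image)[0]
    if valid.size == 0:
        return 0.0
    return tps[valid[-1]] / num_gt

# Example: 4 images, 5 ground-truth objects, 6 detections.
# The threshold admitting at most 2 FPs (0.5 per image) recovers 4 of 5
# objects, so this prints 0.8.
r = recall_at_fp_rate(
    scores=[0.95, 0.9, 0.8, 0.7, 0.6, 0.4],
    is_tp=[True, True, False, True, False, True],
    num_images=4,
    num_gt=5,
)
print(r)
```

Sweeping `target_fp_per_image` over several values (e.g. 0.125, 0.25, 0.5, 1, 2, 4) would trace out the full FROC curve; R@0.5 is just the single point reported in the table.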