Fix typo in metrics.py #119

Open · wants to merge 1 commit into base: main
12 changes: 6 additions & 6 deletions annotator/uniformer/mmseg/core/evaluation/metrics.py
@@ -39,7 +39,7 @@ def intersect_and_union(pred_label,
         ignore_index (int): Index that will be ignored in evaluation.
         label_map (dict): Mapping old labels to new labels. The parameter will
             work only when label is str. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. The parameter will
+        reduce_zero_label (bool): Whether ignore zero label. The parameter will
             work only when label is str. Default: False.

     Returns:
@@ -101,7 +101,7 @@ def total_intersect_and_union(results,
         num_classes (int): Number of categories.
         ignore_index (int): Index that will be ignored in evaluation.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.

     Returns:
         ndarray: The intersection of prediction and ground truth histogram
@@ -149,7 +149,7 @@ def mean_iou(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.

     Returns:
         dict[str, float | ndarray]:
@@ -188,7 +188,7 @@ def mean_dice(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.

     Returns:
         dict[str, float | ndarray]: Default metrics.
@@ -229,7 +229,7 @@ def mean_fscore(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
         beta (int): Determines the weight of recall in the combined score.
             Default: False.

@@ -275,7 +275,7 @@ def eval_metrics(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
     Returns:
         float: Overall accuracy on all images.
         ndarray: Per category accuracy, shape (num_classes, ).
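
All six docstrings fixed here describe the same group of parameters, so a single call shows them in context. Below is a minimal usage sketch, not part of this PR: the import path mirrors the file in the diff, while the second positional argument (`gt_seg_maps`) is an assumption about the surrounding signature, since the diff only shows the parameter documentation.

```python
import numpy as np

# Usage sketch only. The import path follows the file touched by this PR;
# the `gt_seg_maps` argument name is an assumption, as the diff does not
# show the full function signature.
from annotator.uniformer.mmseg.core.evaluation.metrics import mean_iou

num_classes = 19
# Toy data: one 4x4 predicted label map and one matching ground-truth map.
results = [np.random.randint(0, num_classes, (4, 4))]
gt_seg_maps = [np.random.randint(0, num_classes, (4, 4))]  # assumed name

metrics = mean_iou(
    results,
    gt_seg_maps,
    num_classes=num_classes,
    ignore_index=255,         # pixels labelled 255 are excluded from evaluation
    nan_to_num=0,             # replace NaN (classes absent from the data) with 0
    label_map=dict(),         # no old-to-new label remapping
    reduce_zero_label=False,  # the parameter whose docstring this PR fixes
)
```

For `mean_fscore`, the `beta` parameter mentioned in the second-to-last hunk weights recall via the standard F-beta score, F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall); larger beta favors recall over precision.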