In anchor_target_layer_fpn.py, the positive and negative weights are normalized by num_examples. However, num_examples is computed using the loop index i after i's batch loop has finished, so it only counts the anchors of the last image in the batch. As a result, the weights for every image in the batch are normalized by the count from that last image, rather than on an example-by-example basis or by a num_examples that covers the entire batch. The relevant code:
for i in range(batch_size):
    # ... ...

offset = torch.arange(0, batch_size) * gt_boxes.size(1)

argmax_overlaps = argmax_overlaps + offset.view(batch_size, 1).type_as(argmax_overlaps)
bbox_targets = _compute_targets_batch(anchors, gt_boxes.view(-1, 5)[argmax_overlaps.view(-1), :].view(batch_size, -1, 5))

# use a single value instead of 4 values for easy index.
bbox_inside_weights[labels == 1] = cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS[0]

if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:
    # note: i still holds its value from the loop above, i.e. i == batch_size - 1
    num_examples = torch.sum(labels[i] >= 0)
    positive_weights = 1.0 / num_examples
    negative_weights = 1.0 / num_examples
else:
    assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &
            (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))

bbox_outside_weights[labels == 1] = positive_weights
bbox_outside_weights[labels == 0] = negative_weights
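For what it's worth, a per-image normalization could look something like the sketch below. This is only an illustration with dummy shapes, not a patch against the actual file; the tensor names (labels, bbox_outside_weights) and the (batch_size, num_anchors) layout are assumed to match what the layer has at this point.

import torch

batch_size, num_anchors = 4, 12

# dummy labels: 1 = positive, 0 = negative, -1 = don't care (assumed convention)
labels = torch.randint(-1, 2, (batch_size, num_anchors))
bbox_outside_weights = torch.zeros(batch_size, num_anchors)

# count the non-ignored anchors of each image separately, shape (batch_size, 1)
num_examples = torch.sum(labels >= 0, dim=1, keepdim=True).clamp(min=1).float()

# broadcast each image's own normalizer over its anchors
positive_weights = (1.0 / num_examples).expand(batch_size, num_anchors)
negative_weights = (1.0 / num_examples).expand(batch_size, num_anchors)

bbox_outside_weights[labels == 1] = positive_weights[labels == 1]
bbox_outside_weights[labels == 0] = negative_weights[labels == 0]

Alternatively, num_examples = torch.sum(labels >= 0) taken over the whole batch would at least remove the dependence on whatever value i happens to hold after the loop.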