Hi,
I am trying to reproduce the SiamFC net with PyTorch.
I have built the whole net, but I am having some difficulty building the loss function.
I always get a loss of around 0.693 with the formula in the paper, and it seems that no matter what the score is, you end up with 0.693. For example:
import numpy as np
score = np.random.randint(-5,5,size=(1,5,5))
label = np.ones((1,5,5))
for i in range(1, 4):
    for j in range(1, 4):
        label[0, i, j] = -1
weight = np.ones((1,5,5))
indexP = np.where(label == 1)
indexF = np.where(label == -1)
weight[indexP] = 0.5 / indexP[0].shape[0]
weight[indexF] = 0.5 / indexF[0].shape[0]
label = weight * label
print(np.sum(np.log(1+np.exp(-label*score)))/25)
you will get a value of 0.693. You can try any other score or label, initializing them with random numbers, and you will get 0.693 again.
I wonder if there is something not mentioned in the paper?
I do not think that this is an issue with this repository.
Without looking at your code, I can tell you that 0.693 is -ln(0.5), which is the loss that you expect to get with random initialization in a balanced 2-class classification problem. Your code seems to be computing the loss for a random score map? This is like averaging over 25 independent classification problems, so it's not that surprising that the loss is roughly constant across samples.
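As a quick numeric sanity check of that point (a minimal sketch of my own, assuming near-zero scores such as you would get from a freshly initialized network): each per-pixel term log(1 + exp(-y*v)) is close to log(2) ≈ 0.693 when v is near zero, so the mean over the 5x5 score map is about 0.693 no matter how the +1/-1 labels are arranged.

import numpy as np

# Near-zero scores stand in for the output of a randomly initialized network.
rng = np.random.default_rng(0)
score = rng.normal(scale=1e-3, size=(1, 5, 5))
# Arbitrary balanced-ish +1/-1 label map; the result barely depends on it.
label = np.where(rng.random((1, 5, 5)) < 0.5, 1.0, -1.0)

# Mean logistic loss over the score map.
loss = np.mean(np.log1p(np.exp(-label * score)))
print(loss, np.log(2))  # both ~0.6931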