validation accuracy #4
Comments
I'm having the same issue, not really sure what is happening here. Did you have any luck figuring this out, @anguoyang?
Hi, try training the net on, for example, 4 classes with 500 images in each class, and try not to overfit your network. It should have no trouble training and outputting different classes for each test image. If this works, then you either have a very unbalanced dataset or you are feeding the network wrong training data. Try checking how you are generating the label files, etc.
I think the problem is the function used to calculate the validation accuracy. E.g., if the correct label for an image is [1,1,0,0,0,0,0] but the net gives [1,0,0,0,0,0,0], the accuracy is 6/7. Even if the net gives all zeros [0,0,0,0,0,0,0], the accuracy is still 5/7.
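A minimal numpy sketch (with the hypothetical label vectors from the example above) shows why element-wise accuracy is inflated on sparse multilabel targets: matched zeros count as correct, so even an all-zero prediction scores 5/7:

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 0, 0, 0])

# Predicting only one of the two positive labels still scores 6/7
y_pred = np.array([1, 0, 0, 0, 0, 0, 0])
print(np.mean(y_pred == y_true))  # 0.857...

# Predicting all zeros scores 5/7, because the five matched 0s count as hits
y_zero = np.zeros(7, dtype=int)
print(np.mean(y_zero == y_true))  # 0.714...
```

This is why the reported validation accuracy can be near-perfect while the model predicts almost nothing.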
2017-12-23 20:26:43.655123: Step 1970: Validation accuracy = 99.6%
@w5688414 @anguoyang @samhains

```python
import numpy as np

def evaluate_multilabel(y_pred, y_true):
    acc = []
    for y_pred_tmp, y_true_tmp in zip(y_pred, y_true):
        real_ = np.nonzero(y_true_tmp)[0].tolist()
        pred_ = np.nonzero(y_pred_tmp)[0].tolist()
        if len(real_) == 0:
            # no positive labels in the ground truth for this example
            acc.append(0.0)
            continue
        acc.append(len(set(real_).intersection(set(pred_))) / len(real_))
    return np.array(acc).mean()
```
where `y_pred` is

```python
pred = tf.round(tf.nn.sigmoid(logits))
```

to get the accuracy for a multilabel classification task. I wrote it not in TensorFlow style but based on numpy ndarrays. The main idea is to count how many of the true 1s are recovered. Hope this helps. If someone has a better solution, you may share it with us.
@fqeqiq168 Would you please take a look at @dav1nci's solution? Do you have a better way to calculate the accuracy?
I have the same issue. The predictions on the test images are completely wrong, but both training and validation accuracy are very high. This is not normal. I have about 200+ labels.
Hi all, thank you for your great contribution.
I have trained and tested the program successfully, and the validation accuracy is very good (always 99.5%, even with only 100 steps), but when I run label_image.py, the result is very bad: almost all label names (in labels.txt) get the same confidence!
Could you please help me with it? Thank you.