
validation accuracy #4

Open
anguoyang opened this issue Aug 1, 2017 · 7 comments
Comments

@anguoyang

Hi, all, thank you for your great contribution.
I have trained and tested the program successfully, and the validation accuracy is very good (always 99.5%, even with only 100 steps). But when I run label_image.py, the results are very bad: almost all label names (in labels.txt) get the same confidence!
Could you please help me with this? Thank you.

@samhains

samhains commented Aug 8, 2017

I'm having the same issue, not really sure what is happening here. Did you have any luck figuring this out @anguoyang ?

@BartyzalRadek
Owner

Hi, try training the net on, for example, 4 classes with 500 images in each class, and try not to overfit your network. It should have no trouble training and outputting different classes for each test image. If this works, then either you have a very unbalanced dataset or you are feeding the network wrong training data.

Try checking how you are generating the label files etc.
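A quick way to check for an unbalanced dataset is to count how often each class appears in your generated label files. The layout below is an assumption (one text file per image, with one label name per line); adapt the parsing to however your labels are actually stored.

```python
# Hypothetical sanity check: count class frequencies across label files.
# Assumes one .txt label file per image, one label name per line.
import collections
import glob
import os


def class_counts(label_dir):
    counts = collections.Counter()
    for path in glob.glob(os.path.join(label_dir, "*.txt")):
        with open(path) as f:
            for line in f:
                label = line.strip()
                if label:
                    counts[label] += 1
    return counts
```

If the resulting counts are heavily skewed toward a few classes, that imbalance alone can explain near-identical confidences at inference time.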

@fqeqiq168

I think the problem is the function used to calculate the validation accuracy. For example, if the correct label for an image is [1,1,0,0,0,0,0] but the net outputs [1,0,0,0,0,0,0], the accuracy is 6/7. Even if the net outputs all zeros, [0,0,0,0,0,0,0], the accuracy is still 5/7.
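The inflation described above is easy to reproduce: with sparse multi-hot targets, elementwise accuracy rewards predicting all zeros. A minimal demonstration (using the same 7-class example):

```python
# Elementwise accuracy compares every position of the multi-hot vectors,
# so the many shared zeros dominate the score.
import numpy as np


def elementwise_accuracy(y_pred, y_true):
    return float(np.mean(y_pred == y_true))


y_true = np.array([1, 1, 0, 0, 0, 0, 0])

print(elementwise_accuracy(np.array([1, 0, 0, 0, 0, 0, 0]), y_true))  # 6/7 ~ 0.857
print(elementwise_accuracy(np.zeros(7, dtype=int), y_true))           # 5/7 ~ 0.714
```

With 200+ classes and only a few positive labels per image, a network that outputs all zeros would already score above 99% under this metric, which matches the logs posted in this thread.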

@w5688414

2017-12-23 20:26:43.655123: Step 1970: Validation accuracy = 99.6%
2017-12-23 20:26:45.049279: Step 1980: Train accuracy = 99.6%
2017-12-23 20:26:45.049366: Step 1980: Cross entropy = 0.218476
2017-12-23 20:26:45.184791: Step 1980: Validation accuracy = 99.6%
2017-12-23 20:26:46.561235: Step 1990: Train accuracy = 99.6%
2017-12-23 20:26:46.561318: Step 1990: Cross entropy = 0.215815
2017-12-23 20:26:46.694530: Step 1990: Validation accuracy = 99.6%
2017-12-23 20:26:48.033845: Step 1999: Train accuracy = 99.6%
2017-12-23 20:26:48.033929: Step 1999: Cross entropy = 0.212512
2017-12-23 20:26:48.170983: Step 1999: Validation accuracy = 99.6%
Final test accuracy = 99.6%
Converted 2 variables to const ops.
I also get a high reported accuracy, but in practice the real accuracy is low. Can anyone help me improve the code?

@dav1nci

dav1nci commented Jan 25, 2018

@w5688414 @anguoyang @samhains
I am using this code

import numpy as np

def evaluate_multilabel(y_pred, y_true):
    acc = []
    for y_pred_tmp, y_true_tmp in zip(y_pred, y_true):
        real_ = np.nonzero(y_true_tmp)[0].tolist()
        pred_ = np.nonzero(y_pred_tmp)[0].tolist()
        if len(real_) == 0:
            # no true labels means 0 right answers
            acc.append(0.0)
            continue
        # fraction of the true labels recovered by the prediction
        acc.append(len(set(real_).intersection(set(pred_))) / len(real_))
    return np.array(acc).mean()

# where y_pred is
pred = tf.round(tf.nn.sigmoid(logits))

to get the accuracy for a multilabel classification task. I wrote it not in TensorFlow style but with NumPy ndarrays. The main idea is to find how many of the correct 1s, e.g.
[1, 0, 0, 1, 0, 1]
are in the algorithm's prediction e.g.
[1, 0, 1, 0, 0, 1]

Hope this helps

If someone has a better solution, please share it with us.
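One caveat with the score above is that it only measures recall over the true 1s, so it never penalizes spurious predictions. A possible alternative (my own sketch, not from this repo) is a per-example F1, which penalizes both missed and extra labels:

```python
# Per-example F1 over multi-hot vectors: 2*TP / (|pred 1s| + |true 1s|),
# averaged over the batch. Spurious 1s now lower the score.
import numpy as np


def multilabel_f1(y_pred, y_true):
    scores = []
    for p, t in zip(y_pred, y_true):
        tp = np.sum((p == 1) & (t == 1))
        denom = np.sum(p) + np.sum(t)
        # both vectors all-zero: a trivially perfect prediction
        scores.append(2.0 * tp / denom if denom > 0 else 1.0)
    return float(np.mean(scores))


y_true = np.array([[1, 0, 0, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0, 0, 1]])
print(multilabel_f1(y_pred, y_true))  # 2*2/(3+3) ~ 0.667
```

Using the earlier example, an all-zero prediction against [1,1,0,0,0,0,0] scores 0 under this metric instead of 5/7.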

@haoxi911

@fqeqiq168 Would you please take a look at @dav1nci's solution? Do you have a better way to calculate the accuracy?

@civilman628

I have the same issue. The predictions on the test images are completely wrong, but both training and validation accuracy are very high. This is not normal. I have about 200+ labels.
