
get high test prediction but the predictions on images are wrong #4

Open · chihchiehchen opened this issue Nov 25, 2018 · 18 comments

@chihchiehchen

Hello Sir,

Thanks for sharing your code. I tried to train a model based on your code, but something strange happened: when I run block [60] of CIFAR10_image_classification.ipynb, I get a test accuracy of around 74%, yet the predictions on the images below are very wrong. Could you guide me on how to solve this issue?

In any case, thanks for your patience and help.

Best,
Chih-Chieh

@deep-diver
Owner

Hi @chihchiehchen

First of all, I need to see what the images look like.
If the images are not at all similar to the CIFAR-10 dataset, the prediction accuracy will probably be very low.

@chihchiehchen
Author

Hello,

I used the CIFAR-10 test set. I mean, I did not modify anything; I just followed your code. When I run block [60] of CIFAR10_image_classification.ipynb, I get 74% (compared with the 58% shown in the ipynb file), but the softmax predictions on the images (mainly from the function display_image_predictions) are always strange (as the pictures below show, they are never correct). I am not sure whether I misunderstood something or whether something needs to be modified.

@deep-diver
Owner

Aha, I get your point.

Did you run the notebook yourself, or are you just reading it through?
I made some changes to the last version with a saved checkpoint.

Let me re-run the notebook, and I will let you know soon.

@chihchiehchen
Author

Hello,

I ran the notebook from top to bottom. In any case, thanks a lot!

@deep-diver
Owner

I have just re-run the notebook after cloning the repo entirely.

And I got a testing accuracy of 0.728...

Let me think about what could go wrong in your case.

@chihchiehchen
Author

Hello,

How about the softmax predictions below? Are they correct? In my case the test accuracy is also high, but the problem is that the softmax predictions (of the random samples from the last few lines of test_model()) are very strange.

In any case, thanks for the help.

@deep-diver
Owner

OK, that might be some kind of indexing problem when displaying with matplotlib.
I will look into it and fix it. But the model and its behaviour are OK, I guess.

@chihchiehchen
Author

Hello,

I hope so too (but I cannot figure out where the problem is).

In any case, thanks for taking the time on this; I have also learned something from the discussion.

@deep-diver
Owner

OK.

The thing is, the name on top of the picture is the ground-truth label, not the predicted one,
and the bar graph on the right-hand side is the predicted result.

So it could look a bit strange. However, you can simply compare the ground truth and the predicted result side by side.

@chihchiehchen
Author

Hello,

But I think the problem is that, since the test accuracy is high, the labels and the predicted results should mostly coincide (am I wrong?), yet I have tried several times and the predicted results are always different from the labels.
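
As a rough sanity check of this reasoning (a back-of-the-envelope sketch of my own, assuming the five displayed samples are drawn independently from the test set and the ~74% accuracy is real):

    # If the model is really ~74% accurate and the 5 displayed samples are random,
    # the chance that every single displayed prediction is wrong is tiny.
    p_wrong = 1.0 - 0.74
    print(p_wrong ** 5)  # ~0.0012, i.e. about 0.1%

So seeing all of the displayed predictions come out wrong every time points to a display bug rather than a bad model.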

@deep-diver
Owner

Is it? In my case, I got 4 out of 5 correct.
If you don't get the right result every time, check whether the true label at least comes close, e.g. lands in 2nd place.
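
One way to check this (a rough sketch reusing the notebook's loaded_logits / loaded_x / loaded_y / loaded_keep_prob tensors inside the session; some_test_features and some_test_labels are placeholders for whatever test batch you feed):

    # Fraction of samples whose true class lands in the top-3 predictions
    top3 = tf.nn.in_top_k(tf.nn.softmax(loaded_logits),
                          tf.argmax(loaded_y, axis=1), k=3)
    hits = sess.run(top3, feed_dict={loaded_x: some_test_features,
                                     loaded_y: some_test_labels,
                                     loaded_keep_prob: 1.0})
    print('top-3 hit rate:', np.mean(hits))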

@chihchiehchen
Author

Hello,

Sorry, but I still get bad results (usually the label is not among the top-3 predicted classes) from the softmax prediction. Maybe I really did something wrong and need some time to figure it out.

In any case, thanks for sharing your ideas and giving me suggestions; maybe I can provide some feedback once I have figured out what I did wrong.

Thanks a lot.

Chih-Chieh

@panovr

panovr commented Nov 27, 2018

This is my Jupyter notebook softmax prediction:
[softmax screenshot]

It seems that the prediction was wrong.

@megalinier

Hello,

I get the same problem: running your program without changing anything, I get high accuracy, but the predictions on the random samples are all different from the true labels. Have you found out what should be modified?

Thank you for your answer :)

@megalinier

Hey,

I think the code that prints the random samples does indeed have a mistake in it, but the rest of the algorithm is good =)

If you want to print some random examples with their predicted labels, here is a simple piece of code you can use (inside the "with tf.Session(graph=loaded_graph) as sess:" block):

    # Assumes the variables defined earlier in the notebook (n_samples, test_features,
    # test_labels, label_binarizer, list_label_names, loaded_x, loaded_y,
    # loaded_keep_prob, loaded_logits) and the random / np / plt / tf imports are in scope.
    for _ in range(n_samples):
        num_test = random.randint(0, len(test_features) - 1)  # valid indices only
        test_feat = test_features[num_test, :, :, :]
        test_feat_reshape = test_feat.reshape(1, 32, 32, 3)
        test_label = test_labels[num_test, :].reshape(1, 10)
        label_ids = label_binarizer.inverse_transform(np.array(test_label))

        # Predicted class index = argmax of the softmax over the loaded logits
        test_prediction_ind = sess.run(
            tf.math.argmax(tf.nn.softmax(loaded_logits), axis=1),
            feed_dict={loaded_x: test_feat_reshape,
                       loaded_y: test_label,
                       loaded_keep_prob: 1.0})

        plt.imshow(test_feat)
        plt.title('True label: ' + list_label_names[label_ids[0]] +
                  ' - Predicted label: ' + list_label_names[test_prediction_ind[0]])
        plt.show()

@deep-diver
Owner

Thanks. Can you contribute your code?

@rxk900

rxk900 commented Feb 13, 2019

If this is the issue I found, then:
axies[image_i][1].set_yticklabels(pred_names[::-1])
needs to be
axies[image_i][1].set_yticklabels(pred_names[::])
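
A minimal sketch (my own toy example, not the repo's code) of why reversing only the tick labels mis-pairs them with the bars: barh draws the values bottom-to-top in the order they are given, and set_yticklabels assigns labels to the ticks in that same bottom-to-top order, so the two lists need to be in the same order.

    import matplotlib.pyplot as plt
    import numpy as np

    pred_names = ['cat', 'dog', 'ship']  # hypothetical top-3 class names
    pred_values = [0.7, 0.2, 0.1]        # hypothetical softmax scores, same order

    ind = np.arange(len(pred_values))
    fig, ax = plt.subplots()
    ax.barh(ind, pred_values)            # the 0.7 bar for 'cat' sits at y=0 (bottom)
    ax.set_yticks(ind)
    ax.set_yticklabels(pred_names)       # same order keeps each label on its own bar
    # set_yticklabels(pred_names[::-1]) would put 'ship' on the 0.7 bar instead
    plt.show()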

@rxk900

rxk900 commented Mar 10, 2019

OK, this is odd. I modified the notebook to use my own images:
[image]

So, going back to debugging that notebook, I get the original:
axies[image_i][1].set_yticklabels(pred_names[::-1])
Now I have two notebooks and I am not sure why they differ:
[image]

[image]
