High test accuracy, but the predictions on images are wrong #4
Comments
First of all, I need to see what the images look like.
Hello, I used the CIFAR-10 test set. I did not modify anything; I just followed your code. When I run block [60] of CIFAR10_image_classification.ipynb, I get 74% accuracy (compared with the 58% shown in the ipynb file), but the softmax predictions on images (mainly from the function display_image_predictions) are always strange (as the pictures below show, they are never correct). I am not sure whether I misunderstood something or whether something needs to be modified.
Aha, I get your point. Did you run the notebook yourself, or are you just reading it through? Let me re-run the notebook, and I will let you know soon.
Hello, I ran the notebook from top to bottom. In any case, thanks a lot!
I have just re-run the notebook after cloning the repo from scratch, and I got a Testing Accuracy of 0.728. Let me think about what could be going wrong in your case.
Hello, how about the softmax prediction below? Is it correct? In my case the test accuracy is also high, but the softmax predictions (of random samples, from the last few lines of test_model()) are very strange. In any case, thanks for the help.
OK, that might be some kind of indexing problem when displaying with matplotlib.
Hello, I hope so too (but I cannot figure out where the problem is). In any case, thanks for taking the time on this; I am also learning something from the discussion.
OK. The thing is, the name on top of each picture is the ground-truth label, not the predicted one, so it could look a bit strange. However, you can simply compare the ground truth and the predicted result side by side, as in the sketch below.
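Something like this could show both names at once (just a sketch; images, true_labels, predicted_labels, and label_names are hypothetical placeholders rather than variables from the notebook):

```python
import matplotlib.pyplot as plt

# Display each image with both the ground-truth and the predicted class
# name in its title, so mismatches are obvious at a glance.
# images, true_labels, predicted_labels, and label_names are placeholders:
# substitute the corresponding arrays/lists from your own notebook.
fig, axes = plt.subplots(1, len(images), figsize=(12, 3))
for ax, img, truth, pred in zip(axes, images, true_labels, predicted_labels):
    ax.imshow(img)
    ax.set_title('truth: {}\npred: {}'.format(label_names[truth],
                                              label_names[pred]))
    ax.axis('off')
plt.show()
```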
Hello. But I think the problem is that, since the test accuracy is high, the labels and the predicted results should mostly coincide (am I wrong?), yet I have tried several times and the predicted results are always different from the labels.
Is it? In my case, I got 4 out of 5 correct.
Hello. Sorry, but I still get bad results on the softmax predictions (usually the label is not among the top-3 predicted classes). Maybe I really did something wrong and need some time to figure it out. In any case, thanks for sharing your ideas and giving me some suggestions; maybe I can provide some feedback once I have figured out what I did wrong. Thanks a lot. Chih-Chieh
Hello, I get the same problem: running your program without changing anything, I get high accuracy, but the predictions on the random samples are all different from the true labels. Have you found out what should be modified? Thank you for your answer :)
Hey, I think the code for printing the random samples does indeed have a mistake in it, but the rest of the algorithm is good =) If you want to print some random examples with their predicted labels, here is some simple code you can use (inside the "with tf.Session(graph=loaded_graph) as sess:" statement):
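A minimal sketch of such a snippet (the tensor names 'x:0', 'keep_prob:0', and 'logits:0', the test_features/test_labels arrays, and the label_names list are assumptions here; adjust them to whatever your graph and notebook actually use):

```python
import random
import numpy as np

# Pick a few random test images together with their one-hot labels.
n_samples = 5
samples = random.sample(list(zip(test_features, test_labels)), n_samples)
sample_features, sample_labels = map(np.array, zip(*samples))

# Fetch the input, dropout, and output tensors back from the loaded graph.
# The names 'x:0', 'keep_prob:0', and 'logits:0' are assumptions -- use
# whatever names your graph was saved with.
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')

# One forward pass; the arg-max over the logits is the predicted class.
logits_out = sess.run(loaded_logits,
                      feed_dict={loaded_x: sample_features,
                                 loaded_keep_prob: 1.0})
predicted = np.argmax(logits_out, axis=1)
truths = np.argmax(sample_labels, axis=1)  # labels are one-hot encoded

for i, (t, p) in enumerate(zip(truths, predicted)):
    print('Sample {}: truth = {:<10} predicted = {}'.format(
        i, label_names[t], label_names[p]))
```

Printing truth and prediction side by side like this makes it easy to tell whether the model itself is off or only the display code is.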
Thanks. Can you contribute your code? |
Sure, this is what I have found.
Hello Sir,
Thanks for sharing your code. I tried to train a model based on your code; however, something strange happened: when I run block [60] of CIFAR10_image_classification.ipynb, I get a test accuracy of around 74%, but the predictions on the images below are very wrong. Could you do me a favor and guide me on how to solve this issue?
In any case, thanks for your patience and help.
Best,
Chih-Chieh