For some reason I'm hovering around ~0.15 accuracy during training, for both the notebook and the .py file. Also, in the final layer, shouldn't it be full4 rather than full3? Note that I'm using Python 2.7, so I had to slightly change the import for urlretrieve and remove the extra "latin1" parameter from the pickle opens. Could this have any effect? Furthermore, the .py file can't find the "helper" module (no easy fix, even when copying the other functions from the notebook in). Thanks.
If you have any idea why my training accuracy is so low, any response would be appreciated.
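On the Python 2.7 points above, a minimal sketch of how the two version differences can be bridged without dropping the "latin1" handling (the `load_pickle` helper and the pickle path are illustrative, not from the original code):

```python
import pickle
import sys

# urlretrieve moved between Python 2 and 3; try the Python 3 location first.
try:
    from urllib.request import urlretrieve  # Python 3
except ImportError:
    from urllib import urlretrieve  # Python 2

def load_pickle(path):
    """Load a pickle file on either Python version.

    Python 3 needs encoding='latin1' to read Python-2-era pickles
    (such as the CIFAR-10 batches); Python 2's pickle.load takes no
    encoding argument, so the parameter is only passed on Python 3.
    """
    with open(path, 'rb') as f:
        if sys.version_info[0] >= 3:
            return pickle.load(f, encoding='latin1')
        return pickle.load(f)
```

Simply deleting the "latin1" parameter is fine on 2.7 itself, but guarding it as above keeps the same script working on both interpreters.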
I did, but it was so long ago that I've forgotten, I'm afraid. If I remember correctly, it wasn't a big problem, just a small bug/change somewhere. I'm sorry I can't be more helpful.
I think I've found the issue. The cost should be `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model, labels=y))` rather than `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))` — i.e. the loss has to be computed on the network's actual final-layer output, not a stale `logits` tensor. That seems to have brought the accuracy to a good level for me.