Thanks for providing this nice dataset!
The README (and the paper on arXiv) states that one can obtain 68% (test?) accuracy with an MLP. However, the output of the relevant notebook in the repository indicates a test accuracy of only ~65.7%. Moreover, in my own experiments with the same architecture used in that notebook (i.e., two hidden layers with 100 neurons each), I only reached around 62% accuracy.
Was the 68% figure obtained with a different network configuration (number of layers, neurons per layer, regularization, etc.) than the one in the repository's notebook?
Any hints would be appreciated.
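For reference, the architecture described above (two hidden layers of 100 neurons each) can be sketched with scikit-learn's `MLPClassifier`. This is only an illustration of the configuration under discussion, not the repository's actual training code; the real dataset loading lives in the notebook, so synthetic data stands in here, and hyperparameters such as `max_iter` are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real dataset used in the notebook.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Two hidden layers with 100 neurons each, matching the notebook's
# described architecture; other settings here are guesses.
clf = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=300,
                    random_state=0)
clf.fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")
```

Reported accuracy can vary by several points with changes to the random seed, solver, regularization (`alpha`), and number of training iterations, which may account for part of the 62% vs. 65.7% vs. 68% spread.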