How to get 68% accuracy using an MLP? #17

Open
Phoemuex opened this issue Sep 30, 2024 · 0 comments
Thanks for providing this nice dataset!

In the README file (and also in the arXiv paper), it is stated that an MLP can reach 68% (presumably test) accuracy.

However, the output in the relevant notebook included in the repository seems to indicate a test accuracy of ~65.7%.

Moreover, in my own tests with the same architecture that is used in the relevant notebook (i.e., two hidden layers with 100 neurons each), I only managed to achieve around 62% accuracy.
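For context, my own test was roughly along the lines of the sketch below. This is not the notebook's exact code: it assumes scikit-learn's `MLPClassifier` with `hidden_layer_sizes=(100, 100)`, the data-loading file names are placeholders, and the notebook may configure training (split, scaling, epochs, regularization) differently.

```python
# Minimal sketch of the setup I tried: an MLP with two hidden layers of 100 units each.
# Data loading below is a placeholder; file names are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder: load the dataset's feature matrix X and labels y as described in the repo.
X, y = np.load("features.npy"), np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

With this kind of configuration I get around 62% test accuracy, not 68%.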

Or was the 68% figure obtained with a different network architecture (number of layers, neurons per layer, regularization, etc.) than the one used in the notebook that is part of the repository?

Any hints would be appreciated.
