Chapter 5: can't reproduce "often end up with more than 95% accuracy in less than 10 epochs" #17
Comments
Data from a longer run. Note this one timed out as well, so it doesn't end cleanly, but you can clearly see the fit measurements. Each full-dataset run took just under 2 hours for 10 epochs.
That's pretty strange. Last time I evaluated this, I certainly ended up with 95% plus most of the time. Not that it matters much, since the lesson is that the algorithm learns, but I get your frustration; sorry for the inconvenience. My suspicion is that I used a different learning rate altogether for my experiments.
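To illustrate what I mean by learning-rate sensitivity, here is a toy sketch (plain gradient descent on a quadratic, not the book's network; the alpha values are arbitrary): too small an alpha and the loss creeps down slowly, too large and it diverges.

```python
import numpy as np

def gradient_descent(alpha, steps=100):
    """Minimize f(w) = (w - 3)^2 with plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)      # derivative of (w - 3)^2
        w -= alpha * grad       # gradient step scaled by the learning rate
    return w, (w - 3) ** 2

for alpha in (0.001, 0.1, 1.1):
    w, loss = gradient_descent(alpha)
    print(f"alpha={alpha}: w={w:.3f}, loss={loss:.3e}")
```

With alpha=0.1 this converges to w≈3 quickly, with alpha=0.001 it barely moves in 100 steps, and with alpha=1.1 it diverges, so the same network and epoch count can land at very different accuracies depending on that one constant.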
I'm running into the same problem. What can I do to solve it?
Chapter 5 reads: "... often end up with more than 95% accuracy in less than 10 epochs."
However, I don't share this experience.
I have captured some scenarios in a branch.
You can run it as:
The program downsamples the MNIST train and test data, trains, repeats, and then computes summary statistics.
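Roughly, the downsampling and summarizing steps look like this (a simplified sketch, not the exact code in the branch; the function names are mine, and it assumes the MNIST images and labels are already loaded as NumPy arrays):

```python
import numpy as np

def downsample(images, labels, fraction=0.1, seed=0):
    """Keep a random fraction of the (image, label) pairs."""
    rng = np.random.default_rng(seed)
    n = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n, replace=False)
    return images[idx], labels[idx]

def summarize(accuracies):
    """Summary statistics over repeated training runs."""
    accs = np.asarray(accuracies, dtype=float)
    return {
        "runs": len(accs),
        "mean": float(accs.mean()),
        "std": float(accs.std(ddof=1)) if len(accs) > 1 else 0.0,
        "min": float(accs.min()),
        "max": float(accs.max()),
    }
```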
The program uses the unittest framework for regression testing, and it emits JSON documents between scenarios for offline analysis.
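The harness is structured roughly like this (a hypothetical sketch; the scenario names, parameters, threshold, and file names are placeholders, and `run_scenario` stands in for the real training routine):

```python
import json
import unittest

def run_scenario(alpha, epochs, sample_fraction):
    """Stand-in for the real training routine; returns test-set accuracy."""
    # The actual branch trains the Chapter 5 network here.
    return 0.57  # placeholder value

class FitAccuracyRegression(unittest.TestCase):
    def check_and_dump(self, name, **params):
        accuracy = run_scenario(**params)
        # Emit one JSON document per scenario for offline analysis.
        with open(f"{name}.json", "w") as fh:
            json.dump({"scenario": name, "params": params,
                       "accuracy": accuracy}, fh)
        # The book's claim: more than 95% accuracy.
        self.assertGreaterEqual(accuracy, 0.95)

    def test_full_mnist_ten_epochs(self):
        self.check_and_dump("full_mnist", alpha=0.005,
                            epochs=10, sample_fraction=1.0)

if __name__ == "__main__":
    unittest.main()
```

With the placeholder value the assertion fails, which mirrors what my real runs do.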
The program depends on the incomplete branch I made for #12.
The typical fitting accuracy I get for the full MNIST dataset is around 57%.
To make reproduction easier, I am holding back all inessential changes, including the fix for the performance issue in #11.
I tried to run all the configurations listed in the test, but unfortunately my job ran past the 4h timeout I assigned it, so I'll have to raise the limit and try again. I will attach the four instance-hours' worth of data I have.
This is my configuration: