Train from logical error #66
I can't reproduce this on the latest revision.
I didn't make any changes except specifying the epoch to start loading from. I have attached a log file showing the train and load-from commands, which are the same except that I specify the file to load from.
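Roughly, the two invocations look like the sketch below; the paths, file names, and exact flag spellings (-data_file, -savefile, -train_from, -start_epoch) are placeholders here, not copied from the log:

```
# first run: train from scratch (placeholder paths)
th train.lua -data_file data/train.hdf5 -savefile model

# resume: the same command, plus the checkpoint to load and the epoch to start from
th train.lua -data_file data/train.hdf5 -savefile model \
  -train_from model_epoch7.t7 -start_epoch 8
```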
Something is not right. According to your log file, you always run the same command:
Is that the case?
I ran it from the beginning again after you said it was strange. Attached is the log file for that run. Also attached is the
It seems that AdaGrad does not play nicely with the
Also, please don't set your options within the code. It is error-prone and makes it harder for whoever might assist you to know what you are doing.
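For example, a minimal sketch of reading options from the command line with torch.CmdLine instead of hard-coding them; the option names below are just placeholders, not the project's actual option table:

```lua
-- Minimal sketch: read options from the command line instead of editing the code.
-- torch.CmdLine is standard Torch; the option names below are placeholders.
require 'torch'

local cmd = torch.CmdLine()
cmd:option('-train_from', '', 'path of a checkpoint to resume from (empty = start fresh)')
cmd:option('-start_epoch', 1, 'epoch number to resume at')
cmd:option('-optim', 'sgd', 'optimizer: sgd | adagrad | adam')
local opt = cmd:parse(arg)

if opt.train_from ~= '' then
  print(('resuming from %s at epoch %d'):format(opt.train_from, opt.start_epoch))
end
```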
Will remember not to inline changes from now on.
I am trying to resume training from a checkpoint file. Even though it says the model was loaded, the perplexity restarts at the weight-initialization level, and the translation accuracy when I use evaluate.lua also suggests that the model is simply reinitializing its vectors instead of loading from the checkpoint.
Is this an issue with the API? What am I doing wrong?
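A quick sanity check I can try (a sketch assuming the checkpoint is an ordinary torch.save'd table with the network in a 'model' field; that field name and the path are guesses): load the .t7 file directly and compare its parameter norm against a freshly initialized model.

```lua
-- Sketch: check whether a checkpoint actually contains trained weights.
-- Assumes the checkpoint was written with torch.save and that the trained
-- network sits in a 'model' field; the field name and path are guesses.
require 'torch'
require 'nn'

local checkpoint = torch.load('model_epoch7.t7')  -- placeholder path
print(checkpoint)                                  -- inspect what was actually saved

local model = checkpoint.model or checkpoint
local params = model:getParameters()
print('parameter norm:', params:norm())
-- If this norm is essentially identical to that of a freshly initialized model,
-- the trained weights were never saved (or never loaded).
```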