I've implemented a small binary text classification task in `problems.py` and `util.py`. I'm using a small MLP similar to the MNIST model. When I run the model with a regular optimizer, the loss on the training dataset goes down easily.
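For reference, my problem definition follows the same pattern as the MNIST problem in `problems.py`: a function that returns a `build()` callable constructing the graph and returning a scalar loss. The sketch below is simplified; the function name, layer sizes, and the stand-in random tensors are placeholders for my actual implementation (the real data pipeline lives in `util.py`):

```python
import tensorflow as tf


def binary_text_classification(num_features=1000, hidden_size=20,
                               batch_size=128):
  """Simplified sketch of the problem; mirrors the MNIST example's
  pattern of returning a build() callable that yields a scalar loss."""
  # Stand-in data; the actual features/labels are loaded via util.py.
  features = tf.random_normal([batch_size, num_features])
  labels = tf.random_uniform([batch_size], maxval=2, dtype=tf.int32)

  def build():
    # Small MLP, same shape as the MNIST model: one sigmoid hidden layer.
    hidden = tf.layers.dense(features, hidden_size,
                             activation=tf.nn.sigmoid)
    logits = tf.layers.dense(hidden, 2)
    return tf.losses.sparse_softmax_cross_entropy(labels=labels,
                                                  logits=logits)

  return build
```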
However, the meta-optimizer fails to minimize the loss: after 10k epochs, the loss is still at chance level, as if the model were untrained. Do you have any insight or tips on how I could debug the meta-learner?
Thanks in advance. I really appreciate your help.