Hi, thanks for your interesting work. I have a question regarding the meta-training step in the PyTorch implementation. In lines 167-169 of 'meta.py', the parameters of the classifier are updated according to Equation 3 in the paper, using the loss on the training samples. The loss on the validation samples should then be back-propagated to perform the second (outer) loop of meta-learning, following Equations 4 and 5. However, in the released PyTorch implementation I can only see a forward pass over the validation samples; that loss is never back-propagated to optimise the SS parameters and theta (Equations 4 and 5). I would really appreciate it if you could clarify this for me. Thank you very much.
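For reference, here is a minimal sketch of the two-loop update the question expects. All names (`theta`, `ss_params`, `forward`, the toy data and learning rates) are illustrative assumptions, not taken from the repository; the point is only that the inner update must stay differentiable (`create_graph=True`) so the validation loss can reach the meta-parameters:

```python
import torch
import torch.nn.functional as F

# Hypothetical meta-parameters: a base classifier weight `theta` and
# scaling/shifting parameters `ss_params` (names are illustrative).
theta = torch.randn(10, 64, requires_grad=True)   # classifier weights
ss_params = torch.ones(64, requires_grad=True)    # SS (scale) parameters
meta_optimizer = torch.optim.Adam([theta, ss_params], lr=1e-3)

def forward(x, weight, scale):
    # Toy model: scale features with the SS parameters, then classify.
    return F.linear(x * scale, weight)

x_train, y_train = torch.randn(5, 64), torch.randint(0, 10, (5,))
x_val, y_val = torch.randn(5, 64), torch.randint(0, 10, (5,))
inner_lr = 0.01

# Inner loop (Equation 3): update the classifier on the training split.
# create_graph=True keeps this update differentiable, so the outer loss
# can back-propagate through it.
train_loss = F.cross_entropy(forward(x_train, theta, ss_params), y_train)
grad = torch.autograd.grad(train_loss, theta, create_graph=True)[0]
theta_prime = theta - inner_lr * grad

# Outer loop (Equations 4 and 5): the validation loss, computed with the
# updated classifier, is back-propagated to update theta and the SS
# parameters. This is the backward pass the question says is missing.
val_loss = F.cross_entropy(forward(x_val, theta_prime, ss_params), y_val)
meta_optimizer.zero_grad()
val_loss.backward()
meta_optimizer.step()
```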