As we can see, lines 280 and 283 of the file 'methods/backbone.py' show that the feature-wise transformation module is used in MAML rather than in the metric-based models. But this contradicts the paper, doesn't it?
Sorry for the confusion. The name "maml" here simply indicates that we want to enable gradient back-propagation through these layers in the learning-to-generalize (or learning-to-learn) training process.
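To make the back-propagation requirement concrete: as described in the paper, the feature-wise transformation layer perturbs intermediate activations with per-channel affine terms sampled from distributions whose hyper-parameters are learned by the outer (learning-to-learn) loop. Below is a minimal PyTorch sketch; the class name, attribute names, and initial values are illustrative, not the exact code at lines 280/283 of methods/backbone.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureWiseTransformation2d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        # Learnable hyper-parameters of the sampling distributions. These are
        # what the learning-to-learn (outer) loop optimizes.
        self.theta_gamma = nn.Parameter(torch.full((1, num_features, 1, 1), 0.3))
        self.theta_beta  = nn.Parameter(torch.full((1, num_features, 1, 1), 0.5))

    def forward(self, x):
        if not self.training:
            return x  # the transformation is only applied during training
        # Reparameterized sampling keeps theta_gamma / theta_beta inside the
        # autograd graph, so gradients can flow back into them:
        # gamma ~ N(1, softplus(theta_gamma)), beta ~ N(0, softplus(theta_beta))
        eps_g = torch.randn(1, x.size(1), 1, 1, device=x.device)
        eps_b = torch.randn(1, x.size(1), 1, 1, device=x.device)
        gamma = 1 + eps_g * F.softplus(self.theta_gamma)
        beta  = eps_b * F.softplus(self.theta_beta)
        return gamma * x + beta
```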
So why do we use the feature-wise transformation module when we want to enable gradient back-propagation in the learning-to-generalize (or learning-to-learn) training process? Is this back-propagation similar to computing gradients along the inner loop in MAML?
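To make the question concrete, here is my understanding of the learning-to-learn step as a self-contained toy sketch. Everything here (the one-layer "model", the data, the learning rate) is a stand-in of my own; only the bi-level structure follows my reading of the paper:

```python
import torch
import torch.nn.functional as F

# `w` stands in for the model weights, `theta_ft` for the FWT hyper-parameters.
w = torch.randn(4, 2, requires_grad=True)
theta_ft = torch.zeros(4, requires_grad=True)

def loss_fn(weights, x, y, theta=None):
    feat = x @ weights.t()
    if theta is not None:  # apply a (simplified) feature-wise transformation
        feat = (1 + F.softplus(theta)) * feat
    return F.mse_loss(feat.sum(dim=1, keepdim=True), y)

x_seen, y_seen = torch.randn(8, 2), torch.randn(8, 1)
x_unseen, y_unseen = torch.randn(8, 2), torch.randn(8, 1)

# Inner step on the pseudo-seen task, with the FWT applied. create_graph=True
# keeps the weight update differentiable -- the same trick as MAML's inner loop.
inner_loss = loss_fn(w, x_seen, y_seen, theta=theta_ft)
(grad_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
w_fast = w - 0.01 * grad_w

# Outer step on the pseudo-unseen task (FWT removed, as I read the paper):
# the loss of the *updated* weights is back-propagated through the inner
# update into theta_ft, which is a second-order gradient as in MAML.
outer_loss = loss_fn(w_fast, x_unseen, y_unseen)
grad_theta = torch.autograd.grad(outer_loss, theta_ft)[0]
print(grad_theta)  # non-zero: the gradient flows through the inner update
```

If that reading is right, the "maml"-style layers would be needed simply so that this differentiable path through the inner update exists.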