I appreciate your contribution, and I've been studying this paper recently, but I have a few questions.
Question 1:
In the paper, the pipeline of the proposed few-shot learning method includes three phases:
(a) DNN training on large-scale data
(b) Meta-transfer learning
(c) meta-test
In README.md you mention the pre-train phase, the meta-train phase, and the meta-test phase.
Is the pre-training phase equivalent to (a) DNN training on large-scale data?
Does the meta-train phase correspond to (b) meta-transfer learning?
Does the meta-test phase correspond to (c) meta-test?
Question 2:
meta-transfer-learning/pytorch/trainer/meta.py, lines 239 to 294 at commit 835b6bb:
# Calculate the confidence interval, update the logs
m, pm = compute_confidence_interval(test_acc_record)
print('Val Best Epoch {}, Acc {:.4f}, Test Acc {:.4f}'.format(trlog['max_acc_epoch'], trlog['max_acc'], ave_acc.item()))
print('Test Acc {:.4f} + {:.4f}'.format(m, pm))
Does the above code correspond to "classifier fine-tuning" in the (c) meta-test phase?
Line 258 of this code sets the model to eval mode, so it does not appear to fine-tune the base-learner.
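For reference, the compute_confidence_interval call above typically implements the standard 95% confidence interval over per-episode accuracies. A minimal self-contained sketch of that formula (the repository's actual implementation may differ in detail):

```python
import math

def compute_confidence_interval(data):
    """Return (mean, half-width) of the 95% confidence interval
    over a list of per-episode accuracies.

    Sketch of the usual formula 1.96 * std / sqrt(n), using the
    sample standard deviation; the repo's own helper may differ.
    """
    n = len(data)
    mean = sum(data) / n
    # unbiased sample variance
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    pm = 1.96 * math.sqrt(var) / math.sqrt(n)
    return mean, pm
```

This is why the log line prints "Test Acc m + pm": m is the mean accuracy over test episodes and pm is the interval half-width.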
Question 3:
The dataset in the code is divided into three parts: a training set, a validation set, and a test set.
Are the train samples in (a), the meta-batches in (b) and the train samples in (c) sampled from the training set?
Are the test samples in (c) sampled from the test set?
Answer to Q3: For example, on miniImageNet, the model is trained on the meta-train set, including pre-training and meta-transfer learning. Then, the model is evaluated on the meta-test set. The validation set is never used. This is to align with other related work, e.g., MAML and ProtoNets.
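To make the split usage concrete, meta-test evaluation typically samples N-way K-shot episodes from the meta-test classes only: a support set used for fine-tuning and a query set used for scoring. A minimal sketch of such an episodic sampler (the function name and structure are illustrative, not the repository's actual data loader):

```python
import random

def sample_episode(class_to_images, n_way=5, k_shot=1, q_query=15, seed=None):
    """Sample one N-way K-shot episode from a split.

    `class_to_images` maps class name -> list of image identifiers,
    drawn entirely from one split (e.g. the meta-test classes).
    Returns (support, query) as lists of (image, episode_label) pairs.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_to_images), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = rng.sample(class_to_images[cls], k_shot + q_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```

Because both the support and query sets of a meta-test episode come from meta-test classes, the meta-train and validation classes are never seen at evaluation time.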
If you have any further questions, please feel free to contact me.
@yaoyao-liu Thanks for your reply. I'm still confused about some things.
Q1. In the classifier fine-tuning phase, is the base-learner fine-tuned on the test set?
Q2. In the classifier fine-tuning phase, I located this function, but I still don't understand how it updates the parameters of the base-learner.
Are the parameters optimized according to the loss in this code?
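As general background (a toy illustration, not the repository's actual code): model.eval() only changes the behavior of layers such as batch-norm and dropout; it does not disable gradient computation, so the base-learner can still be fine-tuned with explicit gradient steps on the support-set loss. A 1-D analogue of that inner-loop update:

```python
def inner_loop_update(w, support, lr=0.01, steps=100):
    """Toy 1-D illustration of inner-loop base-learner fine-tuning.

    The "base-learner" is a single scalar weight w for the model
    f(x) = w * x with mean-squared-error loss. Each step applies an
    explicit gradient step on the support set, mirroring how
    meta-test fine-tuning updates parameters directly.
    """
    for _ in range(steps):
        # analytic gradient of mean((w*x - y)**2) w.r.t. w
        grad = sum(2 * (w * x - y) * x for x, y in support) / len(support)
        w = w - lr * grad
    return w
```

Here the gradient is written out analytically for clarity; in PyTorch code the same role is usually played by autograd on the support-set loss, with the updated weights then evaluated on the query set.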