Replies: 5 comments 2 replies
-
Are you perhaps overfitting your data? What does your training set look like?
-
I ran into the same phenomenon when fine-tuning on my own dataset. One possible reason is the size of the fine-tuning dataset you are using.
-
Is your training loss going down? Do you monitor your validation loss?
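If not, tracking both per epoch will quickly show whether the fine-tune is overfitting (training loss falling while validation loss rises). A minimal sketch, assuming a plain PyTorch loop; `model`, `train_loader`, and `val_loader` are generic placeholders, not names from this repo:

```python
import torch

def train_with_val_tracking(model, train_loader, val_loader, epochs=10, lr=1e-4):
    # Records mean train/val loss per epoch so the two curves can be compared.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    history = {"train": [], "val": []}
    for epoch in range(epochs):
        model.train()
        total = 0.0
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            total += loss.item()
        history["train"].append(total / len(train_loader))

        model.eval()
        with torch.no_grad():
            val_total = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        history["val"].append(val_total / len(val_loader))
        print(f"epoch {epoch}: train={history['train'][-1]:.4f}, "
              f"val={history['val'][-1]:.4f}")
    return history
```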
-
I am seeing the same behavior while fine-tuning on the zero-shot datasets (ETTh and Ercot), keeping the config files the same and only changing the prediction length to match the given dataset. I run the fine-tune command, and then run evaluate.py loading the model saved from the above fine-tuning. Below are the MASE values for each dataset:
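(For context, the MASE I report is the standard one: forecast MAE scaled by the in-sample MAE of a seasonal naive forecast, so anything above 1 is worse than a naive baseline. A minimal sketch of the computation for reference; this is an illustration, not the repo's evaluate.py:)

```python
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    # Forecast MAE on the test window.
    mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    # In-sample MAE of the seasonal naive forecast (shift by `season` steps).
    y_train = np.asarray(y_train)
    naive_mae = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return mae / naive_mae
```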
-
Without a plot of the validation loss over iterations, it is very difficult to tell whether the fine-tuning was done with the correct hyper-parameters.
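Something as simple as the sketch below would help; `val_losses` is a placeholder for whatever per-iteration validation losses you log:

```python
import matplotlib.pyplot as plt

def plot_val_loss(val_losses):
    # One validation loss value per iteration, plotted in order.
    plt.plot(range(1, len(val_losses) + 1), val_losses, marker="o")
    plt.xlabel("iteration")
    plt.ylabel("validation loss")
    plt.title("validation loss over iterations")
    plt.show()
```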
-
Fine-tuning on my own dataset gives worse results than zero-shot. Is this reasonable?