
What are the effects of overfitting for downstream tasks? #38

Open
SnoozingSimian opened this issue Jul 16, 2023 · 0 comments

I was trying to adapt the sentence-transformers/multi-qa-mpnet-base-dot-v1 model to the financial domain with SEC data, using GPL.

I trained the model with the following hyperparams:

```json
{
  "learning_rate": 0.00002,
  "num_examples_eval": 1000,
  "num_examples_train": 20000,
  "num_epochs": 15
}
```
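For a rough sense of scale, the config above does not specify a batch size, so assuming a common default of 32 (an assumption, not from the config), the run works out to the following number of optimizer steps:

```python
import math

# Assumed batch size; the hyperparams above do not specify one.
BATCH_SIZE = 32
NUM_TRAIN = 20_000   # num_examples_train
NUM_EPOCHS = 15      # num_epochs

steps_per_epoch = math.ceil(NUM_TRAIN / BATCH_SIZE)
total_steps = steps_per_epoch * NUM_EPOCHS
print(steps_per_epoch, total_steps)  # → 625 9375
```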

My loss curves were as follows (plots omitted): *Train Loss Curve*, *Validation Loss Curve*.

The model itself seems to be overfitting, but the trained model's performance is not up to the mark even when I used early stopping. I trained one for 3 epochs, and the unadapted model still performs better than the adapted ones. I was wondering if I could get some insight into why this happens; I don't really know where to ask this question. If there is some other place where this question would be more suitable, please let me know and I will take it there, especially because this is more of a theoretical question than something tied to this library.
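For concreteness, the early-stopping behavior described above can be sketched as a simple patience rule over per-epoch validation losses. This is a minimal illustration, not the GPL library's implementation; the function name and the example loss values are hypothetical:

```python
def best_stop_epoch(val_losses, patience=2):
    """Return the 1-indexed epoch whose checkpoint early stopping keeps,
    stopping once validation loss fails to improve for `patience` epochs."""
    best_epoch, best_loss = 1, val_losses[0]
    for epoch, loss in enumerate(val_losses[1:], start=2):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop
    return best_epoch

# A made-up validation curve that bottoms out at epoch 3, then overfits:
losses = [0.90, 0.55, 0.48, 0.52, 0.60, 0.71]
print(best_stop_epoch(losses))  # → 3
```

Under a curve like this, early stopping keeps the epoch-3 checkpoint even though training loss keeps falling afterwards, which matches the "trained one for 3 epochs" scenario above.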

I am relatively new to training models, so please let me know if I am making any obvious mistakes here (or if any other information is required).
