New Workflow: LDA then XGBoost #155
base: master
Conversation
New workflow which first runs LDA and then runs XGBoost, using the LDA result as the main score. This helps prevent overfitting with XGBoost, and the results are pretty comparable to running XGBoost alone.
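For illustration, here is a minimal sketch of the two-stage idea written against plain scikit-learn and xgboost rather than PyProphet's own learners and configs; the synthetic data, feature layout, and estimator settings are assumptions for demonstration only, not this PR's code.

# Conceptual sketch of "LDA then XGBoost" (illustrative only -- the PR wires this
# through PyProphet's own learners and config objects, not these plain estimators).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                      # stand-in for the sub-score columns
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Stage 1: learn a linear discriminant score from the sub-scores.
lda = LinearDiscriminantAnalysis().fit(X, y)
lda_score = lda.decision_function(X)                # plays the role of the main score

# Stage 2: train XGBoost with the LDA score prepended as the leading feature.
X_stage2 = np.column_stack([lda_score, X])
xgb = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
xgb.fit(X_stage2, y)

The idea is that the linear discriminant condenses the sub-scores into a single well-behaved score, which the tree model then only has to refine.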
Thanks for the addition! I added some comments/suggestions. I am wondering/concerned about how the multi-learner does with over-fitting, because it seems to basically perform the learning and scoring twice on the same data for ss_num_iter and xval_num_iter iterations. Can you add an example output of running LDA and XGBoost vs. LDA_XGBoost, and show the score distributions and pp plots, if it's not too much work?
I was also thinking we should make PyProphetMultiLearner abstract, so that we can open it up to different kinds of combinations for multi-sequence learners.
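To make that suggestion concrete, here is a minimal sketch of what an abstract PyProphetMultiLearner could look like; every method name and signature below is a hypothetical illustration of the interface, not the PR's actual API.

# Hypothetical sketch of an abstract multi-stage learner (names are illustrative
# assumptions, not the actual classes/methods in this PR).
from abc import ABC, abstractmethod

class PyProphetMultiLearner(ABC):
    """Chains several learners; each stage's score is fed into the next stage's table."""

    @abstractmethod
    def stage_configs(self):
        """Return the ordered per-stage configurations, e.g. [config_lda, config_xgb]."""

    @abstractmethod
    def run_stage(self, config, table):
        """Run one learner, e.g. return PyProphet(config).learn_and_apply(table)."""

    @abstractmethod
    def inject_score(self, table, result):
        """Copy the finished stage's score into the table as the next stage's main score."""

    def learn_and_apply(self, table):
        result = scorer = weights = None
        for config in self.stage_configs():
            (result, scorer, weights) = self.run_stage(config, table)
            table = self.inject_score(table, result)
        return result, scorer, weights

A concrete LDA-then-XGBoost subclass would then only implement the three abstract methods, and other stage combinations could be added the same way.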
pyprophet/scoring/runner.py
Outdated
# remove columns that are not needed for LDA
table_lda = self.table.drop(columns=["var_precursor_charge", "var_product_charge", "var_transition_count"], errors='ignore')

(result_lda, scorer_lda, weights_lda) = PyProphet(config_lda).learn_and_apply(table_lda)
Will this run the full learning and scoring for ss_num_iter and xval_num_iter iterations, and then do a second pass with XGBoost on the same data for another ss_num_iter and xval_num_iter iterations? I am wondering whether this results in any overfitting.
I don't think it overfits; however, it might be unnecessary to do that many iterations (a config sketch for a cheaper first pass follows below).
Plotting eFDR / FDR identification curves for the different workflows, we can see that LDA_XGBoost is quite similar to XGBoost in terms of overfitting, and actually overfits slightly less than XGBoost; all results look reasonable. Here are PyProphet reports for the different classifiers, for a diaPASEF single injection with an experimental library.
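On the point about possibly doing fewer iterations, here is a purely illustrative sketch of deriving a cheaper config for the LDA pre-pass; the attribute names mirror the ss_num_iter / xval_num_iter options mentioned above, but the config structure shown is an assumption, not how this PR's config objects are actually built.

# Purely illustrative: derive a cheaper config for the LDA pre-pass.
import copy
from types import SimpleNamespace

config = SimpleNamespace(classifier="XGBoost", ss_num_iter=10, xval_num_iter=10)  # stand-in

config_lda = copy.deepcopy(config)
config_lda.classifier = "LDA"
config_lda.ss_num_iter = 3     # fewer semi-supervised iterations for the pre-pass
config_lda.xval_num_iter = 1   # single cross-validation pass for the pre-pass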
New workflow which first runs LDA and then runs XGBoost, using the LDA result as the main score. This helps prevent the pi0 estimation errors one can run into with XGBoost.
Overall, the results seem quite comparable to just running XGBoost on my dataset.