
New Workflow: LDA then XGBoost #155


Open · jcharkow wants to merge 7 commits into master

Conversation

@jcharkow (Contributor) commented Aug 7, 2025

New workflow which first runs LDA and then runs XGBoost using the LDA results as the main score. This helps prevent the pi0 errors that one can run into with XGBoost.

Overall, the results seem quite comparable to just running XGBoost on my dataset.
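
For context, the two-stage idea is roughly the following (a minimal sketch built directly on scikit-learn and xgboost; the PR itself wires this through PyProphet's semi-supervised learners, so the function and names below are illustrative, not the PR's code):

# A minimal sketch of the two-stage idea (illustrative only, not the PR's code;
# PyProphet wraps this in its own semi-supervised cross-validation machinery).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from xgboost import XGBClassifier

def lda_then_xgboost(X, y):
    # Stage 1: LDA yields a single, low-variance discriminant score.
    lda_score = LinearDiscriminantAnalysis().fit(X, y).decision_function(X)
    # Stage 2: XGBoost trains with the LDA score prepended as the main feature,
    # so the flexible learner starts from the stable LDA ranking.
    X_aug = np.column_stack([lda_score, X])
    clf = XGBClassifier(n_estimators=100, max_depth=6, eval_metric="logloss")
    clf.fit(X_aug, y)
    return clf.predict_proba(X_aug)[:, 1]

The rationale is that the LDA discriminant is a stable main score, which then anchors the more flexible XGBoost pass.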

jcharkow added 3 commits July 30, 2025 17:50
New workflow which first runs LDA and then runs XGBoost using the LDA results as the main score. This helps prevent overfitting with XGBoost; results are pretty comparable to XGBoost alone.
@singjc (Contributor) left a comment


Thanks for the addition! I added some comments/suggestions. I am wondering/concerned about how the multi-learner handles overfitting, because it seems to basically perform the learning and scoring twice on the same data for ss_num_iter and xval_num_iter. If it's not too much work, could you add example output from running LDA and XGBoost vs. LDA_XGBoost, showing the score distributions and PP plots?

I was also thinking we should make PyProphetMultiLearner abstract, so that we can open it up to different kinds of combinations for multi-sequence learners.
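
As a rough sketch of what that abstraction could look like (the method names and signatures below are assumptions, not existing PyProphet API):

from abc import ABC, abstractmethod

class PyProphetMultiLearner(ABC):
    """Chain of learners; each stage's main score seeds the next stage."""

    @abstractmethod
    def stage_configs(self):
        """Return the ordered learner configs, e.g. [lda_config, xgb_config]."""

    @abstractmethod
    def run_stage(self, config, table, previous):
        """Run a single learner pass, optionally seeded with the previous result."""

    def learn_and_apply(self, table):
        result = None
        for config in self.stage_configs():
            # The first stage sees previous=None; later stages build on it.
            result = self.run_stage(config, table, previous=result)
        return result

Concrete subclasses like the LDA-then-XGBoost combination would then only define the stage sequence and how one stage seeds the next.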

# remove columns that are not needed for LDA
table_lda = self.table.drop(columns=["var_precursor_charge", "var_product_charge", "var_transition_count"], errors='ignore')

(result_lda, scorer_lda, weights_lda) = PyProphet(config_lda).learn_and_apply(table_lda)
Contributor

Will this run the full learning and scoring for ss_num_iter and xval_num_iter, and then do a second pass with XGBoost on the same data, again for ss_num_iter and xval_num_iter? I am wondering if this results in any overfitting.
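
Schematically, the concern is a control flow like this (a sketch; learn_and_score is a stand-in, and the nesting is inferred from the ss_num_iter/xval_num_iter options rather than read from the diff):

def multi_learner_pass(table, configs, ss_num_iter, xval_num_iter, learn_and_score):
    # Each stage repeats the full semi-supervised / cross-validation schedule
    # on the same table, i.e. len(configs) * ss_num_iter * xval_num_iter rounds.
    for config in configs:                      # stage 1: LDA, stage 2: XGBoost
        for _ in range(ss_num_iter):            # semi-supervised iterations
            for _ in range(xval_num_iter):      # cross-validation splits
                learn_and_score(config, table)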

Contributor Author

I don't think it overfits; however, it might be unnecessary to do that many iterations.

@jcharkow (Contributor Author)

Comparing eFDR/FDR identification curves across the different workflows, we can see that LDA_XGBoost is quite similar to XGBoost in terms of overfitting, and actually overfits slightly less than XGBoost; all results look reasonable.

[Figure: eFDR vs. FDR identification curves comparing the workflows]

Here are PyProphet reports for the different classifiers, for a diaPASEF single injection analyzed with an experimental library.
pyprophet_reports.zip
