Evaluation Implementation #170
Conversation
A few initial thoughts.
The first pass works for running all datasets over all gold standard data. The next pass will allow a user to select which datasets to run over the gold standard.
I need to make a third pass at the code so that the evaluation code is truly optional.
I'm leaving a few initial comments but skipped evaluation.py for now. I'm still looking for the best source of EGFR evaluation data we can use for a test case on pathways created with the egfr.yml config file.
Ready for review.
Initially I was going to have an EGFR-based test case be part of this initial pull request. Now, I think we should work to merge it soon and add that test case in a follow-up pull request to test the actual precision calculation.
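For context, a follow-up test case would exercise a precision calculation of roughly this shape: compare the nodes a reconstructed pathway predicts against a gold standard set. This is a minimal illustrative sketch, not the actual implementation in this pull request; the function name and the example node identifiers are hypothetical.

```python
def precision(predicted_nodes, gold_standard_nodes):
    """Fraction of predicted nodes that appear in the gold standard set.

    Returns 0.0 when nothing was predicted, avoiding division by zero.
    """
    predicted = set(predicted_nodes)
    if not predicted:
        return 0.0
    return len(predicted & set(gold_standard_nodes)) / len(predicted)

# Hypothetical example: 2 of the 3 predicted nodes are in the gold standard.
print(precision(["EGFR", "GRB2", "FAKE1"], ["EGFR", "GRB2", "SOS1"]))
```

An EGFR test case would supply a known gold standard node list and assert the computed precision matches a hand-checked value.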
Co-authored-by: Anthony Gitter <[email protected]>
I pushed a couple final changes. This is ready to merge once the tests pass.