Increasing the Transparency of Variant Effect Prediction #69
Labels
2024 (topics proposed for the 2024 hackathon in St. Louis)
curation
Morning Session
selected-for-action (tasks that were selected at the event and have follow-up attached to the issue)
unconference
Submitter Name
Kyle Moad/Rachel Karchin
Submitter Affiliation
Johns Hopkins
Submitter Github Handle
kmoad/RachelKarchin
Additional Submitter Details
We are the PI and an engineer from the team behind OpenCRAVAT, a meta-annotation software framework for variant interpretation. OpenCRAVAT provides predictions from over 30 variant effect prediction tools. We want to increase the utility and transparency of these predictions.
Project Details
We will discuss issues surrounding the growing integration of computational tools for variant effect prediction into the diagnostic process. These include developing approaches that allow users to interpret and reason about predictions, to make sense of diverse predictions from multiple predictors, and to map prediction scores to the ACMG/AMP/VICC recommendations for classification of germline and somatic variants. Specific topics include: evidence double counting; the need for transparency in the features and logic underpinning predictions; understanding training data to prevent circular reasoning; and how to interpret results from increasingly popular 'black box' AI models. Aimed at clinicians, diagnostic personnel, and anyone interested in the future of genomic medicine, this workshop aims to provide valuable insights into improving the reliability of variant pathogenicity classifications and fostering the clinical application of predictive methods, ultimately advancing patient care in personalized medicine.
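To make the score-to-classification mapping concrete, below is a minimal Python sketch of the kind of logic under discussion: translating a single predictor's calibrated score into ACMG/AMP computational-evidence strength tiers (PP3/BP4). The function name and all thresholds are hypothetical placeholders for illustration only; this is not OpenCRAVAT code and the cut points are not the published calibration intervals.

```python
# Illustrative sketch only: map a hypothetical calibrated pathogenicity score
# in [0, 1] to an ACMG/AMP computational-evidence tier (PP3/BP4).
# The thresholds below are placeholders, NOT published calibrations; real use
# requires tool-specific, validated score intervals.

def acmg_evidence_from_score(score: float) -> str:
    """Return a hypothetical evidence code and strength for a score in [0, 1]."""
    # Higher scores favor pathogenicity in this illustrative scheme.
    if score >= 0.95:
        return "PP3_Strong"
    if score >= 0.85:
        return "PP3_Moderate"
    if score >= 0.70:
        return "PP3_Supporting"
    if score <= 0.05:
        return "BP4_Moderate"
    if score <= 0.20:
        return "BP4_Supporting"
    return "Indeterminate"  # score is uninformative; no evidence applied


if __name__ == "__main__":
    for s in (0.97, 0.75, 0.50, 0.10):
        print(f"score={s:.2f} -> {acmg_evidence_from_score(s)}")
```

A sketch like this also highlights the transparency issues named above: without knowing a tool's training data and calibration, the thresholds and the resulting evidence strengths cannot be justified.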
Required Knowledge
Familiarity with genetics/genomics and an interest in variant effect prediction.