Answers should be about one paragraph long. Please don't use ChatGPT or any other LLM; we're interested in your thoughts, not someone (or something) else's.
1. Research and discuss a recent case study that highlights challenges in healthcare AI. Make sure to use at least 3 primary and/or secondary resources.
2. How would following the Hippocratic Oath's dictum, "First, do no harm," impact the types of models and systems an AI/ML expert would be willing to design and help implement?
3. Air Canada was recently found financially liable for inaccurate information given by its customer-service chatbot. Do you think this legal decision will impact a hospital's ability to roll out a patient-facing chatbot? Explain why or why not, and describe the sorts of models that are now more or less likely to be developed.
4. Would you expect ML systems that predict relatively rare conditions, or that try to catch false negatives (missed diagnoses), to increase, decrease, or have no impact on the total amount of money spent on healthcare?
5. Iatrogenesis (harm caused by medical care) can come in many forms (misdiagnosis, wrong dose administered, bloodletting, etc.). How would wide-scale adoption of AI-based medical decision-making change iatrogenesis qualitatively (i.e., ignoring whether the overall level would go up or down)?
6. Why do some experts in healthcare ML prefer the word "integrated" over "deployed" when describing a new ML model in a healthcare setting? Similarly, why "augmented intelligence" rather than "artificial intelligence"?
| Criteria | Description |
|---|---|
| Content | - Relevance and depth of answer.<br>- Demonstration of understanding of challenges in healthcare AI.<br>- Use of at least 3 primary and/or secondary resources (Q1 only). |
| Clarity and Organization | - Coherent structure and logical flow of ideas.<br>- Clear articulation of thoughts and concepts. |