Do you feel comfortable with a Fitbit having access to your sleep schedule? With a computer using AI to interpret your mammogram or CT scan? Or deciding whether your biopsy contains cancer? As AI is applied to an ever-broader range of healthcare applications, a variety of questions arise regarding the ethical implications of incorporating deep learning models into medicine. How do we properly balance the benefits of AI with transparency and patient privacy? How do we limit the incorporation of biases into AI models? Who is accountable when AI makes a mistake? During this discussion, we voiced our opinions on all of these questions as we explored the future and implications of a rapidly evolving medical landscape.
A curriculum is created for each discussion, which includes articles to distribute to the residents beforehand, a lesson to introduce the topic at the start of the discussion, and discussion questions to facilitate resident participation and engagement. Linked to this post is the complete discussion outline for our November 16th discussion on AI in healthcare at the North Hill Retirement Community. Below are excerpts from the document, including the list of articles distributed to the residents prior to the discussion and a list of some of the questions we discussed together.
Articles distributed in advance to residents:
Discussion questions:
After this discussion, have your opinions changed on artificial intelligence in healthcare?
How will the role of physicians change due to AI, for example, 10 and 25 years from now? How will the patient journey change?
Whom will we allow to have control of (and access to) patient data?
Which applications of AI should require explicit patient consent?
Who is accountable when AI makes a mistake?
Refer to our "resources" page to find relevant books, videos, documentaries, and more related to this topic!
Full discussion outline document: