The Yale Certificate Program in Medical Software and Medical AI
Introductory video about the program
In this video, Prof. Papademetris describes the certificate program.
Selections from the lecture videos
This video consists of excerpts from the 20+ hours of lecture videos recorded for this program.
This is a new program launching in January 2024. As stated on the official webpage: "Our non-degree program builds on the foundation of the recently published textbook “Introduction to Medical Software: Foundations for Digital Health, Devices, and Diagnostics” and the popular companion Yale Coursera Course “Introduction to Medical Software.” The class has already enrolled over 19,000 students from around the world. The program will be taught by a team of experienced faculty from the Section of Biomedical Informatics and Data Science at the Yale School of Medicine with expertise in AI, data science, clinical decision support, and medical software. The program will be led by Professor Papademetris, who was the lead instructor of the Coursera class."
The official webpage for the certificate program can be found at https://online.yale.edu/medical-software-ai-program. You can find the full list of segments in this PDF document.
There are other courses in the general area of “AI and Healthcare.” Why is there a need for another program? The fundamental insight underlying our program (as I discuss in the short video linked to this post) is that, as Kicky van Leeuwen commented, “AI only works in clinical practice when the model is turned into a piece of software.” Or, to put it another way, AI only becomes a useful clinical tool when it is integrated into a larger system. The analogy I often give to my students is that, for many medical applications, AI plays the same role that the engine plays in a car. It is a core component of the car, without which the car cannot function. But critically, users drive the car, not the engine, and there are many other components of the car (seats, steering wheel, tires, windshield wipers …) without which the car does not operate, and which directly affect the users’ experience and safety when driving. To put it even more directly, one never sees engines moving down the street; what you see out there is cars. Much of the discussion around AI in medicine misses this critical point. Many of the problems with AI (errors, bias, hallucinations, …) are not necessarily going to be solved by improving the AI itself. It is more likely that they will be addressed at the car level (the software level), by adding safeguards and fallback mechanisms outside the AI.
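To make the “car level” idea a little more concrete, here is a minimal sketch in Python of what a software-level safeguard around an AI component might look like. Everything in it (the model interface, the confidence threshold, the human-review fallback) is a hypothetical illustration, not code from the program materials.

```python
# Hypothetical sketch: a software-level safeguard and fallback wrapped around
# an AI component. The model interface (model.predict), the 0.9 confidence
# threshold, and the human-review fallback are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Prediction:
    label: str         # e.g. "benign" / "malignant"
    confidence: float  # model-reported confidence in [0, 1]


def route_to_human_review(patient_data: dict, reason: str) -> None:
    """Fallback path: defer the case to a clinician instead of the AI."""
    print(f"Case {patient_data.get('id')} routed to human review: {reason}")


def safeguarded_predict(model, patient_data: dict,
                        threshold: float = 0.9) -> Optional[Prediction]:
    """Return the AI's prediction only if it passes software-level checks;
    otherwise fall back to human review rather than acting on the output."""
    prediction = model.predict(patient_data)  # the "engine"

    # The safeguards live outside the AI, at the "car" (software) level.
    if not (0.0 <= prediction.confidence <= 1.0):
        route_to_human_review(patient_data, reason="malformed model output")
        return None
    if prediction.confidence < threshold:
        route_to_human_review(patient_data, reason="low confidence")
        return None
    return prediction
```

The point of the sketch is simply that the checks and the fallback path are ordinary software, designed, implemented, and tested like any other part of the system.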
Finally, throughout the program we focus not only on what can go right, but also on what can go wrong. Paraphrasing a comment from one of our guest experts, Megan Graham, we need to think about both the “happy path,” which is how to enable “good things” (useful functionality that makes our users’ lives easier), and the “sad path,” which is how to prevent “bad things” that can harm a user or a patient. When we look at AI in medicine, the focus is inevitably on how great the potential applications are. We often forget that a system that is 99% successful will still fail 1% of the time. Ensuring that this 1% of situations does not result in harm to a patient is a critical part of the design, implementation, and testing of medical software.
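As a toy illustration of what testing the sad path might look like, the snippet below (again a hypothetical sketch, reusing the `Prediction` class and `safeguarded_predict` wrapper from the sketch above) asserts that an unreliable AI output is never acted on.

```python
# Hypothetical "sad path" test, assuming the Prediction class and the
# safeguarded_predict function from the sketch above are importable.

class StubLowConfidenceModel:
    """Stands in for an AI component that returns an unreliable prediction."""

    def predict(self, patient_data):
        return Prediction(label="benign", confidence=0.51)


def test_low_confidence_output_is_not_acted_on():
    # Sad path: the system must refuse to act and defer to human review.
    result = safeguarded_predict(StubLowConfidenceModel(), {"id": 123})
    assert result is None
```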
The program consists of 20+ hours of recorded lectures, supplemented by a set of guest expert interviews (all available on YouTube: https://lnkd.in/e7bHsX2X).