“We are entering an age of human-AI collaboration.” The ultimate goal is to be patient-centered, improving both the patient experience and outcomes. To achieve this, Dr. Girish Nadkarni discusses what it takes to use AI responsibly: strong governance, continuous monitoring, and a commitment to equity. Used this way, AI can help ensure every patient receives safe, effective, and unbiased care.
I think the first takeaway is that we're entering an age of human-AI collaboration, where we treat AI as a member of the care team, but we figure out the best way and the best scenario in which to use it, obviously with human oversight, to make the whole care team better, with the ultimate goal of being patient-centered and improving both the experience and the outcomes for the patient.

I think we need to use AI in a safe, effective, ethical, and responsible manner. Doing that requires governance, knowing what fails, when, and how to prevent that failure; assurance, to make sure that any system that touches patients is safe, effective, ethical, and responsible; and monitoring, so that as AI gets deployed, we watch it over time to keep anything from going off the rails.

An AI system can perform differently based on your gender, your race, or your age. That is something we want to address before we deploy AI systems, but also monitor longitudinally over time to make sure there's no bias. And if we do find bias, we want to address it in the most responsible and ethical way possible, to ensure we're delivering the same level of experience and care to all of our patients, regardless of their background.