Health Hats, the Podcast

Healthcare AI for Humans: Governance, Research, and Rights

March 09, 2025

Data scientist Emily Hadley on navigating AI in healthcare, offering practical advice for maintaining patient agency amid algorithmic decision-making.

Summary

This interview with data scientist Emily Hadley examines the intersection of artificial intelligence and healthcare through a deeply personal lens. Hadley's journey began when her own health diagnosis coincided with her graduate studies in analytics, revealing how algorithm-driven systems often affect patient care, especially through insurance claim denials and clinical documentation. The conversation offers practical guidance for patients navigating AI-influenced healthcare, including reviewing AI-generated clinical notes for accuracy, challenging algorithmic insurance decisions, and insisting on human intervention when automated systems fail. Hadley advocates for preserving patient agency and rights within increasingly automated systems while highlighting how algorithm review boards are striving to provide governance in this largely unregulated space. The interview concludes with resources for staying informed about developments in healthcare AI, emphasizing that while AI tools are rapidly advancing, patient advocacy remains vital.

Click here to view the printable newsletter with images. More readable than a transcript, which can also be found below.

Contents

Episode
Proem
A Data Scientist Awakes
Building Guardrails with AI Governance
Hallucinations and Validation with AI in Research
Prompt Engineering-Conversational AI
Verification and Vigilance
Staying Informed
Reflection
Related episodes from Health Hats

Please comment and ask questions:
at the comment section at the bottom of the show notes
on LinkedIn
via email
YouTube channel
DM on Instagram or TikTok to @healthhats

Production Team

You know who you are. I'm grateful.
Podcast episode on YouTube

No video

Inspired by and Grateful to

Eric Pinaud, Laura Marcia, Amy Price, Dave deBronkart

Links and references

Prompt Engineering
Algorithm Review Boards at RTI
Dave deBronkart's Patients Use AI Episode

Proem

This year, I switched from Medicare Advantage to Traditional Medicare. I still needed to purchase a supplemental commercial plan to cover what Medicare Part B didn't. However, the supplemental commercial plan denied some services the previous Medicare Advantage plan had covered. Why? What algorithms did each plan use to determine coverage? How can I manage this?

Welcome to the third installment of Artificial Intelligence Can Work for You. We've explored how I use AI in my podcast productions and delved into some AI basics with Info-Tech leader Eric Pinaud. For this episode, I asked Emily Hadley, a data scientist at RTI who specializes in AI algorithms for insurance coverage decisions, to join us. Early in her graduate studies, Emily was diagnosed with Crohn's disease, which led to her interest in studying insurance algorithms.

A Data Scientist Awakes

Health Hats: How did you gain expertise in AI?

Emily Hadley: Great question. I was diagnosed right as I started a graduate program in analytics. In my undergraduate studies, I studied statistics and public policy. I liked the idea of using data to shape how policymakers make decisions, especially in the US. I had done some work with AmeriCorps and then went to grad school to hone those skills.

Being diagnosed while I was in grad school meant that I was navigating new, informative, and educational areas. That's when I really came to realize the power of data and AI in shaping the way organizations and people make decisions. We live in an algorithm-fueled society. We constantly encounter technology and AI systems, even when we don't realize it.
An example I give is that I've faced many problems getting insurance to cover the things it is supposed to. I didn't realize until a couple of years ago that this is because many insurers have embraced algorithm-driven decision-making systems that often automatically deny coverage for services that should be included. Instead, they might say they don't cover a service because the appropriate code was not included when billing. So, the insurer claims, "Oh, we don't cover that because the code was missing," even though it should have been included.

I feel as though I've been a victim of some of these automated systems. They have significantly impacted my life and pushed me to understand that these AI systems are not hypothetical. We live with them every day, and as consumers and citizens we don't have much insight into them. That pushed me into the responsible AI space: thinking about how we develop and use algorithms that align with how people would treat each other, not necessarily how algorithms and robots would treat each other.

Health Hats: Are you saying that this is a way to be more transparent about what's in the algorithms?

Emily Hadley: That's a piece of it.

Building Guardrails with AI Governance

Health Hats: In something you sent me to educate me more about what you're doing, you talked about algorithm review boards, and I was trying to picture them. Who's around the table? Can you tell us a little bit about what an algorithm review board is? Is it real? Is it theoretical?

Emily Hadley: Yeah, I'll launch right into it. I've been passionate about and interested in this since I saw more companies embrace AI, especially in the United States. The United States doesn't have laws to guide how companies, academic institutions, nonprofits, and government organizations use AI. Certainly, some legislation and rulemaking is probably coming, but in its absence, organizations need to decide how they will manage AI from a risk perspective.
This includes reputational risks to the organization, its customers, and the population at large, and, from an equity and justice perspective, the question of how AI systems can align with the organization's mission and values.

One of the things I started noticing at my own organization was that we have something called the Data Governance Committee, which existed before ChatGPT became a big thing and before everyone talked about AI. The Data Governance Committee was focused on how to protect data on the projects we work on. Many projects involve private health information or other personally identifiable information. Even before ChatGPT, we needed to ensure we didn't upload this information to the cloud or expose people's data in a way that was not permitted.

This group has also adapted to become an AI review group. So, when someone at our organization wants to use AI in their projects, it goes through review. For example, I recently wanted to use AI to help summarize some text responses we were working on. Before I moved forward, I needed to check with the Data Governance Committee to ensure that the use aligned with RTI policies and that I was using the data in a protected and secure way.

I assumed, and this research confirmed, that other organizations are doing the same thing. They are putting together groups of people, especially in the finance and health sectors. To your point, they don't all look the same; every organization is doing what works for them. At my organization, the Data Governance Committee includes our corporate counsel staff members, ethics officer, data privacy officer, and a couple of subject matter experts like myself who bring different data and research perspectives to the table.

Finance organizations, especially banks, have a long history of risk assessment committees for various credit scoring or lending algorithms. They're mostly adapting an existing group, sometimes adding some new AI expertise, but a lot of that expertise is already in-house.
I would say the health groups have done some of the most interesting and innovative work in this space because this type of review is new for many of them. It's similar to some FDA-type review work they've done.

Health Hats: Or IRB review.

Hallucinations and Validation with AI in Research

Emily Hadley: Exactly. As part of this research, we investigated whether IRBs could do this work, and what we heard was actually a resounding no; they did not consider it their role.

Health Hats: It's a different focus. I've been on an IRB, and there is this business of being a generalist, so there's value in having a generalist or two in a group of many experts. Okay. So, what do you think the role of consumers is on algorithm review boards?

Emily Hadley: I'm noticing a focus on affected communities, especially in the health sector. This includes patients and clinicians, particularly those engaged in the work. It's not an algorithm review board, but for the long COVID research you mentioned, we have patient representatives involved in all of our manuscripts. I was just at a clinician review meeting last Friday, and it's incredibly helpful to have someone provide insight when determining whether we prepared the methodology correctly. Are these initial results what you expected? Do you feel you have a say in this process and how it's being developed?

I've also observed tech companies embrace that level of stakeholder involvement. It's more consumer driven; they want to create products that people will use. However, I am encouraged to see the participation of affected communities because I believe that's where many revelations occur.

Health Hats: Let's take a step back. What kinds of AI are used in research?

Emily Hadley: Yeah, that's a great question. In research, we see it used in a couple of different areas. One of the biggest is information gathering, extraction, and summarization.
We've been using it for literature reviews to help summarize or pull key points out of particular papers. We've been excited that it allows people with different educational or literacy backgrounds to interpret papers.