Leading the CHARGE: AI Leadership Insights Series

Featuring Dr. Deepti Pandita, Chief Medical Information Officer at UCI Health


As artificial intelligence continues to reshape healthcare, leaders must navigate complex challenges to ensure safe and effective AI integration. In this inaugural edition of the AI Leadership Insights Series, Dr. Deepti Pandita shares her expert perspective on the evolving landscape of AI governance in health systems. From managing bias and safeguarding data security to aligning with evolving regulations, Dr. Pandita highlights the critical factors healthcare organizations must address to implement responsible and impactful AI solutions.




Q: Tell us about your role as Chief Medical Information Officer at UCI Health.

A: As CMIO of a large academic university health system that includes six hospitals and multiple ambulatory sites, I am responsible for enabling, implementing, and managing IT systems, including AI-driven systems that interface with end users, while ensuring these systems are user-centric, add value, and drive efficiency and proficiency. Additionally, I am teaching faculty for the Clinical Informatics Fellowship program at UCI.


Q: It’s no secret there’s a lot of hype around AI adoption across healthcare. From your perspective, what are the most significant risks health systems face when implementing AI products?

A: AI systems must be rigorously validated to ensure they provide accurate and reliable results. Errors in AI predictions or diagnoses can lead to incorrect treatments, potentially harming patients. In my view, ensuring AI systems are accurate and valid is the number one priority.


Q: There’s been significant discussion around different types of risks when it comes to AI in healthcare: accuracy and validity, bias and discrimination, and privacy and security. How do you perceive these risks and which do you think pose the greatest challenges for health systems today?

A: It is hard to pin down any one risk as paramount. Ensuring AI systems are accurate and valid is fundamental; errors can have severe consequences. Addressing bias in AI models is critical to ensure fair and equitable healthcare for all patients, and it is a significant challenge given the complexity of healthcare data and the potential for ingrained biases. Of course, privacy and security are important too, because data breaches undermine trust in AI systems and can also have legal and financial implications. If I had to pick one, I would say bias and discrimination pose some of the greatest challenges. Ensuring AI models are fair and unbiased requires continuous monitoring, diverse data sets, and robust validation processes.


Q: How does governing generative AI differ from predictive ML models? What new challenges does it introduce?

A: ML governance has been in place in the industry for some time now and revolves mostly around model accuracy, validity, and fairness, as well as managing data privacy and security. In contrast, generative AI governance includes all of those components plus newer concepts: verifying content authenticity, ensuring ethical usage, and preventing misuse.


Q: What do you see as the biggest challenges health systems face in developing and operationalizing AI governance programs?

A: The biggest challenge is the complexity and rapid evolution of AI technologies, which means the governance framework needs to pivot frequently and remain agile. Another challenge is lack of expertise: many health systems lack the internal expertise needed to evaluate and govern AI tools effectively. For clinical tools, integration into the EMR is often challenging. Then there is navigating the complex and ever-changing regulatory landscape, with evolving laws and standards at both the federal and state level.


Q: AI tools are increasingly embedded in legacy systems (EMRs, RCM platforms, etc.). How can health systems gain visibility into their full AI ecosystem and ensure proper oversight?

A: In my view there are a few ways to ensure this: maintain a comprehensive inventory of all AI tools in use, including their purposes, data sources, and performance metrics; conduct regular audits to assess the performance, accuracy, and compliance of those tools; and establish a centralized AI governance committee to oversee the deployment and use of AI tools across the organization.


Q: How do you think health systems can hold third-party vendors accountable to ensure the safety, accuracy, and fairness of their AI products?

A: It all starts with ensuring contracts with vendors include clear terms regarding the safety, accuracy, and fairness of their AI products, and then performing regular assessments of third-party AI tools to ensure they meet the required standards.


Q: Who do you believe should be involved in an AI governance council or committee, and what should their key responsibilities include?

A: Include representatives from clinical, IT, legal, compliance, and data science teams, along with operational leaders or their representatives. AI tools will fail unless there is buy-in from business operations, so operational leaders are a key stakeholder in governance.


Q: As systems try to assign the right leader to oversee AI governance across the organization, what’s your approach? Who do you think is the most suitable role for this accountability?

A: Given their expertise in both clinical medicine and informatics, CMIOs are well-suited to oversee AI governance. The CMIO should work closely with other leaders, such as the Chief Information Officer (CIO) and Chief Data Officer (CDO), to ensure comprehensive oversight and engage key operational leaders such as the CMO, CNO and COO.


Q: How should health systems monitor AI tools over time to ensure they’re delivering value?

A: Organizations need to implement continuous monitoring systems to track AI tool performance and outcomes in real-world settings, especially ensuring the tool is performing on their own data and their unique patient populations. They should also establish feedback loops with clinicians and patients to gather insights and make necessary adjustments.


Q: Smaller, lower-resourced hospitals often struggle to implement robust governance. What advice would you give them to avoid falling behind?

A: This is a real issue, and I recommend these organizations partner with larger health systems or academic institutions to share resources and expertise. Additionally, they should prioritize AI implementations in areas where they can have the most significant impact: start with small use cases and grow the AI footprint as skills and experience grow.


Q: What are the potential consequences for health systems if AI governance programs underperform or are insufficient?

A: The biggest and most worrisome consequence is patient harm due to incorrect diagnoses or treatments. Another is noncompliance with regulations, leading to legal penalties and/or loss of accreditation. If an AI system does not perform as intended and has no oversight, there is a risk of losing the trust of physicians and patients. Finally, all of these can have financial consequences. Poorly governed AI systems can cause disruptions in healthcare operations, affecting overall efficiency and patient care.


Q: What do you think is the role of OCR1557 and other federal and state-level regulations in shaping AI governance practices?

A: OCR1557 requires that all health systems have AI governance in place, which should be the norm even without this regulation, for all the reasons stated above.


Q: What trends or upcoming regulations around AI governance are you watching most closely?

A: Two major trends are emerging for 2025. The first is agentic AI; I think we will see a mushrooming of vendors in this space. The second is a growing push to harmonize AI regulations globally.


Q: If you had one key takeaway for healthcare leaders about building effective AI governance programs, what would it be?

A: Effective AI governance is not a one-time effort but an ongoing process. Healthcare leaders should establish a culture of continuous monitoring, evaluation, and improvement of AI tools. Conduct regular audits to assess the performance, accuracy, and fairness of AI tools. Involve clinicians, patients, and other stakeholders in the governance process to gather diverse perspectives. Be adaptable to evolving regulations and technological advancements, ensuring that governance frameworks remain relevant and effective. And above all, maintain transparency in AI operations and decision-making processes to build trust among patients and healthcare providers.

© Copyright 2024 Center for Health AI Regulation, Governance & Ethics (CHARGE). All Rights Reserved.