Leading the CHARGE: AI Leadership Insights Series #2
- CHARGE
- Jan 27
- 14 min read
Featuring Mitch Kwiatkowski, Healthcare data & AI leader
As artificial intelligence continues to transform healthcare, leaders face mounting challenges in ensuring responsible and effective AI adoption. In this second edition of the AI Leadership Insights Series, Mitch Kwiatkowski shares his expert perspective on the critical aspects of AI governance in health systems. With extensive experience in data and AI leadership roles, Mitch has spearheaded initiatives across major health systems and health plans, including his work as Chief Data and Analytics Officer at Marshfield Clinic Health System and Vice President of Data Operations at Highmark Health.
In this interview, Mitch highlights the importance of strong governance frameworks built on effective people and processes. He emphasizes collaboration, education, and leadership while offering practical steps health systems can implement today to establish effective AI governance programs that build trust, accountability, and meaningful impact.

Q: To start, could you share a bit about your professional background and what led you to develop expertise in AI governance within healthcare?
A: I have had the privilege of working in a variety of healthcare settings for over two decades. The first half of my career was very focused on technology, data, and informatics in the ambulatory space. I worked closely with providers, staff, and patients to implement and support EHRs, patient portals, and population health platforms. The second half of my healthcare career was focused on data, analytics, and AI at payer and provider organizations, often part of the same integrated delivery system. Some of my bigger accomplishments are related to strategy, architecture, operating models, data product delivery, AI/ML product development, interoperability, and information governance.
Early in my healthcare career, I was fortunate to directly observe the impact of technology and data on patient care. That’s not something a lot of people in my field ever get to see. That experience continues to guide my approach: finding ways data and AI can improve outcomes, cost, and experience for our patients and communities. And I’m always mindful of how patients will be affected by our actions.
My passion for AI governance in healthcare was a natural extension of years working in information governance, risk management, and regulatory/compliance. The organizations I have worked with were almost all innovative in how they approached using data, but the speed at which AI has become a competitive differentiator, whether real or perceived, is blinding and perhaps a bit dangerous. A few years ago, I started working on a governance program from scratch. As much as I would have liked to use something already out there, it just didn’t exist – at least not in our industry. Starting from the core patterns and principles of an agile data governance program, I followed the EU’s AI Act closely and incorporated elements created by institutions like the International Association of Privacy Professionals (IAPP) and governance gurus like Sunil Soares. Within a few months, I had the basic framework that we implemented at Marshfield Clinic. Since then, my approach has evolved and adapted to include new legislation, guidance provided by global governance organizations, and the incredible expertise of my peers. As changes are made, it is important to maintain a balance that promotes “responsible AI” while avoiding a bureaucratic morass.
Q: Among AI risks in healthcare—validity, fairness, security, and others—which do you find most challenging, and how can health systems address it effectively?
A: All these risks present unique challenges, but one of the most difficult to address is measuring and tracking the bias, fairness, and ethics of an AI system. Awareness and education are critical first steps, and there are many excellent organizations and resources focused on these areas. However, implementation and measurement still seem to be in their early stages of maturity. Internally developed AI is relatively easier to measure because teams know exactly what data went in and how the system works. There’s complete transparency. In contrast, with vendor-provided AI solutions, that transparency is often lacking. These solutions are usually black boxes, and you have to rely on the developer's processes to test and mitigate potential bias or unfairness. In AI governance conversations, I've found that many vendors don’t fully understand what this entails, and most are reluctant to share information about their training data or testing processes. Moving forward with trust alone is not sufficient.
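To make the measurement point concrete, here is a minimal sketch of what a subgroup fairness check might look like in Python. The metrics shown (per-group selection rate and true-positive rate), the toy data, and the 0.1 review threshold are illustrative assumptions, not a prescribed method from Mitch's framework.

```python
# Minimal, illustrative sketch of subgroup fairness checks for a binary
# classifier (for example, a readmission-risk model). The toy data and the
# 0.1 disparity threshold are assumptions for illustration only.
from collections import defaultdict

def subgroup_rates(y_true, y_pred, groups):
    """Per-group selection rate and true-positive rate."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "positives": 0, "true_pos": 0})
    for truth, pred, grp in zip(y_true, y_pred, groups):
        s = stats[grp]
        s["n"] += 1
        s["selected"] += pred
        s["positives"] += truth
        s["true_pos"] += truth and pred
    return {
        grp: {
            "selection_rate": s["selected"] / s["n"],
            "tpr": s["true_pos"] / s["positives"] if s["positives"] else None,
        }
        for grp, s in stats.items()
    }

def max_gap(rates, metric):
    """Largest gap in a metric across groups (a demographic-parity-style check)."""
    values = [r[metric] for r in rates.values() if r[metric] is not None]
    return max(values) - min(values)

# Toy example: flag the model for review if the selection-rate gap exceeds 0.1.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
rates = subgroup_rates(y_true, y_pred, groups)
gap = max_gap(rates, "selection_rate")
print(rates)
print("Selection-rate gap:", gap, "-> review needed" if gap > 0.1 else "-> within threshold")
```

Checks like these are only meaningful when the deployer can see, or at least request, subgroup-level performance data, which is exactly the transparency gap described above.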
Another significant challenge is privacy and security. I don’t believe the general public fully understands the data organizations collect and how their data is shared and used. This isn’t to say that health systems are acting maliciously, but I think many are somewhat careless—sometimes even reckless—with their collection, sharing, and use of patient data. For example, vendors often use healthcare organizations' data for ongoing training, usually without patients’ knowledge or the opportunity to opt out. One of the biggest risks of AI is the propagation and use of personal data without informed consent and without any accountability if something goes wrong. Who is accountable if the data is used inappropriately?
Q: What unique risks do you see with generative AI models?
A: The data and methods used for predictive models have always carried some level of risk, but generative AI has made governance more challenging due to its broad accessibility. Anyone can obtain a login, upload data, and generate results. Unfortunately, most people don’t understand how models work or how to interpret the validity of results. Thus, when the average person receives a response from ChatGPT, Claude, or their foundation model of choice, they may take it at face value without questioning its accuracy.
Another significant factor is data handling. We can now leverage unstructured data in ways that were previously too costly and out of reach. Large volumes of text, images, audio, and video can be uploaded and analyzed by models in seconds or minutes. A physician can easily upload a medical chart and ask ChatGPT for a summary. Patients might choose to upload their own medical information to understand, in layman’s terms, what it all means. This data is at risk. Many publicly available models save the information you upload, the prompts you submit, and the results generated. The lack of transparency in how these tools use data should make users proceed with caution.
In the rush to implement AI, healthcare organizations face vendors eager to secure their business. Unfortunately, this rush can lead to sacrifices in accuracy, reliability, and fairness in favor of speed-to-market. Companies worry about protecting their “intellectual property” and often refrain from sharing information about how their models work, how they were tested for bias and fairness, and how they are continuously monitored for drift.
All these factors make governing AI challenging. However, the good news is that it’s not an impossible feat.
Q: What challenges do these risks pose for healthcare organizations implementing and deploying new AI solutions?
A: Aside from the aspects mentioned earlier, the challenges within a health system often stem from people and processes. Organizational change for risk-related programs is difficult because it’s often about anticipating potential risks – risks that may never be realized. Companies don’t want to fall behind the competition, so there’s considerable pressure to deliver on the promises of AI. However, very few leaders have been able to articulate a value proposition for it. In other words, there’s often no clear link between the solution and a real business problem that requires AI. This is an age-old problem with data and technology that has now extended into the AI space.
Governance is often viewed as a barrier or a way to say “no.” In reality, it can be one of the best mitigation tactics an organization has. Of course, it must be done properly, ensuring it is agile and flexible. Communication, collaboration, and transparency must be foundational. If the business doesn’t understand what’s happening and isn’t involved, they’ll find ways to circumvent governance. If that happens, what’s the point of having a governance program at all?
Leadership can make or break a governance program. The person leading the initiative must be an excellent communicator, adept at bridging business, clinical, data, and technical teams. They need to understand the business drivers related to AI, the AI technology landscape, the clinical impacts, and the value the governance program brings.
Q: What steps do you recommend for health systems to assess and manage their AI portfolio?
A: A full inventory should be one of the goals of an AI governance program. Initially, it might be easier to establish processes that capture and document new AI systems. All new AI requests should be required to follow a set process for intake, review, and approval. Vendors should provide a base set of technical and non-technical information that is stored in the inventory once the system is approved. After establishing the inventory mechanism and documentation standards, teams can take a retrospective look and inventory existing systems. This usually involves a combination of vendor-provided documentation and internal records.
Some vendors offer inventory products, but entry and management are typically manual processes. The ease of collecting information on internal AI systems depends on how many teams build AI solutions, how well they document their work, and the level of oversight from the governance team. For vendor systems, the vendor’s documentation is a good starting point and can be reconciled with contracts and sales orders to identify purchased systems. The level of detail the inventory captures will determine its usefulness in ensuring proper oversight.
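As one illustration of what an inventory entry might capture, the sketch below defines a hypothetical record with fields drawn from the intake elements described above. The field names, risk tiers, and example values are assumptions, not a standard schema or Mitch's specific template.

```python
# Illustrative sketch of a single AI inventory record. Field names, risk
# tiers, and the example entry are assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIInventoryRecord:
    name: str                      # product or model name
    owner: str                     # accountable business or clinical owner
    source: str                    # "internal" or "vendor"
    vendor: Optional[str]          # vendor name, if applicable
    intended_use: str              # the business or clinical problem it addresses
    risk_tier: str                 # e.g., "high", "medium", "low"
    training_data_summary: str     # what the developer disclosed about training data
    bias_testing_notes: str        # how bias/fairness was assessed, if disclosed
    approval_date: Optional[date] = None
    documents: list[str] = field(default_factory=list)  # contracts, model cards, validation reports

# Hypothetical intake entry for a vendor-provided solution
record = AIInventoryRecord(
    name="Sepsis early-warning model",
    owner="Chief Nursing Officer",
    source="vendor",
    vendor="Example Vendor, Inc.",
    intended_use="Flag inpatients at elevated risk of sepsis for nursing review",
    risk_tier="high",
    training_data_summary="Vendor-disclosed: multi-site inpatient EHR data, 2018-2022",
    bias_testing_notes="Requested subgroup performance report; pending",
    approval_date=date(2025, 1, 15),
    documents=["contract.pdf", "model_card.pdf"],
)
print(record.name, record.risk_tier)
```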
Oversight should rest with the AI governance leader and the governing council that facilitates the governance program.
Q: How should accountability be balanced between vendors and health systems, and what steps can systems take to hold their vendors accountable?
A: One way health systems can hold third-party vendors accountable is to include specific language in contracts. Organizations often push for clauses that require vendors to provide information about how AI solutions work without exposing intellectual property, and they also add language to address potential risks of harm from the AI product.
To balance accountability between developers and deployers, developers need to be transparent about certain aspects of their AI so that it is not a black box. Once deployers understand how the AI product works, they must consider the data it uses and how the product will be applied to patients. Regarding bias and fairness, models may need to be trained on a population that represents the community the health system serves, or different models could be deployed for specific populations.
Health systems and health plans need to ask vendors questions, educate themselves on how the AI product works, and consider both direct and indirect ways patients could be adversely affected. This awareness must be sustained after implementation. Over time, models can drift, decreasing the effectiveness of an AI product. This might result in incorrect diagnoses, missed treatment recommendations, extended treatment times, or denial of treatment by insurance companies. Developers should be diligent in testing their own product for accuracy and reliability while deployers should periodically check to ensure the product is working as intended with no unintended consequences affecting patients.
Q: Who should sit on an AI governance council, and how can diverse perspectives improve its effectiveness?
A: The council needs to be diverse in terms of roles, levels, and perspectives. Ideally, the group should have fewer than ten members, but this can be challenging in an integrated delivery system where fair representation from payers, providers, and other business areas is necessary. Areas that should be represented include clinical, legal, compliance/privacy, finance, data and analytics, and IT. For health systems, the council should include at least one physician and one nurse. Other important areas to consider are business units like strategy, business development, patient services, and informatics. The council should have representatives from different levels, meaning it should not be limited to just executives. Including a few VPs is fine, but also consider directors, managers, and individual contributors.
The council members should stay educated on AI, including the latest trends and how AI is used in healthcare. They don’t need to be statisticians, but they should be able to understand the key features of an AI product and ask the right questions. Additionally, they should not shy away from asking difficult questions. For instance, the use case for an AI product might be legal, but an organization may still need to assess whether it is something it should be doing. Consider how patients would react.
Health systems should also consider including patients or patient advocates as part of the governance process, not necessarily to review and approve requests, but to get an outside perspective on key AI governance topics. Health systems with research arms often include outside perspectives in institutional review board (IRB) processes. Similar methods can be used here to share enough information without exposing confidential business details. Sometimes, a patient or outside participant will offer an opinion or idea that the core governance council missed.
Q: With many health systems appointing Chief AI Officers, who do you think is best suited to oversee and govern AI across the organization?
A: First, it must be someone who is passionate about using AI responsibly. It should be a leader who can influence others. This doesn’t necessarily mean an executive, but in many organizations with strong hierarchies, an executive may be the only level capable of implementing such a program.
The program leader should be a strong communicator who can bridge the gap between business, clinical, and technical areas. Throughout the program's life, they’ll need to maintain strong relationships with peers and resolve conflicts as needed. They must understand AI and be able to educate the organization on how AI works. Additionally, they should be capable of removing barriers and resolving any issues that arise from AI systems. While these tasks don’t need to be handled directly by the leader, they must be able to orchestrate the right resources at the right time to ensure smooth operations.
In many organizations, this role has been filled by the Chief Data and Analytics Officer or the Chief Analytics Officer. It could also be someone in Compliance, Privacy, or Legal. Another suitable area for a leader is within clinical areas, such as the Chief Medical Officer, Chief Nursing Officer, a Medical Director, or an ambitious physician or nurse.
Q: How can health systems track AI tools post-deployment to ensure ongoing value?
A: Unfortunately, this is a gap for most data and technology projects. Someone gets funding, a solution is implemented, and things continue to run without anyone articulating value and return on the investment. I would argue that all data and technology projects should go through this scrutiny.
Value is about outcomes that can be directly linked to business drivers and goals. Too many data and AI teams get hung up on action and volume – how much work is being done or how many solutions are deployed. If the solution isn’t being used or doesn’t have any kind of material impact on the business, it has no value to the organization and, by extension, its patients.
Solutions should always be linked to clinical outcomes, financial performance, operational efficiency, or improvement of the healthcare experience. This means leaders should work with finance, clinical, and operational areas to understand and measure the impact.
Because AI can drift and result in unintended consequences, value can diminish over time. The governance program should establish a process to monitor and “recertify” AI solutions at regular intervals. That might be every six months for high-risk solutions and every 12 or 18 months for medium- to low-risk solutions. It’s up to the leader of the governance program and the council to review these initiatives. They should compare the effectiveness of the solution to any baseline measurements taken at implementation. For some AI solutions, it’s helpful to go out into the field, shadow people using the technology, and observe any noticeable impacts. For example, AI embedded into an outpatient EHR might have nominal impact on metrics, but firsthand observation may reveal a negative impact on patients’ experience during an office visit.
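A minimal sketch of how that recertification cadence could be tracked is shown below. The 6-, 12-, and 18-month intervals mirror the cadence Mitch describes; mapping them to specific risk tiers, and the function and field names, are illustrative assumptions.

```python
# Illustrative sketch of scheduling AI recertification reviews by risk tier.
# The intervals mirror the cadence described above (6 months for high risk,
# 12 or 18 months for medium/low); the tier mapping is an assumption.
from datetime import date
from typing import Optional

RECERT_INTERVAL_MONTHS = {"high": 6, "medium": 12, "low": 18}

def next_recertification(last_review: date, risk_tier: str) -> date:
    """Return the next recertification date based on risk tier."""
    months = RECERT_INTERVAL_MONTHS[risk_tier]
    year = last_review.year + (last_review.month - 1 + months) // 12
    month = (last_review.month - 1 + months) % 12 + 1
    day = min(last_review.day, 28)  # keep it simple; avoid month-length edge cases
    return date(year, month, day)

def is_due(last_review: date, risk_tier: str, today: Optional[date] = None) -> bool:
    """True if the solution is due (or overdue) for recertification."""
    return (today or date.today()) >= next_recertification(last_review, risk_tier)

# Example: a high-risk solution baselined in July is due again in January.
print(next_recertification(date(2024, 7, 1), "high"))       # 2025-01-01
print(is_due(date(2024, 7, 1), "high", date(2025, 2, 1)))   # True
```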
Q: How can smaller health systems build effective AI governance programs with limited resources?
A: Governance doesn’t require a lot of resources. I’ve built governance programs in a few multi-billion-dollar health systems, and no matter the size, I prefer leaner programs that start out with one to three people. Smaller hospitals can start with one or two people who can carve out enough time. An executive “champion” like a CIO, CFO, or CMO can help promote the program. If an organization is investing in AI, it has to find a way to invest some time in a supporting governance program. If they don’t have someone internal who can help, there are consultants and vendors who can come in and provide a framework, get the program started, and even help support it for a period of time.
Q: What risks do health systems face with weak AI governance programs?
A: The most obvious consequence is that AI products may be deployed for patients, providers, and employees without a thorough understanding of their impact. Underserved populations might not see the benefits of AI, and patients could be unintentionally harmed. Provider and employee satisfaction could decline, financial performance could drop, and the risk of a data breach might increase.
Governance can be difficult to sustain because it requires constant orchestration of people in a matrixed structure, which isn't always compatible with the day-to-day operations of the health system. Company politics and personalities can also interfere. Underperforming or ineffective governance programs will erode trust and influence very quickly. It’s like a garden that gets ignored for too long; once the weeds start to take over, the plants you want to grow will be choked out and may never recover to produce the desired results. If a governance program falters, it can be very difficult—often impossible—to regain momentum, which is why investing in a dedicated leader is so important.
In the absence of regulation, healthcare organizations have the responsibility to do what is right to protect their patients and maintain their trust. Transparency and communication are vital. Every time data is leaked or an AI system makes a mistake, the brand and reputation of the system are at risk. More importantly, in the business of healthcare, lives can literally be impacted by inappropriate use of technology.
Q: How is regulation shaping AI governance, and what role should health systems play in compliance?
A: It will be interesting to see what changes take place with the new White House administration coming in and control of Congress shifting. As other experts have pointed out, regulations are needed to address the development and use of AI, but early signs indicate a more hands-off approach and a possible reversal of existing regulations for fear that they will stifle innovation. I don’t believe that no regulation is the answer. With the EU, UK, South Korea, Brazil, and India all moving ahead with their own AI legislation efforts to protect citizens, the current environment creates complexity for businesses with global footprints: these jurisdictions define AI differently, take different approaches to regulating it, and allow varying degrees of flexibility.
We’ll see where the balance between privacy and innovation eventually lands in the U.S., but that may take a year or two (if not longer). As with many things, we should start at the data level and address privacy, but the speed at which AI is moving may push other regulations and legislation (or the lack thereof) out front.
Q: What trends in AI are you watching, and how do you see them shaping healthcare?
A: I’m always interested in the people and process side of data and AI. Privacy, governance, and operating models are often top of mind for me.
I'm starting to see more healthcare organizations adopt a data product mindset to support analytics and AI. These are packaged sets of data that address specific business use cases, are trusted and governed, and have standardized access points. Data products align with the FAIR principles – Findable, Accessible, Interoperable, and Reusable – and extend the capabilities of today’s modern architectures. This approach is gaining traction and driving demonstrable value for data and AI investments.
Related to this, I believe we’re going to see a strong shift towards data quality. As 2024 drew to a close, more organizations realized that putting bad data into an AI solution yields unreliable results. We see this issue almost daily when ChatGPT, Claude, or Gemini deliver hallucinated responses to prompts. Health systems need to take stock of their structured and unstructured data, understand what they have, measure the quality, and take action to improve it. The quality of an AI product is only as good as the data it uses.
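As a simple illustration of what measuring data quality can mean at the field level, here is a minimal sketch of completeness and validity checks on a tabular extract. The column names, valid-value rules, and sample rows are assumptions for illustration, not a specific program Mitch describes.

```python
# Illustrative sketch of basic data-quality checks (completeness and validity)
# on a tabular extract before it feeds an AI solution. Column names, the
# valid-value rules, and the sample rows are assumptions for illustration.
def completeness(rows, column):
    """Share of rows where the column is present and non-empty."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows) if rows else 0.0

def validity(rows, column, is_valid):
    """Share of non-empty values that pass a validity rule."""
    values = [r[column] for r in rows if r.get(column) not in (None, "")]
    return sum(1 for v in values if is_valid(v)) / len(values) if values else 0.0

# Toy patient extract with one missing value and one out-of-range value.
rows = [
    {"patient_id": "p1", "age": 54,   "primary_dx": "E11.9"},
    {"patient_id": "p2", "age": None, "primary_dx": "I10"},
    {"patient_id": "p3", "age": 212,  "primary_dx": ""},
]
print("age completeness:", completeness(rows, "age"))                    # ~0.67
print("age validity:", validity(rows, "age", lambda a: 0 <= a <= 120))   # 0.5
print("dx completeness:", completeness(rows, "primary_dx"))              # ~0.67
```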
The legal landscape around copyrights will be interesting. AI companies are starting to reach agreements with media publishers on the use of licensed data for training. However, it’s unclear what the media creators will get from these deals. We may see more legal action from artists whose works have been used for training without permission. On the flip side, questions remain about the rights afforded to AI-generated material and whether individuals can protect AI-created works as their own.
Ethics continues to be a hot topic in the AI space, with several experts driving open discussions. Bias and fairness will remain a focus, and AI governance programs will help push conversations about whether an AI use case is “the right thing to do.”
Lastly, there is a significant gap in general AI education for the public. Many people don’t understand AI, aren’t aware of how their data might be collected and used, and don’t know how they can be affected. It’s not unusual to find someone who doesn’t care about what data their healthcare provider or insurance company has or how it is used. They also don’t know what data is collected by the apps they use. Most importantly, they lack access to resources that can help them protect themselves. I expect we’ll see more AI incidents this year and a growth in practical education for the public.
Q: What’s your key advice for healthcare leaders starting an AI governance program?
A: Be intentional and disciplined. Healthcare is about taking care of patients and improving the health of our communities. We’re just starting to see the potential of AI to help, but we’re also learning about the harms of an incorrectly trained AI system or AI acting without a human in the loop. AI governance can serve as a safeguard against these risks. While it doesn’t take much to start a program, it requires dedication and discipline to keep it running effectively. It’s rarely something that can be managed “off the side of someone’s desk” without top-down leadership support. Determine how this aligns with the organization’s mission and vision, find your “North Star,” and stay on course despite any obstacles you encounter.