Think about the first time you tried an AI tool on your own. Maybe it was ChatGPT, Claude, Gemini, or Grok. You typed something in, got a response, and started experimenting. Then maybe you got comfortable and started sharing more personal details about yourself, asking medical questions, then financial ones. Somewhere along the way, the questions changed: at what point am I putting myself at risk? Am I comfortable with this company training on what I'm sharing?
Whether you realized it or not, you were doing AI governance. You were evaluating a tool, assessing risk, and making decisions about what data you were willing to share and with whom. Most of us landed in different places on that spectrum, and that's fine. The point is that you went through a process of understanding what the tool could do, what it couldn't, and what you were comfortable with.
Now multiply that by every employee, every patient record, and every clinical and operational workflow in your organization. That's the scale of the decision in front of healthcare leaders today. And the stakes are considerably higher than your personal ChatGPT experiment.
The pressure to act and the decisions it creates
Every healthcare executive I talk to is feeling the same thing: board members asking about AI strategy, vendors flooding inboxes with demos, and internal teams wanting guidance on tools they're already using on their own. The pressure to "do something with AI" is real.
The challenge is that "AI" covers an enormous range of tools and capabilities. A predictive model using statistics to flag readmission risk is fundamentally different from a machine learning algorithm scoring and categorizing clinical documents, which is different from an LLM drafting clinical notes, which is different from an agentic AI system triaging service desk tickets and taking action on behalf of your team. Each carries a different level of risk, requires different validation, and solves different problems, yet these days all of them might be sold simply as "AI." Treating them as one category leads to poor purchasing decisions and misaligned expectations.
Startup, enterprise vendor, or build: honest tradeoffs
When you start evaluating AI solutions, you'll find yourself choosing between three paths, each with real tradeoffs.
Healthcare AI startups often have the most innovative and focused solutions. They move fast, they're hungry, and they tend to build tools that solve specific workflow problems well. The tradeoff is maturity: can they scale with you, will they be around in three years, and can they meet your security and compliance requirements today?
Enterprise vendors like Epic, Workday, and ServiceNow are shipping AI features into platforms you already own. That feels safer, and in some cases it is. But here's what I learned managing these platforms: enterprise vendor AI deserves the same scrutiny as any other product, at least where its AI capabilities are concerned. These features are often built on another customer's data, trained in another customer's context, and designed for another customer's workflows. When they land in your environment, they may not perform as advertised, and validating that takes time and resources organizations frequently underestimate. The real advantage enterprise vendors bring is different: they're established, they've already passed your risk assessments, and they have support embedded in your organization. That may be enough to justify selecting them anyway.
Building your own solutions is increasingly viable. Think about how organizations handled web development fifteen years ago: everyone outsourced it, and now most have internal teams. AI agents and models are following the same trajectory. Your team may have people with the interest and aptitude to build custom solutions at a fraction of vendor costs. This isn't the right path for every organization today, but it's worth planting the seed. The talent may already be in your building.
Your organization owns this: data governance and accountability
Before you purchase, pilot, or build anything, your organization needs a governance foundation. This is where I see the biggest gap. Leaders are evaluating tools without having answered the fundamental questions about how AI will be managed across the organization.
Two areas matter most at the start: data governance standards and clear roles and accountability.
Data governance means defining, before any AI tool touches your environment, what data it can access, where that data is stored, whether the vendor is using your data to train their models, and how PHI is protected throughout the process. This is where HIPAA intersects with AI procurement, and where most organizations' policies and procedures are still evolving. At minimum, you need a set of non-negotiable questions that must be answered before any AI solution is approved: What data does this tool ingest? Where is it processed and stored? Is our data used to improve the vendor's product? What happens to our data if we cancel the contract? If a vendor can't answer these clearly, that tells you something.
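To make this concrete, here's a minimal sketch of what encoding those gating questions could look like: an intake record that blocks approval until every question has an answer. The class, field names, and approval rule are hypothetical illustrations, not a standard schema or a complete question list.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: the non-negotiable intake questions as a gate
# that must pass before any AI tool moves forward. Fields are
# illustrative assumptions, not a prescribed questionnaire.
@dataclass
class AIVendorIntake:
    tool_name: str
    data_ingested: Optional[str] = None           # What data does this tool ingest?
    processing_location: Optional[str] = None     # Where is it processed and stored?
    trains_on_our_data: Optional[bool] = None     # Is our data used to improve the vendor's product?
    data_disposition_on_exit: Optional[str] = None  # What happens to our data if we cancel?

    def unanswered(self) -> list:
        """Return the intake questions the vendor has not yet answered."""
        return [name for name, value in vars(self).items()
                if name != "tool_name" and value is None]

    def approved_to_proceed(self) -> bool:
        """Proceed only when every question is answered, and training on
        our data is explicitly ruled out (a hypothetical policy choice)."""
        return not self.unanswered() and self.trains_on_our_data is False

intake = AIVendorIntake(tool_name="Example ambient scribe",
                        data_ingested="Clinical audio and notes (PHI)")
print(intake.unanswered())           # questions still open for this vendor
print(intake.approved_to_proceed())  # False until everything is answered
```

The useful property of writing it down this way is that "the vendor hasn't answered yet" becomes a visible, blocking state rather than a detail lost in an email thread.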
Roles and accountability means designating who owns AI decisions in your organization. Right now in most health systems, AI purchases and pilots happen across IT, clinical informatics, operations, and innovation teams with no central coordination. Departments are buying tools independently. Clinicians are experimenting with consumer AI products. And nobody has a complete picture of what's running, what data it's touching, or who approved it. Even a small step, like designating a single person or committee as the point of accountability for AI decisions, changes the quality and safety of what gets deployed. That person or group needs the authority to set standards, approve new tools, and require ongoing monitoring.
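What does that "complete picture" look like in practice? Below is a minimal sketch of a central AI registry the accountable person or committee could maintain. Every field and entry here is an assumption for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: one registry of every AI tool in the organization,
# so someone can finally answer "what's running, what data is it touching,
# and who approved it?" Field names are illustrative assumptions.
@dataclass(frozen=True)
class AIRegistryEntry:
    tool: str
    department: str
    data_touched: tuple            # e.g., ("PHI", "claims", "tickets")
    approved_by: str               # the designated accountable owner
    approved_on: date
    last_reviewed: date            # ongoing monitoring, not one-time approval

registry = [
    AIRegistryEntry("Readmission risk model", "Population Health",
                    ("PHI",), "AI Steering Committee",
                    date(2024, 3, 1), date(2025, 1, 15)),
    AIRegistryEntry("Service desk triage agent", "IT",
                    ("tickets",), "AI Steering Committee",
                    date(2024, 9, 10), date(2025, 2, 1)),
]

# One question a central owner can now answer in seconds:
# which tools touch PHI?
print([e.tool for e in registry if "PHI" in e.data_touched])
```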
This is the organization's responsibility. You can't delegate governance to a vendor, and you can't leave it to individual departments to figure out on their own. Leadership has to set the standards, define the decision-making structure, and make it clear that AI adoption is a managed, organization-wide effort.
Education first, purchasing second
The most important AI investment you'll make isn't a product. It's the knowledge to evaluate one.
Your team needs to understand the basics: what types of AI solutions exist, what validation and testing looks like, what ongoing monitoring requires (concepts like model drift and population-specific training data), and how to evaluate vendor claims with informed skepticism. Your decision-makers need enough fluency to ask the right questions and interpret the answers.
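Model drift, for instance, isn't an abstraction; it's something you can measure. Here's a minimal sketch of one common drift check, the population stability index (PSI), comparing the population a model was validated on against what it sees in production. The bin count, threshold, and synthetic data are illustrative rules of thumb, not standards.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and a
    current sample of the same variable (inputs or model scores)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # population the model was validated on
today = rng.normal(0.4, 1.2, 5000)      # shifted population in production
print(f"PSI = {psi(baseline, today):.3f}")  # >0.25 is often treated as significant drift
```

A team that can run and interpret a check like this is a team that can hold a vendor's "continuously monitored" claim to a concrete standard.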
When I work with healthcare organizations, the ones generating the most value from AI share a common trait: they invested in education for their people and strategy for their organization before they invested in tools. They understood the terminology. They knew what questions to ask. They had a governance structure in place. And when they did make a purchasing decision, it was grounded in a real understanding of what they were buying and why.
If you're feeling the pressure to act on AI, start here. Educate yourself and your leadership team. Build a lightweight governance framework. Define who owns AI decisions. Then evaluate tools from a position of knowledge and clarity.
The organizations that rush to buy will spend the next several years cleaning up the consequences. The organizations that invest in understanding first will make better decisions, adopt more effectively, and generate measurable value faster.
![Governance steering committee](https://cdn.prod.website-files.com/69adb37e2a2f75c0ed27c1f0/69b87e316ff3790e9c123b5c_Henecorp%20-%20Governance%20SteerCo.jpg)
