What I've Learned About AI Adoption in Healthcare From Sitting in Three Different Chairs
I've approached healthcare AI from three different angles, and each one changed how I think about it.
First as a Director at El Camino Health, where I was accountable for enterprise applications supporting over 7,000 users. I evaluated vendors, managed implementations, and lived with the consequences of adoption decisions. I knew what it felt like when a tool didn't land with clinical staff, and I knew what it took to build enough trust to get one that did.
Then as COO of Bastion Intelligence, a HIPAA-compliant AI platform built for healthcare organizations. Suddenly I was on the other side of the table, understanding what it takes to get providers to actually adopt a product, and how wide the gap can be between a compelling demo and real-world use.
And now through Henecorp, where we advise both health systems and healthcare AI companies on strategy, governance, and implementation.
Those three chairs have given me a perspective I find hard to get from reading about this topic. So here's what I actually believe about AI adoption in healthcare, based on what I've seen work.
The hesitation is rooted in accountability
That hesitation comes from how accountability works in healthcare. When something goes wrong in a health system, someone is responsible. Patients don't get second chances on clinical errors. That reality shapes every technology decision made inside a provider organization, and it should.
The concerns I hear most often are legitimate. Hallucinations, data bias, data poisoning, audit risk. These are the kinds of problems that end careers and harm patients. Any honest conversation about AI adoption in healthcare has to start by acknowledging that.
What's changed is that the industry has gotten serious about addressing them. Audit trails, transparent outputs, data residency controls, and HIPAA-compliant architectures have all matured significantly. Organizations that build validation into their adoption process from the start can catch problems early, learn quickly, and build real confidence over time.
What I've seen work in practice
The leading healthcare organizations I've watched adopt AI successfully share a few things in common. They started in areas where the feedback loop was fast and the stakes were contained. They let domain experts validate outputs in real time. And they treated early limitations as information rather than indictments.
Tools like BastionGPT have been part of that story for some organizations. A HIPAA-compliant environment where providers can interact with AI directly, on their own terms, without worrying about data leaving a secure perimeter, removes one of the biggest psychological barriers to experimentation. Even providers who start out skeptical of AI tools generally find it easier to engage when they control the interaction and trust the environment.
That first safe experience matters more than most people realize. Once a clinician or administrator sees AI work accurately in their own workflow, the conversation about broader adoption changes completely.
Where teams can start, by role
The most practical advice I can give is to start where domain expertise provides a natural quality control layer. Every role that validates AI outputs is also teaching the organization something about where the tool works and where it needs guardrails.
Executives can start with drafting, summarization, and calendar analysis. Low patient impact, immediate feedback, and genuine time savings that build firsthand confidence before scaling decisions get made.
Clinicians can start with AI-assisted documentation and note drafting. A physician reviewing an AI-generated note knows immediately whether it captured their clinical thinking accurately. That real-time expert validation is one of the most powerful adoption mechanisms available.
Revenue cycle teams can start with prior authorization and denial management. AI surfaces patterns in claim rejections faster than any manual review process. Staff focus their expertise on the appeals and exceptions that require genuine judgment, and the volume work gets handled.
Supply chain teams can start with demand forecasting and inventory management. The patterns driving stockouts and waste are well-suited to AI analysis. Procurement professionals focus on vendor relationships and strategic decisions.
Quality measures and compliance teams are where I see the most compelling long-term shift. Abstractors and compliance staff spend significant time pulling clinical data, chasing documentation gaps, and following up on records. AI handles that work well. When it does, those professionals move toward prevention, outcome monitoring, and proactive patient intervention. The expertise they've spent years building gets applied to the work that actually changes patient outcomes. That's a shift I've seen start to happen in organizations that committed to AI-assisted abstraction early.
Validation is what makes adoption work
Validation deserves more credit than it typically gets in AI adoption conversations.
When a revenue cycle specialist reviews an AI-flagged denial pattern and confirms it, that builds institutional confidence that the tool works. When a quality abstractor catches an AI documentation error and flags it, the organization learns exactly where the guardrail needs to be stronger.
Domain experts validating AI outputs in real workflows are doing something irreplaceable. They're providing expert judgment at the moment of application. That's how trust gets built, and trust is what scales adoption.
Revisit what didn't work before
AI capabilities are improving faster than any technology cycle I've seen in 15 years of healthcare IT. A vendor whose tool hallucinated frequently 12 months ago may have addressed those issues. A tool that couldn't integrate with Epic last year may have a certified integration today. An approach that felt premature 18 months ago may be ready now.
Organizations that dismissed tools based on past performance and never revisited that decision are leaving real value on the table. Build in a regular review cycle, quarterly or twice a year, and go back to the tools and approaches that didn't make the cut. Have the vendor walk you through what changed. Run a small pilot on the specific failure mode that concerned you.
The field is moving fast enough that past experience is a starting point for evaluation, not a verdict. Revisit decisions regularly as the technology matures.
How we think about this at Henecorp
The framework we bring to every AI adoption conversation starts with the same question: where can this organization start in a way that keeps them in control?
Control means validation mechanisms. It means domain experts in the loop. It means audit trails and data governance. It means HIPAA-compliant environments where providers can experiment safely. And it means building in the humility to revisit past decisions as the technology matures.
Healthcare AI adoption will take time. The accountability structures that make healthcare work also shape the pace of change. Organizations that move well find a safe starting point, validate rigorously, learn from what they see, and keep moving.
Where's your organization's safest starting point?
![[interface] image of an ai software interface with modern features](https://cdn.prod.website-files.com/69adb37e2a2f75c0ed27c1f0/69e28c725dd32a559056a7f1_Henecorp%20Blog%20Post%203%20Chairs%20.png)
