Blog

Why This Coder Finally Believes in AI

How a veteran coding leader went from skeptical to convinced that GenAI can finally elevate accuracy, quality, and coder confidence.


The Gist

With nearly three decades in coding and revenue cycle operations, Mindy Harris, RHIA, CCS, CDIP, CPC, CPMA, CRC, has seen every generation of coding technology — and every one of its shortcomings. In this post, AKASA’s director of coding shares why she finally believes generative AI is different. Drawing on her experience across critical access hospitals, large health systems, and national RCM vendors, Harris explains how GenAI can interpret the full clinical record, strengthen accuracy, and become a true partner to coders rather than another tool they have to fix. A must-read for leaders shaping their mid-cycle strategy.

When you’ve spent nearly three decades in medical coding as I have, you develop a healthy skepticism about technology.

Coders don’t get to rely on marketing language or demo theatrics. The work is too detailed, too high-stakes, too dependent on accuracy. If a tool gets something wrong, even subtly, it’s the coder and the health system who bear the consequences.

So, when generative AI (GenAI) entered the conversation, especially in the context of inpatient coding, my initial reaction wasn’t one of excitement. It was curiosity with guardrails. I wanted to see every suggestion, every rationale, every edge case — and I wanted to verify it for myself.

Years of CAC-trained reflexes make coders skeptical, and for good reason

Anyone who has used CAC tools knows why coders hesitate. Too many systems surface half-supported guesses, misleading highlights, or long lists of codes with no documentation support. Coders end up deleting everything and coding from scratch because it’s faster — and safer — than trying to sort the helpful from the harmful.

That muscle memory is hard to shake.

Which is why I was stunned when I joined AKASA and learned the team wasn’t starting with outpatient or radiology or other “easy-win” domains. They were training GenAI to support inpatient coding — the very area in which CAC systems have historically struggled the most. Honestly, I didn’t expect any company to tackle inpatient first; most vendors avoid it because the complexity is so high.

A 50,000-word inpatient chart is a universe of detail: notes, orders, labs, consults, complications, clinical progression, procedures, and the evolving story of a patient’s entire stay. Coding it well takes experience, judgment, and a brain that can hold more context than the guidelines alone can capture.

Trying to build AI for that felt almost audacious. There’s a reason most companies stick to radiology or simple outpatient visits — those areas are predictable. Inpatient coding is where nuance lives.

And yet, the first time I saw the model in action, something clicked.

Not dramatic revelations, but consistent justifications

Harris presenting at AHIMA about GenAI

What changed my perspective wasn’t that the AI found one shocking code I’d completely overlooked. It was that every suggestion came with complete, evidence-backed justifications — actual references to documentation, indicators, and clinical patterns pulled from across the chart.

Not vague explanations. Not generic, rules-based snippets. Detailed, chart-specific reasoning showing exactly why a code was being proposed.

This was the moment everything shifted.

Because coders don’t just need suggestions — we need proof. We need the “why” behind every recommendation. That’s how we validate accuracy. That’s how we build trust. That’s how we maintain compliance.

And the model didn’t just produce justifications occasionally. It did so consistently:

  • linking findings from labs to physician notes

  • connecting clinical evidence to related diagnoses

  • showing how indicators supported an MCC or CC

  • highlighting clinical clues that matched coding guidelines

  • pulling in addenda or late documentation that often gets missed

I wasn’t looking at guesses. I was looking at a system that had actually read the chart.

Want to know more about this technology? Here are 10 Things Healthcare Leaders Need To Know About LLMs and Generative AI.

This is the difference coders have been waiting for

The power of GenAI isn’t just that it can ingest the entire record. It’s that it can explain itself. That’s the piece CAC tools never mastered.

Coders rely on rationale. Auditors ask for rationale. Payers demand rationale.

Without justification, a suggestion is noise. With justification, it becomes a second set of eyes.

And for inpatient coding — where every detail matters and missing a secondary diagnosis can shift a DRG, a quality measure, or a denial — that layer of transparency is everything.

Read this blog about Why Medical Coding Needs Generative AI.

Changing what coding work looks like

There’s understandable anxiety in the market about AI taking coding jobs, and we should be honest about what’s actually happening: some types of coding will change.

Routine, low-complexity coding — the encounters with short documentation and predictable patterns — may eventually be handled almost entirely by AI.

And that’s actually a good thing.

Those tasks were repetitive and didn’t require the level of judgment that experienced coders bring. They were “starter” jobs, not the work that makes coders indispensable.

As that work shifts to automation, coders move further into what they’ve always been trained for: auditing, validating, interpreting nuance, resolving complexity, and protecting compliance.

This is the future:

  • AI handles the routine

  • Coders oversee the quality

  • Edge cases, rare conditions, gray zones, and clinically intricate encounters stay squarely with human expertise

Health systems already struggle to hire and retain coders, especially for complex roles. GenAI doesn’t remove coders — it frees them to do the work only humans can do. The work that actually strengthens revenue integrity. The work that ensures clinical intent is accurately represented. The work that keeps the entire mid-cycle moving forward.

The job doesn’t disappear. It evolves — and becomes more meaningful.

GenAI finally gives coders what we’ve needed all along

Not shortcuts. Not guesswork. Not rules-based automation pretending to understand documentation.

A system that actually reads the record, understands the clinical story, and shows its work — every time.

For the first time in decades, coders are no longer being asked to trust technology. We’re being shown why we can.

Want to see how this actually looks in practice? AKASA Coding Optimizer uses GenAI to read the entire clinical record, surface accurate and complete code suggestions, and provide clear, evidence-backed justifications for every recommendation — all within existing workflows.

If you’re looking to strengthen accuracy, reduce denials, and support your coding team as their work evolves, Coding Optimizer is a strong place to start.

Mindy Harris

Dec 4, 2025