
What Happens When You Review 100% of Charts

A CDI veteran on why sampling in the mid-cycle is no longer enough — and what happens when you finally see everything

The Gist

CDI teams have typically relied on sampling to understand performance. But what happens when you start seeing everything?

In this piece, Linda Schatz, RN, BSN, CCDS, director of CDI at AKASA, shares what she’s learning from health systems reviewing far more of their encounters — and why it’s changing how leaders think about coding, CDI, and quality. The biggest surprise isn’t just missed revenue. It’s the variation, inconsistency, and blind spots that sampling never reveals. As clinical records grow more complex, the question isn’t how many charts you can review. It’s whether you’re seeing what matters.

This blog explores what full visibility uncovers and why it’s becoming essential for modern CDI teams.

For years, CDI and coding teams have relied on sampling.

We review a subset of charts. We audit a percentage of cases. We use that to estimate accuracy, identify trends, and guide education. It’s how most of us were trained to think about performance.

And for a long time, that approach worked.

But the environment we’re operating in today is very different.

Clinical records are longer. The expectations on CDI teams are higher. And the number of things we’re responsible for — quality metrics, risk adjustment, compliance — has grown significantly.

At some point, you have to ask yourself:

Is a sample still enough to understand what’s really happening?

Because what I’m seeing across organizations right now is this: It’s not.


The moment everything changes

I was working with a health system leader recently who oversees coding, CDI, and revenue integrity.

She said something that stuck with me:


“I thought I knew what my teams were doing. I didn’t.”


That wasn’t a comment about effort or talent. It was about visibility.

When you’re only reviewing a portion of charts, you’re making assumptions about the rest. You’re taking what you see in a sample and applying it to the entire operation.

But when you start looking at everything — or even get close to it — you begin to see something very different.

You see variation. Inconsistency.

And you start to realize how much is happening outside of what you typically review.

Want to learn how Cleveland Clinic is tackling these challenges? Check out these takeaways from a webinar I recently co-hosted with them at ACDIS Virtual Summit.


What shows up when you see everything

One of the first things that becomes obvious is how differently people approach the same work.

Two experienced CDI specialists can review similar cases and come to different conclusions. One identifies a documentation opportunity. The other doesn’t. One interprets the clinical picture one way; someone else sees it differently. One knows the intricacies of the Elixhauser comorbidity measures and the other doesn’t.

That’s not because one person is right and the other is wrong.

It’s because the work has become incredibly complex. And we’ve never had a way to consistently see how decisions are being made across the entire team.

In another organization I work with, a leader told me that within just a few weeks of expanding their review, they started uncovering patterns they had never seen before. Not isolated issues, but repeatable ones. Missed opportunities, inconsistent decisions, and areas where teams needed more alignment.

None of that showed up in their sampling.


The limits of how we’ve always done it

Sampling made sense when charts were shorter and the scope of CDI was narrower.

Back then, you could review a portion of cases and feel confident that you understood what was happening.

That’s not the reality anymore.

Today, a single inpatient encounter can require evaluating documentation across multiple dimensions at once — clinical validity, coding accuracy, quality measures, and more. The record itself can span dozens of pages, with critical information buried in different places, using different terminology.

We’re asking people to quickly and consistently synthesize all of that.

And then we’re trying to measure performance based on a small slice of it.

The gap between those two things is where problems start to show up.


What full visibility actually gives you

When organizations begin reviewing more of their encounters, the biggest shift isn’t just financial.

It’s operational.

You start to understand:

  • Where opportunities are being missed

  • How decisions vary across individuals

  • What’s actually driving inconsistencies

  • Where education is needed

One leader I’ve worked with put it this way:


“I don’t need to act on everything. But I want to see everything.”


That mindset is important.

Because once you have visibility, you can make better decisions about where to focus. You can prioritize what matters most. And you can start aligning coding, CDI, and quality more intentionally.

I used to not believe in the abilities of this new wave of AI. Then I had an aha moment that changed everything. Read this blog post about it.


This isn’t about replacing CDI

Whenever we talk about reviewing more charts, the natural question is whether this replaces CDI work.

In my experience, it does the opposite.

It allows CDI professionals to focus on the part of the job that actually requires expertise: clinical judgment, interpretation, and decision-making.

Right now, a lot of time is spent searching. Looking for the right terms. Trying to piece together information across different parts of the record. Making sure nothing was missed.

But as the volume and complexity of documentation continue to grow, that kind of manual review becomes harder to sustain.

No matter how experienced someone is, there are limits to what any one person can consistently catch.

What we’re seeing now is that technology can help surface the signals — things that might otherwise be buried or overlooked — so that CDI teams can focus on evaluating and acting on them.

That’s a very different kind of workflow.


Why this matters now

The role of CDI has expanded significantly over the last several years. It’s no longer just about DRGs. It’s about quality, risk, compliance, and ensuring the clinical story accurately reflects the care delivered.

At the same time, many organizations are trying to bring together functions that historically operated separately: coding, CDI, quality, and revenue integrity.

That’s not a small shift.

One leader described it to me as “everything coming together at once.”

And that’s exactly what it feels like.

But it also creates an opportunity to rethink how the work gets done.

Because when those groups start to align — and when you have visibility into what’s actually happening across all of them — you can begin to address issues at the root, not just react to them downstream.


A different way to think about the problem

The organizations that are making progress right now aren’t trying to review every chart manually. They’re asking a different question.

Not, “How many charts can we review?”

But, “How do we make sure we’re seeing what matters?”

That shift changes everything. Because it moves the focus away from volume and toward insight.


What this means in practice

If you’re still relying primarily on sampling, it may be worth taking a step back and asking:

What aren’t we seeing?

Because in today’s environment, the biggest risks — and the biggest opportunities — are often the ones that never make it into the sample.

And once you start to see more of the picture, it becomes very hard to go back.

This is exactly the challenge we’re working on with health systems at AKASA.

Not how to replace CDI teams, and not how to review every chart manually. But how to give teams full visibility into what’s happening across coding, CDI, and quality, so they can focus on what matters most.

That means:

  • Surfacing opportunities across 100% of encounters

  • Highlighting where documentation and coding are misaligned

  • Helping teams prioritize where to act

Because the goal isn’t just to find more. It’s to make better, more consistent decisions across the board.

If you’re starting to think about this differently, or just want to compare notes on what you’re seeing in your own organization, I’d love to hear from you. Send me an email, and we can chat.



Linda Schatz

Linda Schatz, RN, BSN, CCDS, is the director of CDI at AKASA. With more than 40 years of experience in healthcare, the last 15 focused on documentation and coding, Schatz has built a comprehensive career. She began as a Medicare auditor before moving into CDI, taking on consulting roles across multiple firms. Schatz has implemented CDI programs across the country, provided CDI education, and spent a large portion of her consulting career in provider education related to documentation integrity at the Advisory Board Company. She has also served as the corporate director at a large health system, moving the organization to a centralized program focused on quality. In her current role at AKASA, Schatz acts as a CDI subject matter expert, working with clients and contributing to the development of generative AI-powered CDI tools. She works closely with machine learning teams to improve coding suggestions, manages and audits CDI staff within AKASA, and ensures quality and productivity standards are met.