From Chaos to Clarity in Data With Malinka Walaliyadde

AKASA
June 16, 2025

The Gist

The U.S. delivers some of the best medical care in the world — but getting paid for that care is a different story. In this episode of the Raising Health podcast, AKASA CEO and co-founder Malinka Walaliyadde joins a16z’s Julie Yoo to dive into the complexity. With real-world results, strategic partnerships, and a vision to bridge providers and payers, this is a must-read (and listen) for anyone curious about how AI is transforming healthcare behind the scenes.

In this edition of Raising Health, a16z Bio + Health general partner Julie Yoo sits down with Malinka Walaliyadde, CEO and co-founder of AKASA, to unpack one of the most complex — and consequential — challenges in American healthcare: the revenue cycle.

While the U.S. is home to world-class clinical care, the administrative infrastructure that supports it often lags far behind. At the heart of this dysfunction is the way we pay for care — and, more specifically, how providers get reimbursed. As Walaliyadde explains, the process of telling the “patient story” to payers through things like prior authorizations, medical coding, and claims submission has historically been dense, manual, and fragmented.

But that’s changing. With the emergence of large language models (LLMs), AKASA is bringing intelligence and automation to the most complex part of the revenue cycle.

From translating 50,000-word patient records into billable codes, to outperforming human coders with LLMs, Walaliyadde shares how AKASA is applying deep domain expertise and cutting-edge technology to one of healthcare’s most costly bottlenecks.

Listen to the podcast or read on for a full transcript.

 

Introduction

“American medicine is the best in the world, but the American healthcare system is not.”

That’s how Malinka Walaliyadde, CEO and co-founder of AKASA, kicks off his conversation with a16z Bio + Health general partner Julie Yoo. For Malinka, the heart of the problem isn’t the care itself — it’s the complexity of how we pay for it.

And that complexity, especially in the revenue cycle, is where AKASA is building.

At the core of AKASA’s business is a complex issue: medical coding. Before LLMs, this was a wall of impenetrable data. But AKASA is using AI to cut through this complexity, aiming to simplify the process for patients, providers, and payers alike. Let’s get started.

Malinka Walaliyadde

What is revenue cycle?

The easiest way to think about it is the process by which health systems get reimbursed for the healthcare services they provide. I like to say that American medicine is the best in the world, but the American healthcare system is not. Right?

There’s a pretty big gap between those things, and a lot of that gap, I think, is because of how complex we have made paying for healthcare in the U.S. By improving the revenue cycle, we can close that gap. We can reduce all this friction we’ve created, friction that takes countless hours from providers and creates so much confusion and surprise for patients.

So that’s what the point of revenue cycle is: to make that process better.

I often like to make an analogy here to the self-driving car world, where if we could have built entirely new roads for self-driving cars and built entirely new infrastructure, we would’ve had self-driving cars way sooner. We have them now and they’re incredible. But for a long time, people were asking, when will this happen?

And I think a similar thing applies here. We’re probably not going to reinvent the entire healthcare payment rails of the U.S., but with sufficiently advanced technology, with these incredibly powerful LLMs that we have, we actually can still distill simplicity out of all of this complexity.

Julie Yoo

Okay. So give us one example of a use case that AKASA tackles in this revenue cycle morass. And then, specifically, how do LLMs play a role?

Malinka

If you zoom out enough on revenue cycle, it’s all actually trying to do one thing, which is communicate the patient story completely to the payer.

We call it many different things based on where we are in the revenue cycle. We call it prior authorization, coding, or denials, or something else. But really all of it is the same thing, which is communicate the full patient story.

Julie

The simplest form of that is like the billing code? What diagnosis the patient has, what medical service was rendered.

Malinka

Exactly. And, in order to communicate that story well, you actually need to understand the medical record deeply. Before LLMs, we really couldn’t do this. For software, it was basically an impenetrable wall of text.

Now you can actually make sense of it with software. And that unlocks so much in the revenue cycle that wasn’t possible before. What we do at AKASA is develop internal models that understand clinical data and financial data. That’s the model layer, and on top of those models sit copilots that help staff do their work more effectively and more efficiently.

And, to be more specific, different parts of the revenue cycle are accelerating in slightly different ways with LLMs. The mid-cycle is the specific part around medical coding and clinical documentation integrity, the part where the visit is actually converted into a set of codes that is then sent to a payer.

(Uncover why the mid-cycle needs to be the top priority for health systems to focus AI efforts.)

We have found that that part is being accelerated the most because it has previously been the most complex. Overall, we think it creates the most value for a health system. So that is where we as a company have been the most focused.

Julie

Is there a certain thing that’s occurred more recently about that landscape that has triggered a “why now” around why it is so important to be applying a technology-based approach to revenue cycle in the last few years? As opposed to, for the last few decades, we’ve been getting along using manual labor, using other offline processes to do this. Was there any sort of externality that created this “why now” around the application of AI?

Malinka

I think there are some very industry-specific things (and I won’t get into the jargon in a ton of detail) like moving from ICD-9 to ICD-10. The number of different ways you can code and describe something has increased many times over.

And so it means every part of the process has now become substantially more complex. But the payer side in particular has been adopting some of these technologies faster, I would say. So what we have been focused on is enabling the provider side to be able to participate on the same landscape.

So those are some of the things that we are seeing. First, the types of care we’re able to deliver now are so much more sophisticated than before. We can do so many more things now. And so it becomes particularly important to have these types of tools to help.

Julie

Yeah. I always end up using very belligerent language when talking about the revenue cycle.

But it’s kind of an arms race where one side is just ahead of the curve, and therefore we have to arm the other side with the appropriate tools just to keep up. And given everything that you’re describing, this whole area of AI applied to the revenue cycle is quite hot these days, at least in our world.

AKASA is a very purpose-built, pure-play approach to the AI for revenue cycle space. You also have legacy BPOs basically trying to infuse AI into their own platforms. You also have lots of other AI companies who are saying, okay, I’ve started over here in some other domain, like clinical or what have you, and now I’m converging on revenue cycle just because it’s such a huge problem space.

What’s that been like for you to be kind of in the center of that storm? And where do you see the chessboard going in the next five years in terms of the players in this space?

Malinka

It’s been really interesting seeing this landscape evolve exactly as you described. And there’s an interesting analogy you can draw to the FinTech world, right? You have lots of companies that started in different places. You had companies that started in travel management and then moved into expenses. There were companies that started in expenses, then moved into travel management and maybe also payroll. There are companies that did payroll and then did a bunch of other things.

The interesting thing is it’s a big enough problem, frankly, that all of these companies in that world have been enormously successful.

Julie

You and I have talked a lot about this notion that the claim is like the unit of logic that underpins this entire end-to-end rev cycle.

And, as engineers, we look at that and we’re like, “oh, that’s a computer science problem.” You can solve that through technology. If you have information on both sides of that equation, you pretty much could see a universe in which you automate that full end-to-end process.

But that’s obviously much easier said than done.

Do you envision a world in which we could effectively automate the entire end-to-end process? Even eliminate the claim, frankly, and create these sort of TSA PreCheck, gold-card-type programs across the entire spectrum. What’s between us and that vision of how this whole system could play out?

Malinka

Part of it is a technology problem, and part of it is a sort of structural social item as well, right? You actually need the payers and the providers to really coordinate super well with each other.

What often ends up happening is you can build a system that is very good for the set of policies in play at any given moment, but then you have rapid change on one side (usually the payer side), where they’ll change how they pay for things. You either have to be able to react very quickly, which is what we’re working towards, or there just needs to be better communication between the two sides.

So those are some of the things that can make something like that challenging. You can still accommodate that though with technology that can adapt really, really quickly. And to adapt really quickly, it means you have to process really quickly and understand records quickly.

And, again, none of those things were possible until LLMs, so we can actually start talking about some of those things now, when we really couldn’t before.

Julie

The last couple of years have been so dynamic, right?

In terms of what’s happening in the foundation model world and how quickly that whole space is moving. Every week there seems to be some new tectonic shift in the capability set of these foundation models across multiple modalities. How have you guys dealt with that? What’s been good?

I mean, I think there’s some obvious things that probably are really good for us. What’s been bad and challenging, and do you think there’s potential negative effects of the space moving so quickly around us? How have you guys kind of leaned into that whole universe?

Malinka

It is an age of miracles. I don’t think that, a few years ago, we expected to breeze through the Turing test so easily, with no one even noticing. It’s kind of wild. But we have set up our company to benefit from the advancements that these folks are making.

So, at AKASA, we use LLMs in two ways. We have internally tuned models that we develop, and we also use commercially available LLMs. Our core decision-making is typically based around our internal models.

And we did test the commercially available ones for those use cases as well. So, for example, for the act of medical coding, we did do that. It just wasn’t very good because it turns out that to solve these problems, you actually need very specific types of data sets to solve the specific task at hand. Right? And this data just does not live on the internet. Much of it is obviously PHI, so it just cannot live on the internet.

And not only do you need the very, very specific types of data, you also need to structure it in a certain way to make it amenable for LLMs to internalize. So when these foundation model companies come up with new advancements, they will typically start by making those available for their commercial APIs.

And when they do, great, we plug those into the parts of our stack that benefit from that. But then the nice thing about the industry is they’ll typically then also open source these things, right? And when that happens, we re-base our internal models on those open-source models and build on top of those.

So we’ve set ourselves up in a way to benefit from the dual tailwinds of improvements in both the open-source and closed-source worlds.
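
(For readers who want to see how this kind of setup translates into engineering terms, here is a rough, hypothetical sketch of a dual-sourcing pattern: core decisions routed to an internally tuned model, lower-stakes general-purpose tasks routed to a commercial API. The function names, model names, and routing rules are illustrative assumptions, not AKASA’s actual stack.)

```python
# Hypothetical sketch of a dual-sourcing setup. Core revenue-cycle decisions
# go to an internally tuned model; lower-stakes, general-purpose tasks can go
# to a commercial API. All names and routing rules are illustrative.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ModelRoute:
    name: str
    generate: Callable[[str], str]  # prompt -> completion


def internal_model_generate(prompt: str) -> str:
    # Stand-in for an internally hosted model tuned on clinical and financial
    # data (e.g., periodically re-based on a newer open-source checkpoint).
    return f"[internal model completion for: {prompt[:40]}...]"


def commercial_api_generate(prompt: str) -> str:
    # Stand-in for a call to a commercially available LLM API.
    return f"[commercial API completion for: {prompt[:40]}...]"


ROUTES = {
    "core_decision": ModelRoute("internal-coding-model", internal_model_generate),
    "general_task": ModelRoute("commercial-llm", commercial_api_generate),
}


def run(task_type: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model is appropriate for the task type."""
    route = ROUTES.get(task_type, ROUTES["general_task"])
    return route.generate(prompt)


if __name__ == "__main__":
    print(run("core_decision", "Assign ICD-10 codes for this inpatient encounter..."))
```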

Julie

You work with some of the most complex, largest, most sophisticated health systems in the country. You just announced a couple of really amazing deals. And you have this great map internally of the 24 end-to-end steps in the revenue cycle. So there’s a lot of surface area that you could be covering.

Where did you start? How did you pick the first couple of areas? And, given the fact that pretty much every provider in the country already has pretty established revenue cycle processes, what’s been the insertion motion for you to kind of get in there?

Malinka

Great point. So, we just announced a deal with Cleveland Clinic, which is an incredible organization. They’ve been a great partner to work with, and in particular we’ve been working with them on our medical coding AI and deploying that there. And so this mid-cycle medical coding has actually been where we have been the most focused.

So, just to provide context on the problem at hand: within coding there are a few different types. We focus on inpatient coding. This is the most complex type of coding. It is coding for multi-day patient stays. And it is also what drives the majority of the value and revenue for a health system. It’s 50-60%+ of revenue just from this domain.

I was actually listening to a podcast you did a little bit ago with Seth from Epic, which was a great podcast. And I remember him saying: some of these things are so long, it’s like we have them read To Kill a Mockingbird.

And I was like, huh, that is an interesting unit of measurement. So I went and looked at how long is it for the coding task? Right? And it’s half of To Kill a Mockingbird.

To Kill a Mockingbird is about a hundred thousand words. The typical encounter that one of our inpatient coders has to go through to do their work is about 50,000 words. They then have to read those 50,000 words and convert them into a set of 19 codes, picked out of 140,000 possible codes, to represent that patient story.

This is wild. Right? This is like such an incredibly hard task. So that’s the type of problem we’re talking about. And they’re expected to do at least two of these an hour. Obviously, AI should be able to help.
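
(To make the shape of that task concrete, here is a minimal, hypothetical sketch of an LLM-assisted pass over a long encounter, assuming a generic `llm_complete` function and a known ICD-10 code set. The chunk size, prompt, and helper names are illustrative assumptions, not AKASA’s pipeline.)

```python
# Hypothetical sketch of an LLM-assisted pass over a ~50,000-word inpatient
# encounter: chunk the record, ask a model for candidate ICD-10 codes, then
# keep only codes that exist in the official code set. Illustrative only.

import re

# Simplified ICD-10 shape check: letter, digit, alphanumeric, then an optional
# dot plus up to four alphanumeric characters (e.g., "E11.9").
ICD10_SHAPE = re.compile(r"^[A-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")


def chunk_record(record_text: str, max_words: int = 4000) -> list[str]:
    """Split a long encounter into chunks a model can read in one pass."""
    words = record_text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def suggest_codes(chunk: str, llm_complete) -> set[str]:
    """Ask the model which ICD-10 codes this chunk of documentation supports."""
    prompt = (
        "Read the following clinical documentation and list the ICD-10 codes "
        "it supports, one per line, with no commentary:\n\n" + chunk
    )
    return {line.strip() for line in llm_complete(prompt).splitlines() if line.strip()}


def code_encounter(record_text: str, llm_complete, valid_codes: set[str]) -> list[str]:
    """Aggregate per-chunk suggestions and keep only known, well-formed codes."""
    candidates: set[str] = set()
    for chunk in chunk_record(record_text):
        candidates |= suggest_codes(chunk, llm_complete)
    return sorted(c for c in candidates if ICD10_SHAPE.match(c) and c in valid_codes)
```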

So we did try to solve this problem pre-LLM with the state-of-the-art transformers at the time, but they just had smaller parameter counts, all of that. And it worked okay, not great. And so we didn’t end up commercializing that in a big way.

Post-LLM, we tried it again ’cause we said, okay, this feels like it should work now. And it worked remarkably well. And then we actually had a third-party coding auditor review the AI-coded encounters and the human-coded encounters and say what they would prefer. And we were surprised that they actually preferred the AI-coded ones more. We had no idea what would happen. Right. We were hoping to be at least as good. And it turned out not only was it as good in a bunch of the cases, but it was actually better.

So we thought, all right, what if we have an AI reviewer version of this product? Like, it’s actually better than a human. What if we have it plug in as an AI reviewer and look at the work that humans have done in the past?

And this actually is super beneficial from a go-to-market perspective. Because, to your point of how do you plug in, this is such an easy wedge into the market because it means we don’t have to change the existing frontline workflow. We can just come in after the fact and review what they’ve done.

And that’s what we did. It worked incredibly well. The results we’ve seen with health systems have been remarkable. We’re improving the clinical complexity capture rate by as much as 60% in a bunch of these cases.

So enormous value being driven, born out of actually looking at how effective these LLMs are in practice.
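
(A minimal sketch of what such an after-the-fact review could look like in code: diff the codes a human coder assigned against the codes a model would assign, and surface the discrepancies for an auditor. The data structures and example codes below are hypothetical, not AKASA’s product.)

```python
# Hypothetical sketch of an after-the-fact coding review: compare the codes a
# human coder assigned with the codes a model would assign, and surface the
# discrepancies for audit. Purely illustrative.

from dataclasses import dataclass, field


@dataclass
class ReviewFinding:
    encounter_id: str
    missing_codes: list[str] = field(default_factory=list)       # supported by the record, not billed
    questionable_codes: list[str] = field(default_factory=list)  # billed, but not supported per the model


def review_encounter(encounter_id: str, human_codes: set[str], ai_codes: set[str]) -> ReviewFinding:
    return ReviewFinding(
        encounter_id=encounter_id,
        missing_codes=sorted(ai_codes - human_codes),
        questionable_codes=sorted(human_codes - ai_codes),
    )


# Example: the model surfaces a complication the original pass may have missed.
finding = review_encounter(
    "enc-001",
    human_codes={"I10", "E11.9"},        # hypertension, type 2 diabetes
    ai_codes={"I10", "E11.9", "N17.9"},  # plus acute kidney failure, unspecified
)
print(finding.missing_codes)       # ['N17.9']
print(finding.questionable_codes)  # []
```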

Julie

So when you say better, when you say these perform better: you just alluded to one, which is higher sensitivity on the number of codes that you can code to a given encounter. Talk to me more about that. What are they measuring you on?

Malinka

How accurately are you representing the patient story in the set of codes that you’re using?

Because we found that if you don’t do this correctly, not only are you missing an opportunity to fully capture the story, but you can actually increase the amount of denials on the backend.

Then there’s this other very sort of esoteric concept in the healthcare world called quality. Like clinical quality. How good was the care delivered? That’s a simple way to think about it.

And if you don’t code well, you actually put these quality scores at risk. And these quality scores directly feed into where health systems rank on various lists, like the U.S. News & World Report rankings.

This one thing actually has so much impact in so many different areas. And it turns out that with LLMs we can improve literally across the board on all of that.

Julie

There’s been a lot of talk on the policy stage about how can we implement just more real-time data exchange between various players.

So, specifically, CMS has talked a lot about a world in which providers are somehow incentivized to share clinical data directly with CMS on a relatively near-real-time basis. And then, they would sort of guarantee more real-time payments, basically, as kind of the quid pro quo.

In a world where that gets executed, who knows if it will, but in a world in which that kind of paradigm exists, what does that mean for your product? Does that mean that there’s no longer use for it? Or how would you think about the value proposition when there is more liquid data exchange across our industry?

Because a lot of what you’re describing is because we don’t have data liquidity across the various parties we need these sort of interim steps to do this. How does that change in a world of pure data liquidity?

Malinka

I think you still need a way to communicate the type of care that was delivered.

Even if you have data liquidity, you cannot just exchange these very raw records. Someone still has to interpret these things. And the system we’ve come up with around codes is not the best system ever, but it’s a system that exists. And you can actually do incredibly well within that system, using it to communicate quality. If a different system comes about that actually does a better job, great. Someone still has to do that job of interpretation, and I would anticipate that we could do that as well, if that were to happen.

Julie

You mentioned our friend Seth at Epic earlier. And we’ve had quite a journey over the last few years in terms of how we play with the EHRs. How are they doing now? Are they playing nice with you? How has that changed, especially with the whole wave of generative AI, specifically in the last couple of years?

And where do you see them playing? Where do you see them not playing? And is one of your major competitive threats the risk that EHRs effectively implement this? Or why is it so hard that they couldn’t?

Malinka

This is another one of those things where, again, it’s hard to say things are impossible. But things can be improbable. For someone like them, it’s been impressive to see how quickly they’ve adopted LLMs and those approaches.

They have so much surface area to cover. They’re not just doing this one specific part of coding. They have to cover a bunch of stuff in both revenue cycle and clinical, and a bunch of other areas. And they’re expanding now into payers and pharma and med devices and all these things. So there’s so much surface area to cover that the thing that makes sense is to do the things where you can create a lot of value by covering a lot of ground without going deep: mile-wide, inch-deep kinds of approaches. And I don’t mean that in a negative way, but there are a lot of use cases like that, where you don’t have to go super deep in terms of the penetration of your LLM to do a really good job.

Then there are others where, to create meaningful value, you actually do have to go a mile deep. There are a number of those in healthcare, I think, more than in most other industries. And so we as a company are really focused on those areas, where building that type of deep domain expertise in a specific area is really key.

And in fact, something we do that is very novel, even in the healthcare LLM world, is that we actually tune a model per health system. So we have an internally developed base healthcare model that understands clinical and financial data. But then when we work with any health system, we actually tune a model specifically for that system based on their own data.

I don’t know of many other folks doing this, and it’s not a trivial thing to do. But the reason it’s important is that every health system has so much nuance in terms of how their providers document and how they code and do all of these other things. And to solve the hardest problems where nuance matters, like inpatient coding, you actually do need to go to that level.
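
(One common way to approximate “a model per health system” with open-source tooling is to keep a shared base model and train a lightweight adapter, such as LoRA, per system. The sketch below assumes the Hugging Face transformers and peft libraries and uses placeholder model and adapter paths; it illustrates the general pattern, not AKASA’s actual implementation.)

```python
# Hypothetical sketch: a shared base healthcare model plus a per-health-system
# adapter (e.g., LoRA) trained on that system's own documentation and coding
# patterns. Model names and adapter paths are placeholders.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "example-org/base-healthcare-model"    # placeholder shared base
ADAPTERS = {
    "health-system-a": "adapters/health-system-a",  # tuned on system A's data
    "health-system-b": "adapters/health-system-b",  # tuned on system B's data
}


def load_model_for(health_system: str):
    """Load the shared base model with the adapter tuned for one health system."""
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTERS[health_system])
    return tokenizer, model
```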

So those are some of the things that it makes sense for us to do because we’re so focused on this specific area. So that’s how I think about it.

Julie

Going back to the concept you mentioned at the beginning, about payers and the fact that they tend, in general, to be a little bit ahead of providers in terms of adopting these kinds of technologies, and that they were kind of the ones who set off this back-and-forth race.

Is the biggest form of AKASA possible without engaging with payers? Or how do you see your role in taking all the intelligence that you have gathered about how to optimize this process, taking out a lot of the cruft that exists today based on how it’s done, and potentially closing the loop to enable a way for payers and providers to actually collaborate more closely?

Malinka

I do think the biggest version of it does include payers. But it’s a staging exercise. We ultimately have decided to focus for now in the near term at least on really enabling providers. And those are always going to be our primary stakeholders.

But there will be many opportunities moving forward where the intelligence that we have built and our knowledge of the provider side is gonna be incredibly helpful for both payers and providers if we were to be a rail between them so that they can communicate through us. That’s something we’re working toward.

Julie

One of the other trends that we’re observing in the AI-for-provider space is this whole notion of these kinds of governance platforms. There were some nonprofit efforts to do this, right? These kinds of consortia that were creating these assurance labs and whatnot, to say, “Hey, we will be the validation arm of your organization to ensure that the models that you’re using in the clinical and administrative setting are valid.”

There’s now even dedicated VC-backed companies that are positioning themselves as these kind of clearinghouse platforms. What’s your take on that?

Malinka

I think it is very hard to make a meaningful dent unless you are all in on what you are doing.

And I think that’s the beauty of startups, right? Because every single person at a startup is all in on solving this specific problem at hand. And something I sometimes find challenging with these sorts of groups is that it’s like a side thing they’re doing while they’re doing some other thing.

And I think it’s been very hard for these folks to make progress because it’s hard to be all in. It’s also hard to do things by committee, right? I think the mission makes sense, but I don’t know if that’s the right way to execute it.

Julie

The trope in venture capital was always that, back in the day, in every boardroom, everyone would ask, “What if Google did this?”

Now, the common question is: what if tomorrow we wake up and OpenAI has announced that they have shipped a purpose-built, healthcare-specific version of their LLM platform? What would that mean for AKASA?

Malinka

People have asked that question. But then we should look at the same efforts of those big tech companies in healthcare even today, right? People kept talking about how Google and Amazon and others would dominate the healthcare world. And they haven’t been able to.

And I think the reasons it has been hard for them are the same reasons it will be hard for the OpenAIs and Anthropics as well. It comes down to focus, and really deep domain understanding of this world. And, actually, it’s a similar thing to what we were just talking about earlier.

You can create value by being a mile wide, right? And these folks can create a ton of value by being a mile wide, and they should do that. There are so many use cases where you can do that. But in healthcare, more than in any other industry, I think there are many places where you kind of have to be a mile deep to actually create value.

But then, beyond that, let’s talk about the data sets at hand. We found that even using generally clinically trained LLMs — because we tried things like Med-PaLM and stuff that already exists today — it really doesn’t work as well as you would think, even if it’s trained on just general clinical data. Because you actually need very specific and somewhat esoteric data sets that are specific to the actual problem at hand to solve these problems really, really well.

Julie

Given how quickly this whole LLM space has been evolving in the last couple of years, one of the biggest bottlenecks has been talent.

There are actually still today only a finite number of humans on the planet who have an applied understanding of how to build products from these LLMs. How have you navigated that, and how are you winning the battle for this very scarce talent?

Malinka

So, counterintuitively, I think being an LLM-native company actually makes it easier for you to literally create more of that talent.

We don’t have to go find it; we can just literally create it. So you might think that if you’re an LLM-native company, you have so many roles to fill that it must be hard. It’s actually easier, because you now have a critical mass of people that actually know this stuff well.

And we as a company are not just sprinkling LLMs on top for marketing purposes. Truly, the core of the product is an LLM. And every day, everything that people work on touches LLM work.

Everyone at the company has so much exposure to it. We have machine learning researchers, machine learning engineers, and software engineers, but our software engineers are doing work that at many places would be considered ML or LLM work. And the reason that’s the case is because it’s so core to the product. And so we have the ability to mentor and teach folks at the company to do things that would be incredibly hard to get access to anywhere else.

I think if you’re a company where you are not focused on LLMs and it’s a small side thing, even though you don’t really need that many people, it’s actually harder for you to do that. Because you cannot actually create that type of talent at the company.

For folks that do want to join and do these things, we have created an environment that is really compelling for a high-quality LLM engineer, or a software engineer who wants to work on LLMs, to work in. We have this combination of things: we have access to a substantial amount of GPU compute that we have invested in.

If you want to work in healthcare in particular, we have access to really hard-to-get healthcare data sets that can be really meaningful.

(Discover why one of our top engineers joined AKASA.)

And then, finally, we work with some incredible customers, like Cleveland Clinic, Johns Hopkins, and Stanford, that we have really great partnerships with, where stuff you build, you can literally deploy the next day.

If you want to build really meaningful things, you’ll want to find places like what we’ve created, where there’s such a tight synergy between what you build and seeing what you build in practice. And there are also certain folks that maybe don’t care as much about what happens in practice, and this is probably not a good place for them either. So that’s how we think about solving for talent.

(See open roles at AKASA.)

Julie

Well, Malinka, I think what you’re working on is one of those spaces that is obvious but very, very hard to execute in, which sometimes makes for the best type of problem to build a company against. So congrats on all your progress, and thank you for sharing all your thoughts about what’s latest and greatest in AI for revenue cycle.


Find out how AKASA's GenAI-driven revenue cycle solutions can help you.