AKASA AI Recovers Millions for Health Systems by Simplifying Complexity

Vanguards of Health Care podcast by Bloomberg Intelligence

The Gist

Health systems are under increasing financial pressure, with shrinking margins and complex payer demands. In this episode of Bloomberg Intelligence’s Vanguards of Healthcare podcast, Jonathan Palmer, host and healthcare analyst at Bloomberg Intelligence, sits down with Malinka Walaliyadde, CEO and co-founder of AKASA, to explore how generative AI is transforming the revenue cycle. Malinka shares how AKASA helps health systems reduce friction, capture the full patient story, and unlock millions in accurate reimbursement. From lessons in scaling AI to navigating payer relationships, this conversation offers practical insights for leaders rethinking mid-cycle strategy and preparing for a tech-driven future.

In the following transcript from the latest Vanguards of Healthcare podcast, host and Bloomberg Intelligence analyst Jonathan Palmer speaks with AKASA CEO and co-founder Malinka Walaliyadde about applying AI to healthcare’s toughest operational challenges. They discuss why the U.S. revenue cycle is uniquely complex, how large language models are reshaping coding and prior authorization, and what this means for the financial health of provider organizations. Listen to the podcast or read on for a full transcript.

 

Jonathan Palmer:

Welcome to another episode of Bloomberg Vanguards of Healthcare Podcast, where we speak with the leaders at the forefront of change in the healthcare industry. My name is Jonathan Palmer, and I'm a healthcare analyst at Bloomberg Intelligence, the in-house research arm of Bloomberg.

I'm thrilled to welcome today's guest, Malinka Walaliyadde, the CEO of AKASA. Prior to co-founding the company, Malinka was a partner at Andreessen Horowitz, or as many know it, a16z, where he helped build out their healthcare investment team. I'm looking forward to learning how his time in venture helped shape the vision for a company pioneering the use of AI to solve financial and operational complexities in revenue cycle. Welcome to the podcast.

Malinka Walaliyadde:

Thank you, Jonathan. I'm thrilled to be here.

Jonathan:

Why don't we start off with your background and maybe set the stage? Give us a quick mission statement of AKASA, and then maybe let's rewind and start with how you got into venture, and what was the origin idea for founding the company?

Malinka:

The quick summary of AKASA is we reduce friction in the financial back-end of healthcare using AI. This is a domain that has some incredibly challenging problems that require combining very, very large financial and clinical data sets. But our ability to solve these problems well has meant that the most progressive institutions in the world, like Cleveland Clinic, do extensive surveys of all the companies that solve these problems and have picked us to be their partners in solving it with them. So that's what we do at a high level. Happy to dive into some other parts of your question. (Read more about our work with Cleveland Clinic.)

Jonathan:

So, your background: You started in industry and then you moved into venture. What drew you to the venture community, and what drew you to healthcare initially?

Malinka:

Sure. Maybe on healthcare: I've always been interested in healthcare, initially from a more biotech perspective. Early on, we were gaining the ability to program DNA like software — you can actually write genetic code, print out sequences of DNA to express certain proteins, and do things in cells. And that was super fascinating and interesting.

What I eventually realized is that while that is a very important set of problems, with a bunch of important people solving them, you could also tackle broader, larger problems at a societal level in the healthcare provider–payer industry using more traditional software approaches. That’s what ultimately drew me into that world.

The venture side was not deliberate. I was working with a really great mentor who ultimately drew me into venture capital for some period of time. That’s when I had the privilege of working at a16z, where it was incredible being there and learning from some of the most amazing company builders in the world. Some of my time there, and prior, inspired me to do the work that I'm doing today at AKASA.

Part of the core thesis is that…I often like to say that American medicine is the best in the world, but the American healthcare system is not. But it’s wild because you have people from around the world coming to the United States to get care. And yet the average healthcare experience for someone here is way worse than in many other places in the world. It’s such a weird juxtaposition.

This is something I spent a lot of time digging into at a16z. What is causing this? A lot of it comes down to how we pay for healthcare in this country. And that is what is causing this friction. I wrote a blog post about this. We are trying to do way too much in terms of different types of payment models here. It turns out there are four different types of healthcare payment models around the world. What most countries do is pick one and just do that. It’s not that one is better than the others. They all have pros and cons. Most countries just pick one and get very good at that. In the United States, we try to do all four of those models at the same time, at scale. That causes so many challenges trying to accommodate all of these things at the same time, and it has ultimately led to this extremely complex system that we call the revenue cycle in the United States.

The analogy I like to make about how we think about solving this is self-driving cars, which we now have today. If we could have built entirely new roads specifically for self-driving cars, we’d have had self-driving cars way sooner — but we can’t do that. We have to use the infrastructure that exists, because it developed over a very long time. But with similarly advanced technology, you can still accommodate that infrastructure and still deliver an amazing, seamless experience.

That is playing out now in the self-driving world, and that’s what I think needs to happen in healthcare. It’s unlikely we will fully change the entire payment infrastructure in the country, but with sufficiently advanced technology, you can still deliver a seamless experience. And, with large language models, that’s now possible.

That’s the 1,000-foot view of how we think about the problem and how we’re solving it. There’s also the 100-foot view, which I’m happy to get into, and then there’s a view inches from the ground, which gets very tactical.

Jonathan:

Why don’t we unpack that? What you said really resonates with me. As an analyst, a lot of times I’m spending time with clients who maybe don’t know the healthcare industry that well, and I’m explaining why it was built like this. Why do we have those structures? If you started from scratch today, you wouldn’t build it like this. But this is what we’re stuck with. There are incumbents. And that’s how it just works, and we have to work within the framework of the system.

So maybe take it down from the high-level vision to maybe the 1,000 or 100-foot view.

Malinka:

That was the highest level of abstraction. Something more tactical is what we’re actually trying to do at a deep level in the revenue cycle. Simply put: the health system is trying to communicate the full patient story to the payer, in as much detail as possible. By explaining what happened to the patient as comprehensively as possible, they get full credit for the care that is delivered, and the payer pays them appropriately. That’s at the heart of what we’re trying to do. We call it many things in our industry — prior auth or coding or denials — but fundamentally, that’s what we’re trying to do.

In order to do that job well, you actually need to deeply understand the clinical record. Again, what you’re trying to do is fundamentally tell the clinical patient story to the payer. And if you don’t do that job well, it leads to a lot of friction. It leads to auths getting denied, care getting delayed, claims getting denied or underpaid.

And, by the way, this is now more important than ever because, very recently, with the bill that was passed, there will be about a trillion dollars in federal cuts to Medicaid. Health system operating margins, which are already low, will likely take another 20% reduction. It’s actually very important to get this right.

(Read this report on the impact of the Medicaid cuts on health systems.)

Now, the reason the friction exists today is not because of any specific gap in the revenue cycle staff. We work with these people every day, and they really care about the patients and doing their work well. It’s just extremely hard work.

To give you a sense of what an inpatient coder does every day…an inpatient coder’s role at a health system is to look at an inpatient encounter and convert all those documents into a set of codes. The documents that represent the patient's stay total about 50,000 words on average. 50,000 words. That is the length of The Great Gatsby. So they have to read the equivalent of The Great Gatsby and then convert it into a set of (on average) 19 discrete codes. They have to do this at least twice an hour. It's wild. It is wild how hard this is. And we expect them to do it with perfect accuracy. And of course they can't. It's very, very hard.

But then, in the last two and a half years, we've had this amazing new capability that we've all gained through large language models. That has been a complete sea change in what is possible from a technology perspective. I'm sure you probably use some LLM day to day, through ChatGPT or Anthropic or something.

And what you've probably seen is that these things are incredibly good at understanding complex language. And so now, for the first time, we have the ability for software to deeply understand a medical record. Previously, a medical record was basically a wall of text to software. Because, even with the best AI before, you couldn't actually understand it. Well, now you can.

That capability has unlocked so much for us in the revenue cycle. It means we can now fully tell that clinical patient story to the payer much more comprehensively than before. It is substantially reducing friction across the board, across multiple products we offer, like coding, CDI, and prior authorization. So that, at a high level, is how we think about solving this problem at hand.

Jonathan:

Maybe rewinding…your company's older than two and a half years. Where did the light bulb go off? And what made you decide that you were the right person to tackle this job with your co-founders?

Malinka:

We have been working on revenue cycle AI for multiple years, and we had been solving important problems in the revenue cycle. There were many other problems you could solve, even with the level of AI you had pre-LLM. But we always knew that for the hardest problems in revenue cycle, you actually need a deep understanding of the clinical record. Still, we built very valuable products. We were able to work with great customers, and we were able to show very good results. But we weren't touching the hardest ones.

When LLMs came out two and a half years ago, we looked at them and said, "Oh wow, now we can." We actually can.

Jonathan:

So it's a sea change in terms of what you can do from a capability perspective?

Malinka:

It is a complete sea change. It fully transformed what we could do. It supercharged our ability to develop a ton of value for our customers. And we were in a great place: LLMs came about at a great point in the company's journey, because we already had access to a lot of training data. We had a lot of distribution with health systems we could work with. We had talent. We had capital. So we were able to move extremely quickly to adopt LLMs and actually incorporate them into our products — either upgrading our products or building net new products that were literally not possible before.

(Read more in our blog post From Chaos to Clarity.)

And the thing about this domain is it's extremely esoteric. Revenue cycle is actually a very esoteric field. We had the privilege of having years and years of understanding the revenue cycle workflow, the experience, the data sets. And we were able to figure out how to combine both the revenue cycle financial data and the clinical data. Because it’s actually nontrivial to do this — to combine them in ways that make sense and then train entirely new models to do these revenue cycle functions well. I'm happy to talk about how we do that from an AI perspective.

Jonathan:

I'd like to learn a little bit more about that, because when I talk to people, they say, well, the EMR was created for billing. You're taking it one step further, to the clinical notes level, correct?

Malinka:

Yes. We do something I think is very unusual in revenue cycle, in terms of how we deliver our large language models. We fine-tune per health system. We discovered this is something you kind of have to do for the more complex problems.

What we see happening a lot is people use sort of a single large language model that they have either developed or are calling through OpenAI or something. They'll use that for all the health system customers they work with, with minimal customization.

Jonathan:

Are these your competitors who are trying to provide an off-the-shelf solution?

Malinka:

Yes. I don't know of anyone else who does it quite the way we do. So, yes, other folks typically are using either something off the shelf or something they might have developed themselves. But they're typically using the same thing with everyone, with some sort of minimal variation. And the reality is that it can work well, right? To give credit, it can work well. It works well for the simpler problems.

But what we are doing is solving some of the hardest problems. So, inpatient coding, for example, which is one of the products we have, AI for that is extremely hard. And what we found is that when you take these more general approaches and apply them there, it just does not work. We would've preferred it to work because it is easier to scale that approach.

But what we found time and time again is that the fine-tuned approach — where we take an internal model that we have and then additionally fine-tune it for every health system — works substantially better. So with every health system we work with, we actually build them their own custom AI model that is trained on their data. And this makes sense, because every health system has a lot of nuance. When you think about it, maybe you're like, of course this should work better. Every health system has a lot of nuance in how their providers document, how their payers track them. All of that is different, and all that nuance is actually captured in their historical claims data and their historical clinical data. It's there — if you can figure out how to scalably unlock it.

Jonathan:

That makes sense. I mean, I think about people being on an instance of Epic, but no two instances look the same across any two providers, right?

Malinka:

Exactly. That's correct. And even the providers themselves — how they work with that instance of Epic — can be different. And so basically, we fine-tune this model for them. Then we have a new base-level model for them that powers a bunch of our products. It can power basically various agents that we've built for them: a coding copilot, a CDI copilot, a prior auth copilot, all from the same base model per health system.

And once we figured out that was going to be more effective, we just figured out ways to do that more efficiently. So now, after a lot of work, we figured out how to do that efficiently. Now we can actually do that fine tuning for health systems very fast. So it's no longer that much of an operational issue. But it took some time for us to figure out how to do all of that.
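
To make the per-health-system approach concrete, here is a minimal sketch — not AKASA's actual pipeline — of what fine-tuning a shared base model on one health system's historical data could look like, assuming a generic open-weight causal LM, Hugging Face tooling, and LoRA adapters. The model name, prompt format, and hyperparameters are illustrative placeholders.

```python
# Illustrative sketch only. Assumes: a generic causal LM base model, one
# health system's historical encounters as (documentation -> codes) pairs,
# and LoRA adapters so the shared base model stays frozen.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "internal-base-model"  # hypothetical identifier


def build_examples(encounters):
    """Turn one health system's historical encounters into prompt/target text."""
    return [
        {"text": f"### Documentation:\n{e['notes']}\n### Codes:\n{', '.join(e['codes'])}"}
        for e in encounters
    ]


def fine_tune_for_health_system(encounters, output_dir):
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    # Only small adapter weights are trained per customer; the base stays frozen.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM"))

    dataset = Dataset.from_list(build_examples(encounters)).map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=4096),
        remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

    model.save_pretrained(output_dir)  # this health system's custom adapter
```

Adapter-style fine-tuning is one common way to make per-customer training affordable: the shared base model is reused everywhere, and only a small set of adapter weights is trained and stored for each health system.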

Jonathan:

Maybe diving a little deeper on that. When you sign up a new client, what does it entail to implement your solutions? And I guess maybe what's the sales cycle for getting somebody on board? I know no two instances are the same again, but generally how long does it take to get somebody across the finish line?

Malinka:

The sales cycle is the typical enterprise sales cycle. It can go from six months to a year — sometimes longer for very large groups — but that's typically the range in terms of how we work with folks. Then there are sort of two layers. There's the data we need for training for them — that's a training dataset. And there are the integrations we do with the EHR, so we can seamlessly retrieve information and push it back into the EHR. Historically, this has not been a very large lift for health systems as far as we can tell, because we've figured out ways to plug into things they should already have up and running.

But your question actually brings up another point, which is that it's not just the AI layer. Giving a health system an AI model on its own is not going to do much. You actually need a UI layer on top of that. We're operating on both layers. We are innovating and building our own AI models, but we've also built the actual application layer — the interface a user would use to interact with the AI.

And this is something else where we believe the years and years of deep revenue cycle understanding and empathy for the revenue cycle user helps us, because we have been able to build interfaces that these revenue cycle staff really trust and want to use. And we've figured out ways for them to work with the AI so that they truly see it as a copilot. The AI is not a black box. We really have invested heavily in having the AI explain itself. It provides justifications for everything it does. It shows confidence levels. So it doesn't feel threatening to a revenue cycle staff member when they're using it.
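
As an illustration of what a "not a black box" suggestion can look like in practice, here is a small, hypothetical data structure — the field names and threshold are assumptions, not AKASA's interface — showing how each AI suggestion can carry a justification and a confidence level that a coder can review.

```python
# Hypothetical structure for a single copilot suggestion. The point is that
# every suggestion carries the evidence behind it and a confidence level,
# so a coder can audit it rather than trust a black box.
from dataclasses import dataclass


@dataclass
class CodeSuggestion:
    code: str            # e.g., an ICD-10-CM code
    description: str     # human-readable meaning of the code
    justification: str   # documentation excerpt that supports the code
    confidence: float    # model confidence between 0.0 and 1.0


def needs_human_review(s: CodeSuggestion, threshold: float = 0.90) -> bool:
    """Route lower-confidence suggestions to a coder for a closer look."""
    return s.confidence < threshold


example = CodeSuggestion(
    code="E11.9",
    description="Type 2 diabetes mellitus without complications",
    justification="H&P: 'Ten-year history of type 2 diabetes, diet controlled.'",
    confidence=0.97,
)
print(needs_human_review(example))  # False: confident, but still shown with its evidence
```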

Jonathan:

That's great. Could you walk us through an example of a client, what they were doing before, how the implementation worked with you guys, and then maybe what some of the outcomes were from a KPI perspective or savings perspective?

Malinka:

Sure. So I'll talk about two things. I'll talk about some of our coding AI products and also a prior authorization AI product.

On the coding side, many folks today use us as an AI auditor. They do their coding work, and then our AI reviews it and helps them make sure they're not missing anything or coding things inaccurately.

In terms of outcomes, something like that means you are more accurately capturing the complexity of the care that you deliver to a patient. A lot of these health systems deliver very complex care, but sometimes they don't fully represent that in the ultimate claim that's sent out. They might just miss certain things that happened.

And so doing that correctly means meaningful improvements in clinical quality scores for them, as well as making sure they get full credit for the care delivered through improvement in reimbursement. At some places, this can literally mean tens of millions of dollars of additional correct reimbursement that they should have gotten and are now getting, as well as substantial improvements in the clinical quality scores that go into where they rank on various lists, US News rankings, and things like that.

And the reason they're able to do these things better with the AI is…remember those very long 50,000-word documents I just talked about? It's very hard for a human to do that. The AI can parse through all of that in under two minutes, with basically full reading comprehension. As it goes through it, it stitches together an internal clinical picture of what happened to the patient. It does that in under two minutes, at a high level of detail, and then can help the human figure out what they're missing.

On the prior auth side, similarly, one of the core things you are trying to do is validate to the insurance company that the procedure you're going to do is legitimate. Say, yes, the patient needs this. I need you (the payer) to authorize this because this is the patient's clinical history. And what the payer often does is ask them questions, like: Why are you doing this here? Did you try this or this other thing?

And we've built basically a clinical research assistant here to help the staff member answer those extremely quickly. And it's been highly accurate. With one of our most recent customers that has been using this, we have the human basically rate the AI in terms of accuracy. It was 99% accuracy — 99% of the time, the human agreed with the AI's suggestion. And actually, in some cases, when we did head-to-heads in the past, we've seen (in coding, for example) the AI be better than a human on a bunch of cases. That's why it can serve as an auditor.

Jonathan:

That makes a lot of sense intuitively, just given the size of the data set. So you described it as a kind of QA/QC process, almost, where the AI is assisting the human coder. Is there a piece of the business that's automated? And I imagine you run the gamut from low-acuity cases, where automation probably works very well, to higher-acuity cases, where you need that human touch as well. Is that the right way to think about it?

Malinka:

It is a good way to think about it. We are working toward fully autonomous coding and other activities for the work that we're doing. Bear in mind, though, the work that we're doing is the hardest work, right? The coding work we're doing is not really touched by anyone else in the industry. No one else is even trying to autonomously code that. It's far more complex than what people are trying to do with, for example, physician coding or outpatient coding, which are very simple types of coding. We don't really do those. So we are actively tackling the hardest stuff, where there is a very high threshold. The lower-acuity stuff that we're doing is far more complex than the higher-acuity stuff that anyone else is doing.

We are taking a very thoughtful approach to automation. Internally, we're doing a lot of R&D against that. We're seeing areas where we can actually fully automate. But there is also a social element to this, right? For very complex cases, even health systems have a higher threshold, and so we're working with them to get to that point, but that is absolutely the end state.

Jonathan:

How does it work? And this is where I get out of my comfort zone, but as I think about your solution — an AI overlay or platform on the provider side — when it meets the AI solution on the payer side, is there friction there? Can there be a vicious cycle between the two different systems?

Malinka:

The output that we produce should be substantially more compliant than what humans do. It should actually be better. The way I think about it is: let's say you have a hundred human coders. Each of those hundred human coders is individually making decisions based on their own personal experience, how their day is going, their training. That output is so varied and random, versus a single AI that is trained on the most up-to-date information, is always doing things correctly, and is always justifying itself. That's going to be far more compliant.

Jonathan:

And then are those outputs all customized based on who the payer might be and their requirements? Is it trained on knowing that, I don't know, Aetna wants this, and United wants that?

Malinka:

It depends on the domain. There are certain domains in revenue cycle where you're not supposed to make changes based on the payer — you're just supposed to do it a consistent way. So it depends on the type of product. Where it is necessary, we can do that. And otherwise we don't, because we shouldn't.

Jonathan:

Maybe at a high level, can we talk about the revenue model? How do you guys make money and how do you scale?

Malinka:

We have two models: subscription and performance-based. In the performance-based ones, we basically say, look, we are taking all the risk. And we say that because we're confident enough. We will show you these types of results — improvements in accuracy, things like that. And if we do that, then we get paid these amounts.

There are other models where we have a subscription fee that is based on some unit, usually based on the size of the facility. But those are the two models that we bring to bear.

Jonathan:

Is there any preference in the marketplace between one or the other or are you seeing a move from one to the other?

Malinka:

It's interesting. We lead with the performance-based model, and it's very compelling for a lot of folks because there's zero risk for them. They don't have to make a bet that something is going to work the way we say it will work. We literally just say, look, if we don't hit these targets — if we don't deliver — we don't get paid. We're putting our money where our mouth is. It's also an interesting indication of confidence in the product.

So that's typically what we lead with and typically what people find compelling. And then some folks maybe want more predictability and if so, then we have the other model.

Jonathan:

How do you build the customer base over time? Is it just prospecting? Is there something that you lead in with as a tip-of-the-spear to get people on board, maybe one module or one pain point, and then land and expand?

Malinka:

The higher-level thing there is that the way you grow in our industry is just by delivering great products. I mean, it's actually very simple. If you deliver a great product, your customers will basically sell it. Them talking about how happy they are with something is the best marketing, actually.

And that's what you should do as a company: build great products, and the rest of it (with a little bit of help) will work out.

Now more specifically to your question, we have a module that is an extremely easy add-on for people that are doing, for example, coding work. There's a way we can have it just review what the human coders are doing without necessarily changing the upfront workflow. And so it's become a way to very easily plug in, create value, give them a taste of how powerful it is. And that's been a very good tip of the spear.

Jonathan:

Maybe going back, one of the things you mentioned was kind of building a product. If we go back to your founding and you've been around for, I guess, what, five or six years now, what have been the key milestones, I guess, in your product journey over those five or six years? And as you think maybe a couple of years down the line, if there's anything you can share from a roadmap perspective, what's the next thing?

Malinka:

The single biggest thing is the advent of large language models. How we incorporated them has fundamentally changed everything — not just for us, but for everyone everywhere.

There are company milestones that are fun, like your first check. I remember the first time a customer paid us. They were just going to do a bank deposit or something, and I said, "Can you send us a physical check, please?"

Jonathan:

You didn't ask for a contest one or A Price Is Right one, did you? A big check?

Malinka:

Actually, that's a good idea. We should have asked that.

It was our very first one. It was for $50,000 or something. I found out that day that you can mobile-deposit $50,000. I didn't want to hand our check to the bank and lose it, so I was like, "I'm going to see if I can just take a picture of it and do a mobile deposit." So we still have the check, and we have it framed in our office. That was fun.

Some of the big product milestones…the first big hurdle was figuring out how to integrate with any EHR effectively.

And I think this is something that a lot of technologists outside of healthcare miss when they're getting into healthcare: you can build a great product, but if you haven't figured out how to integrate into the EHR, it's DOA, right? It's not going to do well, because health systems expect you to integrate without a ton of lift for their IT team, and they expect you to be able to have the conversation with the health system IT team so you can guide them.

So, getting to the level where we can effectively and extremely efficiently build confidence with health system IT teams — no, we know exactly what we need to do, and here are the steps you take — that was a big, big milestone. We talked about the adoption of LLMs, and then it's also really cool seeing expansions of the platform. What we're building is really a platform approach, where we have a single GenAI platform that can power multiple modules effectively.

Seeing the platform thesis work out where the thing you started with actually helps you do the second thing and that combination gives you a “one plus one equals three” is very cool. Those are some fun milestones along the way.

Then you asked about what we want to do next, and it is actually just continuing on this platform approach. We found that there is a very virtuous, positive cycle in the combination of the various products that we have. A lot of health systems want to solve problems in a lot of different areas, and we actually have products in many of those areas, and most of those products talk to each other. And so we can deliver a very thoughtful package in the revenue cycle, starting with the mid-cycle: coding and CDI. We can be an AI partner to solve problems across their biggest areas in one shot.

Jonathan:

Maybe just going on a tangent here, can you talk about the team a little bit? How did you and your co-founders coalesce, and did you always want to be a CEO?

Malinka:

I knew I wanted to solve this problem for quite some time. I'm not sure why, but it sort of naturally was like, yes, I'll drive. The co-founders were folks that were in my network as either friends or friends of friends who were talented. People that people spoke highly of. And they'd all run into these healthcare problems through different paths of life and that's how that came together.

And it's fun to think that at some point you were just a couple of people working in a WeWork equivalent. Growing from there to the team we have now has been a really fun journey.

Jonathan:

Has it been interesting? I think about your background on the venture side, where — and this is very simplistic — you're writing the check and helping provide a network and ideas. How has the transition from that role to this role been? And whose leadership style do you model your own after?

Malinka:

What I'd say is that as a founder, you feel everything so much more. The highs are higher, but the lows are lower. You can literally have a single day where you start the day feeling amazing because you closed some big deal, and then end the day feeling terrible because you lost a recruiting candidate or something. And you feel all of this so much more deeply.

But I love it because you feel it more deeply. You feel so much more ownership of what you're doing, and you are so much closer to the actual problem you're trying to solve, right? You are actively solving the problem. You're living it, versus observing other people solve it, which is also interesting.

And with a16z in particular, it's been fun working on both sides. I was there on the investor side, and now I have the privilege of continuing to work with them on the other side. They have been investors in us across a few rounds and they are a great partner to work with. But I will say, while that was an amazing experience, I truly love what I'm doing now. I would not be doing anything else.

Jonathan:

You've done a couple rounds, and I believe it's been a couple of years since you've raised any capital. Thinking about that last round, what did you use the capital for and what are the requirements for the business going forward?

Malinka:

We have continued to use it for R&D. We used a lot of that capital to incorporate large language models, because the work that we're doing with LLMs is not cheap either — it requires access to expensive and large sets of GPUs. So that was a lot of it. Obviously, there are also sales and marketing types of things.

It's a fairly efficient business. We haven't thought about capital for a while, just because we continue to be fairly well capitalized, and so my main focus right now continues to be R&D. There's still a lot more we can build, because it's still a fairly green field in these very complex problems that no one else has figured out how to solve. I feel like we have a fairly meaningful head start.

It's actually interesting to think about it this way. If you were a company that started just as LLMs came out, it would actually be very hard to adopt them meaningfully, because you need some baking time as a company to even understand the types of problems that exist in this domain, to get access to the data sets you need, and to get access to customers to test these things with. We were able to move extremely fast to do all of those things. And there were a lot of companies, I think, in that era that didn't actually make the changes rapidly.

And so something I'm really proud of our team for is their ability to really lean in and do that quickly and really lean into it, and that has led to some remarkable outcomes.

Jonathan:

When you say R&D on new products, is that human capital that's required or is it buying new data sets or expanding the aperture on the models? Can you just walk me through that process a little bit?

Malinka:

It is all of the above. It's human capital. It's all of these different things. But, yes, it's basically: how do we get more accurate and faster with the AI? The scale of data that we have to process for a single task is extremely large — larger than most tasks people are doing with LLMs today. If anyone's interested, we are recruiting for AI. I have to say it.

Jonathan:

We can make this a recruiting commercial. That's fine.

Malinka:

Yes. If anyone is listening and is interested in the types of problems we’re solving, we actually just published a blog post about a week ago on how we are using cutting-edge LLM techniques to understand very, very long records. There are some very interesting problems there for our research scientists to solve.

Jonathan:

What's an example of one of those longitudinal records? Is it somebody who's had multiple instances of cancer or is it just somebody who's been actually in the same system, which is rare, I think, but for maybe 20, 30 years.

Malinka:

It's typically long inpatient stays. If you have a 10- to 20-day stay, that is a very long set of procedures, and a lot of different things have happened to that patient — a very complex story that creates a ton of documentation, which you have to go make sense of.

Jonathan:

I know I'm thinking about every aspirin and bandage and infusion, those sorts of things. And each one of those creates a digital record.

Malinka:

Correct. There's a lot, actually. I mean, there are records we've seen that are — I said 50,000 was the average — 300,000 or 400,000 words. It's wild how long these things can be. And even today, most off-the-shelf things literally cannot even incorporate them. So we had to figure out ways to do that scalably, and we have.
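
For readers curious how software can work through records that exceed any model's context window, here is a generic, minimal sketch of one common pattern — overlapping chunks, per-chunk extraction, then stitching the results together. This is an illustration under stated assumptions, not AKASA's published technique; `extract_clinical_facts` is a stub standing in for a model call.

```python
# Generic long-document pattern (an assumption, not AKASA's method): split a
# very long record into overlapping chunks, extract structured facts from each
# chunk with a model, then merge the facts into one clinical picture.
from typing import Iterator, List


def chunk_words(text: str, chunk_size: int = 8000, overlap: int = 500) -> Iterator[str]:
    """Yield overlapping word-based chunks so context isn't cut at boundaries."""
    words = text.split()
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        yield " ".join(words[start:start + chunk_size])
        if start + chunk_size >= len(words):
            break


def extract_clinical_facts(chunk: str) -> List[str]:
    """Stub for a per-chunk model call; a real system would prompt an LLM here."""
    return []


def build_patient_picture(record_text: str) -> List[str]:
    facts: List[str] = []
    for chunk in chunk_words(record_text):
        facts.extend(extract_clinical_facts(chunk))
    return facts  # merged facts a downstream coding or CDI step can reason over
```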

Jonathan:

So you're on the front lines of this really sea change in the use of AI. What gets you excited that's maybe outside of the four walls of your company? When you look out at the landscape and see what's being done. Are there any technologies or problems that are being tackled that you really think are going to be fundamental changes to the system?

Malinka:

There's interesting work happening on the payer side. We don't work with payers, but it's very interesting seeing people incorporate these approaches to…I'm just speaking from personal experience, but I would love a more up-to-date provider directory — the out-of-date ones are very annoying.

There are very interesting price transparency products being built that help you more accurately predict what someone's supposed to pay for something. That I think is very interesting. The companies that are doing ambient listening are interesting. There are other companies that are installing cameras in operating rooms and things to be able to use those to provide guidance or track what's happening through computer vision. Those are all really interesting things that are happening.

Jonathan:

When you talk to your customers — because those are all great examples, whether it's ambient or automation, or I'm sure there's a host of others we didn't touch on — are they inundated with these choices? And I know I'm putting you in their shoes, but how do they tell you they think about ranking which solutions to implement now and which go on the back burner for a couple of years? Do you have a good sense of their thought process? And I know everybody's different.

Malinka:

I mean they for sure are getting inundated. That is 100% the case. And I don't envy them. It is hard. It is hard for them to differentiate because they shouldn't have to be experts in AI to tell the difference between things.

The way we think about it is that so many health systems will have an AI expert to help the business owner make decisions. And we love it when that happens because, unfortunately, in our world, there are a lot of people calling everything AI when it's really not, right? It really isn't. And it's sometimes very hard for a business owner to tell the difference. So when there is a true person who understands AI, we find that refreshing. We can actually go extremely deep. And our recommendation is: if you have that person, have them go a few layers deep and see if the vendor can keep up. If the vendor can't go more than one layer, it's a problem.

But then beyond that, it's actually just the business results. That's fundamentally what matters. Going back to the comment I made earlier, the most effective way to do that is to just deliver great results to your existing customers and then have them connect. That truly is the best thing, right? You cannot fake that. You actually have to be delivering good results. If someone is interested, just tell them to go talk to these people — that is the most effective thing. That is what I recommend to folks: just talk to existing customers. And if you want to do more, actually go into the AI if you aren't convinced it's real. Doing some combination of those things should weed out a lot of stuff.

Jonathan:

Where do you think AKASA is going to be five years from now? And I guess when you think about some of the milestones in the future, what are they? I mean I can imagine there's some financial milestones or product milestones. But as you think about the future, what are you really excited about for the company?

Malinka:

I truly do envision the world where we don't just have the best medicine in the world, but also the best healthcare system in the world. We have to correct that. What that means is communicating the clinical patient story as quickly and as comprehensively as possible to the payer, in as close to real time as possible. Getting to that will solve a lot of this friction that we have today.

And, historically, we just did not have the tools. We just did not have the tools for this. Now we do. So I think it's just a matter of time, and I think we will be one of the core folks that are a partner to any health system that wants to get to that point — close to real-time communication between the provider and the payer, with extremely information-rich streams going between them. That is, I think, what will happen.

Again, it's just a matter of time. And what it ultimately means is there is much less confusion for the patient and even the provider, as they go through their healthcare journey.

That is, I think where we're headed and what we're doing. We should be a major part of that.

Jonathan:

Thinking about that journey for your customers, it made me think about the volatility we're seeing from a policy perspective. Is there anything that you guys are paying a lot of attention to from a regulatory standpoint or a policy standpoint? And I don't know, just spitballing here, is there something that Washington can do to help facilitate some of the work you're doing?

Malinka:

There is. There's what I would like, and then there's what's realistic. One thing that happened recently — that for sure is going to have an impact — is the massive Medicaid reduction. That is just going to hit margins. Take the prior auth side, for example: it's currently very opaque. And I understand why payers do this. They want to make sure a procedure is being done because it's actually necessary and not frivolous. And I get it. It obviously doesn't feel great on the patient side to feel like you have to get permission. But we can use AI to help with that.

One thing that would help a lot is if the payers were a bit more transparent about what those policies were. It's strangely hard to actually get access to “what do you as a payer think is clinically necessary”. If that could be a bit more transparent, it would help along the entire chain.

Currently, you have to wait until the point in time when you're trying to do something, when they'll tell you, which is not great. Now our product solves for that because at that point in time, it makes it super easy to do. But if you could get access to that earlier, you could move this upstream. So that's one example. Yeah.

Jonathan:

What are some of the other ones that are maybe top of mind? Are there any rules or regulations coming out of CMS or the like, or CMMI, that you think might catalyze this?

Malinka:

There has actually been activity on prior auth for a little while. Another thing on the prior auth side is making it easier to communicate information between providers and payers. Weirdly, there isn't a very effective way to do this communication between a provider and a payer. Claims are actually relatively efficient, but for other things like prior auth, there isn't — and I guess it makes sense, because prior auth is a newer thing.

First, the core claim payment had to happen, and that's good. Later, prior auth became a thing, and I don't think people really figured out how to make that super efficient. But today, it still works a lot through fax. Fax is a very common mode of exchanging information for prior auth.

One of the wildest things for a health tech person is you could be in a meeting where you're talking about some very sophisticated AI to solve something and then your next meeting is, okay, so what is a fax integration? It's equally important, actually.

But anyway, it is a very common information exchange. Another one is that people call a lot still. Literally, people will do stuff over a phone call, which is also crazy. And then stuff on a website. These are common modes of transmission, none of which is very efficient. It would be great if we could switch to an API-based approach. If there were just APIs that payers exposed, saying here is how to communicate with us. That's how many other industries work. Just call this API and exchange information through it.

Jonathan:

Or if it was mandated even, right?

Malinka:

Yes, exactly. If payers were mandated to expose APIs, that would be great. It would massively reduce the innovation debt. The same goes for denials. Actually, even exchanging information on denials happens through most of those other methods I described: fax, call, website. If every single mode of communication could get API-ified (if that's a word), that would be amazing.
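
To illustrate the kind of exchange being described, here is a hypothetical sketch of what a payer-exposed prior auth API call could look like from the provider side. The endpoint, payload fields, and authentication scheme are invented for illustration — no such universal API exists or is mandated today.

```python
# Entirely hypothetical: what a payer-exposed prior auth API might look like
# from the provider side, in place of a fax or phone call. The endpoint,
# fields, and bearer-token auth are invented for illustration.
import json
import urllib.request


def submit_prior_auth(payer_api_base: str, token: str, request_body: dict) -> dict:
    req = urllib.request.Request(
        url=f"{payer_api_base}/prior-auth/requests",  # hypothetical endpoint
        data=json.dumps(request_body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g., {"status": "approved", "auth_id": "..."}


example_request = {
    "patient_id": "12345",
    "procedure_code": "27447",      # CPT: total knee arthroplasty
    "diagnosis_codes": ["M17.11"],  # ICD-10: unilateral primary osteoarthritis, right knee
    "clinical_summary": "Failed six months of conservative therapy; imaging attached.",
}
```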

Jonathan:

Well, we'll have to wait for that one. I'm not holding my breath that we're going to do away with the fax machine, but I'm hopeful.

Malinka, we're coming to the end of our time and one of the ways I like to wrap these conversations up is just to maybe dive a little bit deeper on a personal front. And I always ask everybody if there's a life lesson, either from your personal life or something you've learned in your professional experience that really drives your day-to-day. Does anything come to mind?

Malinka:

Yes. There are a couple things. One is an amazing book that everyone should read is The Hard Thing About Hard Things, by Ben Horowitz, one of the co-founders at a16z. It's just a great book for any founder to read because it's very real and raw. And it will help. Especially when you're going through a particularly tough time or trying to figure out some hard decisions.

The other thing I've figured out, just through experience, is what I look for most in candidates. There's a lot of generic stuff — they have to be hard workers, no assholes, and things like that. But at a higher level, the three things I think about the most now are: conviction, taste, and grit.

Conviction is the first one because, to be effective, everyone has to make decisions. You have to make a lot of decisions, and you need people to be able to get conviction on a decision quickly and actually make it, because a lot of people get stuck in analysis paralysis.

The taste one comes in because, yes, we want you to make decisions quickly, but we also want you to generally make good decisions. You have to make good decisions. You do not have to be perfect. Part of the tradeoff of high-conviction, fast decisions is that you will make some bad calls. But on average, you should be making good decisions.

And then finally, the grit thing, I'm realizing, is actually one of the most important. Because let's say you do things one and two — the reality is that even for decisions that were objectively good to make, you will end up with suboptimal outcomes a bunch of the time. You could have done the right thing, but the actual outcome may be bad. And you need the grit to work through that, get back into the right mental state, and repeat the cycle. That last one, I think, especially as an entrepreneur, is probably the most important of the set. But those have sort of crystallized as things that I think about a lot.

Jonathan:

That's a great book, and I can foresee maybe some t-shirts for your team with CTG on them, or something of the like. You'll have to come up with a moniker. So, Malinka, thank you so much for your time today. We covered a lot of ground, and you really shared a lot. I appreciate it. And I guess we'll wrap up here.

So thanks again, and that's Malinka Walaliyadde, CEO and co-founder of AKASA. Thank you so much for joining us on our latest episode of Bloomberg Intelligence Vanguards of Healthcare podcast. Please make sure to click the follow button on your favorite podcast app or site so you never miss a discussion with the leaders in healthcare innovation. I'm Jonathan Palmer. Until next time, take care.

AKASA

Sep 9, 2025

AKASA is the preeminent provider of generative AI solutions for the healthcare revenue cycle.