AKASA
April 19, 2023

The Gist

Frances Haugen, a former Facebook product manager, testified about Facebook's failures to moderate content on its platform. Artificial intelligence was supposed to shoulder the bulk of the moderation burden, yet many problematic posts still got through. Was the failure AI's, or the result of too little human moderation and judgment?

The Jetsons imagined a world of flying cars, robot housekeepers, and sheer convenience. While Rosey the Robot isn't cleaning your house yet, tech leaders like Elon Musk and Mark Zuckerberg, and companies like OpenAI and Google, continue to envision and promise an AI-powered world: a world of rapid advancement where AI moderates misinformation and hate on the internet, and cars autonomously drive people around.

But are we there yet?

Frances Haugen’s testimony and more recent headlines on moderation failure showcase a different world than the one envisioned by many tech leaders. 

Where Social Media Went Wrong With AI

Haugen, a former Facebook product manager, testified before a Senate subcommittee about the company's failure to police hate and combat misinformation. The situation is noteworthy because Mark Zuckerberg stated as far back as 2019 that the company had people and algorithms in place to protect against misinformation and hate-based content. Yet, despite these algorithms, the company caught only 3–5% of hate-related content.

This points to a broader problem in the tech industry: too much focus on the promised vision or end state of what's possible with AI, and too little willingness to invest the time, money, and, yes, human capital required to make that vision a reality.

Rarely can a company shortcut its way to market success, and AI is no different. A vision-first focus neglects what actually makes AI worthwhile: feeding it useful data and teaching it how to deal with outliers.

From self-driving cars to social media, the big shiny promises of AI have fallen short, due in large part to the grandiose promises made by technology marketers. What this has shown is that responsible tech leaders need to ground their companies' promises in reality, and that humans must be involved in the process if we want AI to perform at peak productivity and be genuinely helpful.

You need a human in the loop.


What Social Media Can Learn from Healthcare

We’ve seen how much societal damage social media can cause. However, you’d be hard-pressed to find higher stakes than those in healthcare, where mistakes can cost lives or result in financial ruin (medical bills are the number one cause of bankruptcy in the United States).

But healthcare is still actively embracing AI in a practical and safe way, with a few essential practices in mind.

Keep humans in the loop

The human-in-the-loop approach isn’t new. It was pioneered by engineering and software companies to make machine learning training more efficient. The idea is that AI and humans work in tandem, with humans helping the AI get off the ground and overcome hurdles along the way.

If you’re relying on AI to police your business, you need humans to follow up on the reports filed by your AI. Healthcare uses humans in the loop to successfully embrace forward-thinking technologies, including AI. 

One increasingly common use of AI in healthcare is screening for breast cancer. While this has recently made headlines for all the wrong reasons, there’s hope. Harvard Medical School teamed up with Beth Israel Deaconess Medical Center to use AI with humans in the loop to improve breast cancer screening. Scanning pathology images on its own, the AI had an accuracy rate of 92%. Humans working without AI had an accuracy rate of 96%. But once experienced pathologists trained the AI to identify cancer cells in positive slides, humans and AI working together reached an accuracy rate of 99.5% — almost perfect, precisely because humans played a more prominent role.

Another area where this human-in-the-loop approach is used is in the healthcare revenue cycle — an area notoriously complex and prone to human error. This is the technology on which we built AKASA.

Automating these back-office processes means that medical billing can be faster, more accurate, and more efficient. Using technology to handle mundane, time-consuming tasks means healthcare provider staff can focus on more revenue-generating and patient-facing tasks. By leveraging AI and a human-in-the-loop approach, hospitals and healthcare systems can truly automate complex, dynamic workflows.

It’s not uncommon for a healthcare organization to add a popup in their electronic health record (EHR) system or change the color or placement of a field on a landing page. For AI, these are all-new environments. When this happens, a true AI-based technology gets an alert, triages it to a human, and the human solves the problem. From there on out, the AI knows how to deal with that issue, sans human. It can truly learn and progress, getting smarter and more efficient with every task.
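In rough pseudocode, that alert-triage-learn loop might look something like the sketch below. This is a minimal illustration under assumed names (the `Agent` class, state signatures, and resolution strings are hypothetical, not AKASA's actual implementation):

```python
# A minimal sketch of human-in-the-loop exception handling. All names
# here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    # Resolutions the AI has already learned, keyed by a signature of
    # the UI state (e.g., a fingerprint of a new popup or moved field).
    known_resolutions: dict = field(default_factory=dict)

    def handle(self, state_signature: str) -> str:
        if state_signature in self.known_resolutions:
            # Seen this environment before: act autonomously, sans human.
            return self.known_resolutions[state_signature]
        # All-new environment: raise an alert and triage to a human.
        action = self.escalate_to_human(state_signature)
        self.known_resolutions[state_signature] = action  # learn it
        return action

    def escalate_to_human(self, state_signature: str) -> str:
        # Placeholder: in practice this would queue a task for an expert
        # and wait for their resolution.
        print(f"Escalating unfamiliar state: {state_signature}")
        return "dismiss-popup-and-continue"


agent = Agent()
agent.handle("ehr-popup-v2")  # first time: triaged to a human
agent.handle("ehr-popup-v2")  # every time after: handled autonomously
```

The key design point is the second call: once a human has resolved an unfamiliar state, the AI handles it on its own from then on.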

Human experts are vital for properly training the AI and ensuring that it can handle the outliers that are an inevitable part of any industry — social media, healthcare, and more. Humans informing AI on decision-making can provide rapid resolution of unusual issues so that next time the AI can adapt and handle the problem on its own.


~ Varun Ganapathi, Chief Technology Officer and Co-founder at AKASA

For social media, the above process might look like this: the AI picks up on a possibly hateful image or piece of misinformation and flags it for a specialist. The specialist confirms that this is what the AI should be looking for. The AI learns, the human steps back, and life carries on (with less hate and misinformation).

Cut the human out of that process, and the AI is left guessing — which is where you get the 3–5% capture rate for Facebook’s algorithm mentioned above. 
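To make that loop concrete, here's one way a flag-and-review pipeline could be sketched. The scoring function, thresholds, and queues below are hypothetical stand-ins, not any platform's real moderation system:

```python
# A minimal sketch of the flag-review-learn loop described above.
review_queue = []   # posts awaiting a human specialist
labeled_data = []   # specialist verdicts, used to retrain the model


def score(post: str) -> float:
    # Stand-in for a real model: probability the post violates policy.
    return 0.7 if "miracle cure" in post else 0.1


def moderate(post: str) -> str:
    p = score(post)
    if p >= 0.9:
        return "remove"            # confident violation: act autonomously
    if p >= 0.5:
        review_queue.append(post)  # uncertain: escalate, don't guess
        return "pending_review"
    return "allow"


def record_verdict(post: str, violates: bool) -> None:
    # The specialist's decision becomes a training example, so the model
    # can handle similar posts on its own next time.
    labeled_data.append((post, violates))


print(moderate("try this miracle cure"))       # -> pending_review
record_verdict("try this miracle cure", True)  # human teaches the AI
```

Without the middle branch, the system either over-removes or quietly misses most violations.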

Watch Ganapathi’s TechCrunch talk on how machine learning and human-in-the-loop approaches are expanding the capabilities of automation.


AI is never finished

There’s no “complete” state for true AI. Humans are, or should be, constantly learning. AI is no different.

It’s a common misconception that AI can be deployed and left unsupervised to do its work, with little consideration for our ever-shifting and evolving environments. Would a manager do this with a human worker? No. Why should AI be any different?

For AI to be the most useful and productive, business leaders and technologists should be training the AI on what needs to be done. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. 

The rules of healthcare are always changing, and AI must change with them. If there’s a new regulation to comply with, an AI platform needs to learn it. If there’s a better way to do a process, you improve it. Simply automating tasks isn’t enough; we need to ensure the tasks are done as well as possible.

For social media companies, this means keeping your civic integrity team around for the entirety of the company’s existence and ensuring your AI is learning as much as possible from that team. Just as in healthcare, the landscape is constantly shifting: misinformation evolves, and AI needs human help to adapt.


Data labeling is a must

AI models need constant training on clean, relevant data to function. Collecting, categorizing, and cleaning that data is time-consuming, mundane, and tedious.

It’s unlikely you’ll find many engineers thrilled about data labeling. It isn’t sexy or flashy compared to building complex AI infrastructure. But it’s foundational. Prioritize data labeling to ensure your AI is not only working but working as effectively as it can.
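As a toy illustration of how labels become model behavior, here's a sketch using scikit-learn. The example posts and labels are invented for illustration; a real moderation or billing dataset would be far larger and curated by domain experts:

```python
# A toy example: a classifier is only as good as its human-provided labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled examples: 1 = violating content, 0 = benign.
texts = [
    "this vaccine contains microchips",   # misinformation
    "wishing everyone a great weekend",   # benign
    "group X doesn't deserve rights",     # hate
    "check out my new recipe blog",       # benign
]
labels = [1, 0, 1, 0]

# Clean labels in, usable model out.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the vaccine has microchips in it"]))  # expected: [1]
```

Mislabel those four examples and the model will dutifully learn the wrong thing, which is why labeling deserves expert attention rather than being treated as an afterthought.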

With proper data labeling, healthcare organizations can automate the work of numerous revenue cycle specialists and reduce billing errors. They can also potentially train AI to work with doctors on improving breast cancer detection. 

With correct data labeling, social media organizations can eclipse the 3–5% capture rate for misinformation and hate, and make the impact they claim they want to make. Without proper data labeling, it’s simply not possible for AI to perform in the manner social media companies have promised to the public.

Social media doesn’t face the same regulations as healthcare, and whether or not it should is a discussion for another article. But, if healthcare — an industry sometimes characterized as slow to change or innovate — can embrace AI while staying compliant with HIPAA, HITRUST, and countless federal and state-level regulations, social media can too.

Listen to this podcast to learn what the future holds for AI in healthcare.


Remember: The Intelligence Is Artificial

Technology has advanced in leaps and bounds since the era of the Jetsons. Our cars aren’t flying yet, and they’re not quite driving themselves, either. But AI has the potential to ease the burden of overworked teams, provide relief for short-staffed hospitals, allow for more efficient medical treatments and drug development, and, yes, possibly even moderate social media. (Even the government is getting in on the AI-powered social media action.)

Where should companies focus? Structuring cleaner data, deploying AI in areas where it can excel, and letting humans verify and oversee the AI, while also having them do the work that AI can’t. It’s vital for companies to remember that artificial intelligence is still artificial — there’s still no replacement for human judgment.

AI still holds incredible promise and should continue to inspire our dreams for the future. Today your Roomba may need the occasional nudge to get out of the corner. If we properly involve people in the development process, then one day you just might trade that Roomba in for your very own Rosey the Robot.
