The False Promises of AI in the Patient Processing Problem

AI in the infusion world

“Your clinical staff is going to be like supercharged iPads on wheels. Everything will be so automated end-to-end that you’ll actually never have to see another human again. You’ll be drowning in profits thanks to our latest snake oi– err– I mean, AI-enabled solutions!”

You’ve probably been there: stuck in front of some over-caffeinated sales rep, hands flailing like he’s signaling a pilot for takeoff, talking about how AI is going to revolutionize your infusion clinic. He’s throwing buzzwords at you like confetti, and you’re just trying to figure out if this is going to help your staff or just add another layer of chaos. Spoiler alert: probably the latter. 

You plot your escape and back away toward another vendor’s booth, and wouldn’t you know it: more of the same. It’s an endless stream of “AI” merchants, anything to inject those two little letters… A… I.

In this blog, I am NOT going to hand-wave around some esoteric future promises of “AI”. I am, however, going to do two things: talk about the practical operational breakdowns that affect ambulatory infusion centers (AICs) every day, and go into (possibly too much) detail about the machine learning techniques best positioned to actually fix them.

If I do my job half-decently, you’ll have a sense of how other centers are overcoming the same issues you’re dealing with, and of the tech stack and organizational rhythm driving those operational wins.

The Pre-Visit Patient Processing Problem

When most clinics see mounting denials and a rising DSO (days sales outstanding), they justifiably look to their Billing or RCM teams. This sounds right at first glance. But in practice, that obsession with the symptoms (reimbursement) ignores the upstream root causes really driving the pain. It turns out much of what we’d define as ‘post-visit’ or ‘billing’ issues stems from mistakes that happened well *before* the patient walks through your door. The problem is the redundant but error-prone ‘pre-visit’ work.

Providers are dealing with a flood of digital faxes, endless rounds of benefits investigations, and more back-and-forth with insurance companies than anyone deserves. Every missing piece of information is another delay for your patients and referring providers, another chance for a claim to get denied, and another way for your staff’s jobs to be harder than they need to be.

It’s the root of so many problems, and it doesn’t show signs of slowing. Take one I just mentioned: benefits investigations. 67% of healthcare providers say their benefits investigation workload has only gotten worse over the past year. And this year, 20% more providers than before said flat-out that it’s having a “highly negative” impact on patient care.

That’s pre-visit work that is redundant, error-prone, and horribly time-consuming. It feels so silly to be doing it by hand that surely ‘the promise of AI’ will solve it. But is ‘AI’ really the solution there? Or is it about having all the right information in one place, so you can automate the benefits investigation (a piece that doesn’t require any ‘AI’) without doing so much manual work?

Talk About Models, Not AI

Thanks to some nefarious and very intentional misdirection by the powers that be, the general consensus is that it wouldn’t make a difference whether I said AI or ML for the rest of this discussion. They’re close enough and refer to the same thing, right? But the truth is that if I put out a job post for our research team at Tennr with the job title AI engineer, we would have zero qualified applicants for that title.

That’s because, in practice, the people who work on these solutions are building and researching models. And it’s important to talk about those machine learning models and their underlying operations. When we ask what model is being used here, how a model is designed, why it’s designed that way, and what actions are being taken with it, the conversation gets a bit more complex, but you force yourself to have a meaningful discussion about what the model was trained to do, how it improves, and where its pitfalls are.

Maybe it’s pedantic, and it definitely leads to some eye rolls, but when we made this a requirement at Tennr, it was a forcing function that made sure even our least technical teammates really understood how models were built, how the technology improved, and ultimately how problems are really solved.

A Great Way to Build Terrible Models For Infusion Centers 

“More data, more data — that’s all we need.”

It wasn’t long ago that the companies at the forefront of ML research were focused on the size of their dataset and the horsepower of their compute, and much less on scoped-down quality. The bet, that with enough semantic data and enough compute you could create a text-based completion model that truly felt ‘intelligent’, actually proved correct. And with a whole lot of reinforcement from real humans editing and tweaking answers, that ‘intelligence’ started to feel real and conversational.

It was a product in pursuit of being a super smart, all-knowing, generalized model built to answer basically anything. And what was its main bank of data? The home of ‘Gangnam Style’ and literally every conspiracy theory on earth: the internet.

Jokes aside, it was actually a very logical approach if you’re trying to build chatbots that feel smart and are good at answering anything you might dig up on the internet. Because it could answer seemingly anything, people began thinking maybe ‘it’ could ‘do’ anything. And you can see how this slippery slope leads straight to the over-caffeinated conference guy I mentioned earlier.

“Well Trey, who cares? What does this have to do with healthcare, my faxes, my patient intake, benefits investigations and the AI prophets I keep running into?”

Well, if you were trying to build models like the ones I just talked about (ChatGPT, Claude Opus, etc.), guess what kind of data you would definitely want to avoid: data that’s unstructured, overly complex, messy, and in categories of conversation you’d want your chatbot to steer clear of. And you certainly wouldn’t want data that contains sensitive personal information that is hard to obtain and heavily regulated… hmm… ringing any bells? Medical records, for all of the reasons I listed, but especially because they are so tangential to the mission of creating perfect chatbots, are so often ignored in these very popular models and in a lot of mainstream ML applications. And that leads to performance issues that aren’t very hard to notice.

So What Does it Mean to Develop a Model For Healthcare?

At the risk of being self-indulgent and soapbox-y, I’m going to tell you how we approached it at Tennr. First, we wanted to build a smaller model, one that could be applied to read, interpret, and act on medical documents. In a surprise to absolutely nobody, this involved getting our hands on labeled medical records: medical data, handwritten notes, messy referral packets, all of that lovely unstructured and complex data that large generalized models leave out. Millions upon millions of de-identified medical documents. And I’m not just saying this for the fun of it: in the case of medical records, it turned out that if you didn’t get your own data and build your own model, the results just weren’t that great.

Take checkboxes, for instance. Even the best checkbox readers benchmarked for us at only around 64% accuracy on a diverse set of medical data. (There are a bunch of reasons for this we can’t go into here, mostly tied to the specific types of checkboxes commonly seen on intake forms.) But it was a great example where we were fortunately able to say, ‘we have this unique checkbox problem and there’s basically no dataset out there with the structure we need, so let’s go build it ourselves’. And with a lot of dataset orchestration, we were able to get that particular model to greater than 98% accuracy. That’s a huge difference in automating work that starts with a document.
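For intuition, numbers like 64% and 98% come from a benchmark that is conceptually simple, even if building the labeled dataset behind it isn’t: compare the model’s predicted checkbox states against human labels. A minimal sketch in Python (the labels below are invented for illustration, not from any real benchmark):

```python
# Toy sketch of a checkbox-extraction benchmark: score a model's
# predicted checkbox states against human-labeled ground truth.
# The example labels are made up for illustration.

def checkbox_accuracy(predicted: list[bool], labeled: list[bool]) -> float:
    """Fraction of checkboxes where the model agrees with the human label."""
    assert len(predicted) == len(labeled), "one prediction per labeled box"
    correct = sum(p == l for p, l in zip(predicted, labeled))
    return correct / len(labeled)

# Hypothetical results on a tiny labeled set of intake-form checkboxes.
human_labels = [True, False, True, True, False, True, False, True]
model_output = [True, False, False, True, False, True, True, True]
print(checkbox_accuracy(model_output, human_labels))  # 0.75
```

The hard part, of course, isn’t this scoring function; it’s assembling a labeled set diverse enough that the score means something.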

Unfortunately, that sort of ‘discrete data extraction’ turns out to be the easy part. Reading documents is one thing, but how do you know you’re reading the right height for the right patient when you have a document with 10 patients on it? Or, more commonly, how do you avoid duplicating effort or EHR entries when you see 10 documents all for one patient?
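To make that deduplication problem concrete, here is a deliberately simplified sketch, not Tennr’s actual pipeline; the field names, the fuzzy matcher, and the similarity threshold are all assumptions for illustration. The idea is grouping extracted documents by patient so ten faxes for one person don’t become ten EHR entries:

```python
# Illustrative sketch: cluster extracted documents that look like the
# same patient (same DOB, similar name), so duplicates get merged
# instead of re-entered. Production systems use far stronger matching.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class ExtractedDoc:
    doc_id: str
    patient_name: str   # as read off the page, possibly misspelled
    dob: str            # assumed normalized to YYYY-MM-DD upstream

def name_similarity(a: str, b: str) -> float:
    """Crude fuzzy name match in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def group_by_patient(docs: list[ExtractedDoc],
                     threshold: float = 0.85) -> list[list[ExtractedDoc]]:
    """Greedy single-pass clustering: a doc joins the first group whose
    representative shares its DOB and has a similar-enough name."""
    groups: list[list[ExtractedDoc]] = []
    for doc in docs:
        for group in groups:
            rep = group[0]
            if (doc.dob == rep.dob and
                    name_similarity(doc.patient_name, rep.patient_name) >= threshold):
                group.append(doc)
                break
        else:
            groups.append([doc])
    return groups

docs = [
    ExtractedDoc("fax-001", "Maria Gonzalez", "1967-03-14"),
    ExtractedDoc("fax-002", "Maria Gonzales", "1967-03-14"),  # misspelled duplicate
    ExtractedDoc("fax-003", "John Smith", "1980-07-02"),
]
print([len(g) for g in group_by_patient(docs)])  # [2, 1]
```

Even this toy version shows why the problem is hard: the threshold that merges “Gonzalez” and “Gonzales” can also merge two different people, which is exactly why real matching has to reason over far more than a name and a date.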

When you know you’re ‘READING’ the document accurately, how do you reason through WHAT you’re reading and whether it has serious clinical implications? Your best coordinator doesn’t just extract data all day; they also interpret. They know which code-and-insurance combo is going to lead to trouble. They know what’s missing, and they know how to ‘qualify’ patients to make sure EVERYTHING is there. We call this ‘Qualifications’. And when we say we’re ‘reading the fax or the endless stream of PDFs’, it’s not about simply ‘pulling’ the information off the page; it’s reasoning through it and making it dead simple to ensure that everything that comes next is done with solid clinical accuracy.
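One way to think about a ‘qualification’ pass is as rules layered on top of accurate extraction: the institutional knowledge a great coordinator carries around, written down. A deliberately tiny sketch follows; the required fields, the codes, and the payer name are hypothetical stand-ins, not Tennr’s actual rules:

```python
# Toy 'qualification' check: flag missing fields and known-risky
# code/payer combinations before anything downstream gets automated.
# Field names, codes, and payer names are hypothetical examples.

REQUIRED_FIELDS = ["patient_name", "dob", "diagnosis_code", "drug_code", "payer"]

# The kind of knowledge a coordinator carries around: combinations
# that cause trouble if they aren't handled up front.
RISKY_COMBOS = {
    ("J1745", "ExamplePayer"): "prior authorization required",
}

def qualify(referral: dict) -> list[str]:
    """Return human-readable issues; an empty list means 'qualified'."""
    issues = [f"missing: {f}" for f in REQUIRED_FIELDS if not referral.get(f)]
    combo = (referral.get("drug_code"), referral.get("payer"))
    if combo in RISKY_COMBOS:
        issues.append(f"{combo[0]} + {combo[1]}: {RISKY_COMBOS[combo]}")
    return issues

referral = {
    "patient_name": "Maria Gonzalez",
    "dob": "1967-03-14",
    "diagnosis_code": "K50.90",
    "drug_code": "J1745",
    "payer": "ExamplePayer",
}
print(qualify(referral))  # ['J1745 + ExamplePayer: prior authorization required']
```

The point of the sketch: none of these rules fire correctly unless the extraction underneath them is right, which is why the reading and reasoning work has to come first.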

At this point, you’ve dealt with the endless edge-cases of medical documentation, from checkboxes to multi-patient documents. You’ve pulled the right information for the right patient and you’ve reasoned through it successfully. From there, you’ve earned the right to automate what can be automated (without any “AI”) on things like benefits investigations, referring provider status updates, and the like.

I dizzied myself writing all this out, so I can only imagine the mental strain of having to read it.

But ultimately, if you do it right you can compress weeks of work into minutes and make sure an incredible patient experience starts from the moment a document shows up with their name on it. 


Guest Authors:

Trey Holterman, CEO and Co-Founder, Tennr

Trey studied Computer Science with a focus on Machine Learning at Stanford University. At the intersection of technology and healthcare, Trey has spent his career focused on applying his research at HealthIQ and Strava. Currently Trey is the CEO and co-founder of Tennr, a first-of-its-kind referral process automation platform built on the most robust document reasoning model in the industry.


WeInfuse is dedicated to keeping infusion simplified by streamlining workflows for home infusion, specialty pharmacies, and infusion centers. WeInfuse software enables infusion operators to operate efficiently, decrease burnout, maximize profits, and improve clinical outcomes.

For more infusion insights, tune into the WeInfuse podcast, download our magazine, and subscribe to our blog.
