
Agentic AI streamlines PromptCare's operations

As organizations deploy agentic AI, Phil Merrel, CIO of PromptCare, encourages IT leaders to focus on the end-to-end process and use the 80/20 rule for agility.

PromptCare, a midsize healthcare provider based in New Providence, NJ, is turning to agentic AI to improve operational efficiency and transform how it serves patients.

CIO Phil Merrel deployed AI to automate repetitive tasks and accelerate patient onboarding at PromptCare. Early on, the rollout faced cultural resistance and required multiple iterations to get workflows ready for deployment. However, those experiences taught Merrel best practices for agentic AI rollout: focus on process first and apply the 80/20 rule. In other words, CIOs should start by clearly mapping the processes they plan to automate, understand how humans and AI interact at each step, get workflows roughly 80% complete, deploy, and then optimize the remaining 20% through ongoing iteration and stakeholder feedback.

Following these principles, PromptCare achieved faster onboarding, improved workflow efficiency and measurable gains across end-to-end patient service processes -- all while maintaining human oversight and quality standards.

In the following interview, Merrel shares how PromptCare implemented agentic AI, the challenges that kept him up at night and how he measured AI's effect on efficiency and revenue.

Editor's note: The following transcript was edited for length and clarity.

How are you using agentic AI at PromptCare?

Phil Merrel: Intelligent automation is the journey I've put PromptCare on, and agentic AI is a main component. We've been focusing on patient services, using agentic processes to improve efficiency, optimize workflows and shorten turnaround times.

For example, in one clinical product line, the first four steps of patient onboarding -- from receiving the referral, transforming the documents, ingesting them into our core clinical and onboarding systems, to qualifying the patient to ensure they meet the criteria for the line of therapy -- are all handled through an agentic process.
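The four-step flow Merrel describes can be sketched as a simple orchestrated pipeline. This is purely illustrative -- the function names and logic are assumptions for the sake of the sketch, not PromptCare's actual system:

```python
# Hypothetical sketch of the four-step onboarding process described above.
# Step names mirror the interview; the logic is illustrative only.

def receive_referral(raw):
    # Step 1: accept the inbound referral (fax, email, SFTP, etc.)
    return {"referral": raw}

def transform_documents(case):
    # Step 2: normalize documents into a structured, searchable form
    case["documents"] = case["referral"].strip().lower()
    return case

def ingest(case):
    # Step 3: load into core clinical and onboarding systems
    case["ingested"] = True
    return case

def qualify_patient(case):
    # Step 4: check the patient meets the criteria for the line of therapy
    case["qualified"] = "copd" in case["documents"]
    return case

def onboarding_pipeline(raw_referral):
    case = receive_referral(raw_referral)
    for step in (transform_documents, ingest, qualify_patient):
        case = step(case)
    return case

result = onboarding_pipeline("  COPD therapy referral  ")
print(result["qualified"])  # True
```

Each step takes the case forward only if the previous one produced what it needs, which is what lets an agentic process flag problems early rather than at the end.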

What's the distinction between agentic AI and intelligent automation?

Merrel: When we talk about agentic AI, we're referring to the goal and outcome we're using AI for. It's not just a one-time task. It's more of a process-oriented automation.

What I call intelligent automation sometimes involves various types of tools, not just AI. On the tooling side, there's robotic process automation (RPA), which fits naturally within an agentic AI process. But we may also use process automation in the sense of traditional business process management tooling. It's still automation, but it's not AI -- it's just a defined process. We have a solution that monitors all of this, and the agentic AI process orchestrates that tooling, along with RPA and AI agents.

What goals did you hope to achieve by introducing agentic AI?

Merrel: The main objective was to allow the organization to scale and do more with less. For example, if a referral used to take 15 days from the time it was received to when the patient's appointment was scheduled, could we get that down to two days?

If everything is in order, the answer is yes -- but what if it's not? We needed a set of automations that could quickly identify issues, respond immediately and keep the process moving. Ultimately, it's about speed, efficiency, quality and scalability.

Healthcare is a very regulated industry. How did you ensure that AI agents were making safe, compliant decisions?

Merrel: We're not using any agent or automation process for clinical decision-making. An AI agent might read a medical record to determine whether a patient meets the qualifications for a COPD therapy. It goes through the record, pulls out key instances and references why -- based on what we've provided -- the patient meets the qualifications.

That output is printed and presented back to our staff -- the human-in-the-loop concept -- and the human makes the final determination. The automation accelerates the process but never makes the final decision. Additionally, everything we do operates in an encrypted, controlled manner.

What challenges did you encounter during deployment?


Merrel: One of the things we ran into was culture. People wonder, 'Are you trying to replace me? What's my safety here?' People have their own ways of doing things, and when you introduce automation, there's concern about what that means for them.

So, CIOs must help people understand the human-in-the-loop concept and clearly explain how AI operates within the human ecosystem. If the goal is to eliminate jobs, you're going to have an uphill climb getting that message out. But for us, the biggest hurdle was the lack of clear communication about how AI and humans coexist.

Another key takeaway is to go to process first. You must understand how humans and AI exist in the process, and ensure the right stakeholders are involved in defining it. Agentic AI is about the process and the outcome, not just individual tasks.


The other piece is that the 80/20 rule is king. We spent too much time refining processes instead of deploying them. In one case, something we thought would take a month ended up taking eight months because we kept making tweaks.

At some point, you have to say, 'This is 80% of what we need. Let's get it out, see where we fall down and go from there.' At the same time, the minimum viable version must be good enough. If it's only 70% there, people won't use it -- they'll find workarounds. So, get to 80%, deploy and then work with stakeholders to close the remaining 20%.

How did your team measure the ROI of agentic AI?

Merrel: Early on, we focused on finding small wins. We'd identify a pain point, build something small and measure it. For example, if a task took a human 10 minutes and the agent could do it in two, that's eight minutes of efficiency gained. We started with that -- the human baseline versus the agent performance -- and used the delta to estimate the value.

That works for early wins, but as we matured, we moved toward qualitative and quantitative ROI. Qualitative is a soft benefit -- something that doesn't directly affect the budget. You know it's valuable, but you can't tie it to a specific financial line. Quantitative is when you can directly tie the outcome to the budget, like deploying a solution and seeing revenue increase as a result.

From there, we started looking at the process as a whole, not just individual tasks. For example, in patient onboarding, the process begins when a referral comes in -- whether by fax, email or SFTP -- and continues through document preparation, ingestion, verification and presentation to a human for validation, before starting benefit verification. If those steps used to take us 15 days, and now only take four after deployment, we have an ROI of 11 days. That's the first level of ROI.
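The arithmetic behind this end-to-end measurement is simple enough to sketch, using the figures quoted above (the variable names are illustrative):

```python
# End-to-end turnaround ROI, using the figures quoted in the interview.
baseline_days = 15   # referral receipt to validation, before automation
automated_days = 4   # the same span after the agentic workflow deployed

days_saved = baseline_days - automated_days          # first-level ROI
throughput_multiplier = baseline_days / automated_days  # rough capacity gain

print(days_saved)                        # 11
print(round(throughput_multiplier, 2))   # 3.75
```

The roughly 4x capacity gain that falls out of the same numbers is consistent with the patient-volume increase Merrel describes next.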

But then you ask where the gain comes from. In one therapy line, those 11 days meant we could process four times as many patients, which translated into increased revenue. We were able to show census growth tied to a budgeted number, enabling us to point directly to the effect.

So, we've matured from focusing on individual agent efficiency to measuring end-to-end process outcomes and tying them to both qualitative and quantitative ROI.

How did you balance automation with human-in-the-loop decision-making for critical tasks?

Merrel: If you implement agentic AI properly, some tasks that people do will be automated. We have several people whose jobs have been automated. The human-in-the-loop piece ensures that the automation performs its work and provides a confidence level for each process run. We set a threshold -- for example, 95% -- and if the workflow meets that threshold, no human intervention is needed.

The humans who used to perform the task every day now handle those exceptions. They look into why a run didn't meet the 95% threshold. We also want them to spot-check cases where the agentic workflow said, 'No, I did get it right.' So, essentially, we have a two-point threshold.

In practice, we may set a higher first threshold, say 98%, where humans don't need to intervene, and a lower threshold for intervention. This two-level system balances efficiency with oversight.
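The two-level system Merrel describes can be sketched as a simple routing function. The threshold values come from the figures he cites; the function and label names are assumptions for illustration:

```python
def route(confidence, auto_threshold=0.98, review_threshold=0.95):
    """Route a workflow run based on the agent's confidence score.

    Two-point threshold, as described in the interview:
    - at or above auto_threshold: no human intervention needed
      (still subject to periodic spot checks)
    - between review_threshold and auto_threshold: flagged for spot check
    - below review_threshold: routed to a human as an exception
    """
    if confidence >= auto_threshold:
        return "auto-approve"
    if confidence >= review_threshold:
        return "spot-check"
    return "human-exception"

print(route(0.99))  # auto-approve
print(route(0.96))  # spot-check
print(route(0.80))  # human-exception
```

The design choice is that the humans who once did the work full time now concentrate on the two lower bands, where their judgment adds the most value.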

Humans became critical stakeholders in developing these workflows. They were engaged in creation, testing and implementation, which helped them understand how the agentic workflows operate and feel part of the process. They also continue to monitor and test workflows to ensure patient care, outcomes and services remain safe.

What kept you up at night throughout this deployment?

Merrel: Thinking, 'Are we going to get there?' As CIO, I think it might be easier to talk about what didn't keep me up, but some key concerns were: did leadership truly understand and appreciate everything we were doing? Did the board truly understand and appreciate it? Would we achieve the ROI? These weren't small investments, and I had made commitments toward specific outcomes, so I had to make sure everything we were doing would meet those commitments.

Other concerns were centered on whether my team had everything it needed and whether I had covered all the bases from a security perspective. And finally, AI in general has gotten so much hype that it can be a nightmare. It's already out ahead of you -- you can't just 'get in front of AI' anymore. It's everywhere.

Tim Murphy is site editor for Informa TechTarget's IT Strategy group.
