Trust concerns hinder agentic AI adoption, orchestration
Despite significant investment, most pharmaceutical and life sciences companies are struggling to bring agentic AI into production, a new report says.
Although pharmaceutical and life sciences companies have moved to embrace the promise of agentic AI in recent years, a majority have yet to see that promise materialize, a new report reveals.
Over two-thirds of pharma and life sciences companies surveyed in Camunda's new report on agentic AI say they see a gap between their agentic AI goals and what has been implemented at their organizations.
Released today, Camunda's report is based on research conducted last fall by independent firm Coleman Parkes, which surveyed 1,150 senior enterprise automation and technology leaders responsible for process automation decisions at U.S. and European organizations with more than 1,000 employees.
While a majority of the organizations polled say they use AI agents, only about 1 in 10 use cases reached production in the last 12 months, due in large part to trust concerns.
"The promise of agentic AI is undeniable, but trust remains the key barrier to adoption," Kurt Petersen, senior vice president of customer success at Camunda, said in a press release. "Right now, exercising caution with agentic AI means many organizations can't move beyond pilots or isolated use cases."
A vast majority of those surveyed are worried about the lack of transparency surrounding AI use, according to the report's data, with half believing that untamed agentic AI runs the risk of "fanning the flames" of poorly implemented processes and automation.
These widely held viewpoints hinder the adoption of agentic AI, the report highlights.
Slightly more than three-quarters (76%) say most AI agents at their organizations are chatbots or assistants that can only answer questions and summarize text.
Compliance and integration are also major concerns for many pharma and life sciences organizations, the survey report points out.
Two-thirds of respondents cite compliance as a concern for deploying AI agents, and half say that their AI agents are not integrated into end-to-end business processes. Instead, they operate in silos.
"Without clear guardrails and visibility, [AI] agents will stay at the edge of the business," Petersen added. "Once a foundation of trust is in place, agents can become powerful multipliers inside governed processes instead of siloed co-pilots or chatbots."
More than 90% of life sciences and pharmaceutical organizations reported higher business growth over the past year after introducing process automation, representing a 3% increase from the year before, the report notes.
On average, organizations have automated nearly half of their processes, with many respondents signaling that this number could potentially rise in the coming years. The survey results reveal that roughly 3 out of 4 companies plan to increase spending on automation, with budgets expected to grow by an average of 18% over the next two years.
Meanwhile, technology stacks are being spread across more services, making operations more complex. This has resulted in a rapid increase in both the number and variety of endpoints, a large majority of the respondents say.
To address this pain point, most respondents are asking for better tools that can manage how processes overlap, highlighting an ongoing gap that agentic orchestration may be able to fill.
"Agentic orchestration, not standalone agents, is the key to closing the AI vision-reality gap," Petersen suggests.
Agentic orchestration coordinates multiple specialized AI agents within a single system, moving beyond the question-answering and text-summarization limits of traditional chatbots and knowledge retrieval tools to execute end-to-end business processes.
The data suggests that most companies may be willing to embrace agentic coordination as a new operating model. But 4 out of 5 respondents say that their processes are not yet mature enough to support agentic orchestration.
"Deterministic orchestration has always established structured guardrails. By blending it with dynamic orchestration patterns to leverage reasoning across AI agents, people and systems in end-to-end processes, enterprises can build a foundation for AI agents they truly trust," Petersen concluded.
Alivia Kaylor is a scientist and the senior site editor of Pharma Life Sciences.