How a CIO guides agentic AI with structured governance

Rimini Street's CIO explains how he deployed agentic AI for research and service -- and how an AI steering committee governs access and risk.

Executive summary

  • Rimini Street uses agentic AI for 360-view customer research and ticket analysis.
  • Agentic AI helps Rimini Street and its clients get to market faster.
  • An AI steering committee ensures proper governance at Rimini Street.
  • CIO advice: Plan for production, get ahead of governance and maintain data quality.

As agentic AI acts autonomously across business processes, organizations are beginning to adopt it -- and grappling with how to keep it under control.

At Rimini Street, an enterprise software support provider, CIO Joe Locandro has deployed agentic AI internally across research and service. His organization has also built a rigorous governance model with a dedicated AI steering committee at the center. In the following interview, he explains how structured policies, access controls and ongoing oversight let his organization -- and its clients -- scale agentic AI while managing legal, privacy and operational risk.

Editor's note: The following transcript was edited for length and clarity.

How is agentic AI different from generative AI?

Joe Locandro: The church of AI is very broad. It started with machine learning (ML), where machines interrogate data and look for patterns. Then we got into generative AI (GenAI), where ML and analytics could predict outcomes.

The next evolution was agentic AI. While GenAI is extrapolating, predicting and going through data, agentic AI is performing an action or a process. People used to call this robotic process automation. Agentic AI is the next evolution of that, performing tasks that are either fed by generative predictions or that automate previously manual work.

How are you using agentic AI internally at Rimini Street?

Locandro: We have a project called 'Deep Research' that gets our AI agents to span across multiple systems, including Salesforce, our financial system and ServiceNow. If I wanted to learn more about a customer's company, I could send the agent to give me a 360-degree view of how we interact with that customer. It'll tell me how the customer has interacted with sales, how much they've spent and which tickets they've submitted.

The agent also searches the internet for relevant press releases and annual reports and combines them with internal data. This lets us do hours of research at the push of a button.
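To make that pattern concrete, here is a minimal sketch of how a 360-degree research agent might fan out across systems and merge internal and external data. Every connector, field and value below is a hypothetical stand-in -- the article doesn't describe the actual 'Deep Research' implementation:

```python
# A minimal sketch of a "Deep Research"-style 360-degree customer view.
# The connector classes below are hypothetical stand-ins for Salesforce,
# the financial system and ServiceNow; this is not Rimini Street's code.
from dataclasses import dataclass, field

class CrmConnector:                     # stand-in for Salesforce
    def interactions(self, customer: str) -> list:
        return [f"{customer}: renewal call logged 2024-03-01"]

class FinanceConnector:                 # stand-in for the financial system
    def lifetime_spend(self, customer: str) -> float:
        return 250_000.0

class ItsmConnector:                    # stand-in for ServiceNow
    def tickets(self, customer: str) -> list:
        return [f"{customer}: INC-1042 (resolved)"]

class WebConnector:                     # stand-in for an internet search tool
    def search(self, query: str) -> list:
        return [f"press release matching '{query}'"]

@dataclass
class Customer360:
    name: str
    sales_interactions: list = field(default_factory=list)
    total_spend: float = 0.0
    tickets: list = field(default_factory=list)
    public_sources: list = field(default_factory=list)

def build_customer_360(name: str) -> Customer360:
    """Fan out to each system, then merge internal and external data."""
    view = Customer360(name=name)
    view.sales_interactions = CrmConnector().interactions(name)
    view.total_spend = FinanceConnector().lifetime_spend(name)
    view.tickets = ItsmConnector().tickets(name)
    view.public_sources = WebConnector().search(f"{name} annual report")
    return view

print(build_customer_360("Acme Corp"))  # hypothetical customer name
```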

We also use agentic AI to analyze problem resolution for the most common issues in our ticket system worldwide. We can see whether the same issues have emerged elsewhere and offer quicker rectification.

On top of that, we made the decision as a company to deploy Microsoft Copilot across our organization globally. Sales teams are using it for productivity -- optimizing their sales calls, going through their emails and pulling together customer briefs.

Have you seen any measurable benefits from agentic AI deployment?

Locandro: Productivity is hard to track, because if you're saving employees 30 minutes a day, they generally don't log their daily activities. However, as we've been deploying agentic AI into customers' workflows -- whether for sales automation or inventory management -- the agents are over 90% accurate. We're also finding up to a 40% increase in benefit -- whether that's in cash, productivity or another measure.

As agentic AI spreads across different teams and tools, how do you keep usage from getting out of control?

Locandro: We have an AI steering committee internally, which includes representatives from HR, IT, legal and business. We're seeing an influx of both external and internal tools, and the whole organization is benefiting as a result of that. However, governance is key.

You can't just let external tools roam around, grab data and send it out, because you're dealing with privacy concerns, legal liabilities and AI bias. You've got to put guardrails in place. That's why we have a steering committee to ensure that anything we introduce internally is properly governed.

How does the AI steering committee operate?

Locandro: Anyone who wants to introduce or buy AI software must fill out a form on our portal. They must list the benefits and what they're trying to achieve, and briefly describe whether the software works off the shelf or requires custom development.

That form goes through a working group of representatives from the steering committee, which then assesses it from a legal, HR and buyer's perspective. It's then recommended back up to the steering committee or rejected because of legal or privacy concerns. Then we either adopt the software or develop it.
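As a rough illustration of that intake flow -- not the actual portal, whose schema the article doesn't describe -- the screening step might look something like this, with every state and field name a hypothetical assumption:

```python
# A toy sketch of the AI-software intake flow described above: a submitted
# form is screened by the working group, then recommended to the steering
# committee or rejected. All states and fields are hypothetical.
from enum import Enum

class Status(Enum):
    REJECTED = "rejected over legal or privacy concerns"
    RECOMMENDED = "recommended to the steering committee"

def working_group_review(request: dict) -> Status:
    """Screen a request from legal, HR and buyer perspectives."""
    if request["legal_risk"] or request["privacy_risk"]:
        return Status.REJECTED
    return Status.RECOMMENDED

request = {
    "tool": "ExampleVendor Agent Studio",   # hypothetical product name
    "benefits": "faster customer briefs",
    "off_the_shelf": True,                  # vs. custom development
    "legal_risk": False,
    "privacy_risk": False,
}
print(working_group_review(request).value)
```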

Before any of that, though, we developed a governance policy for all staff, along with training that stipulated no one got Microsoft Copilot unless they went through it and understood the policy -- the dos, don'ts, dangers and benefits of AI. Then we followed up with cheat sheets.

Finally, we have additional governance for model development: a register for periodically checking whether models drift, since data changes over time and algorithms can keep running even as outcomes degrade.
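As an illustration of what one of those periodic drift checks could look like, here is a minimal sketch using the population stability index. The metric choice and the 0.2 threshold are common industry rules of thumb, not details from the interview:

```python
# A sketch of the kind of periodic drift check a model register might run.
# PSI compares a feature's training-time distribution with live data; the
# 0.2 threshold is a common rule of thumb -- an illustrative assumption.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a training-time distribution and production data."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training
current = rng.normal(0.4, 1.2, 10_000)    # same feature months later in prod
psi = population_stability_index(baseline, current)
print(f"PSI = {psi:.3f}:",
      "drift -- review the model" if psi > 0.2 else "stable")
```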

How does governance change as more employees start building their own AI agents?

Locandro: For many of our client companies, we're going deeper into this brave new world of governance. Who do you let have access to develop an agent that goes into a database or system? And how do you ensure that they know what they're doing?

We're now working through very structured governance that recognizes roughly three categories of people who want to develop agents. First, there are IT developers, who are very familiar with the dos and don'ts and the constraints of getting into data. Then you have super users in the business community. Finally, there's everybody else who just likes using Copilot and mucks around. You need different access rights for each of those segments, and once you've got hundreds of agents being developed, you need to ensure they don't cause problems elsewhere in the IT environment.

CIOs will grapple with this governance once they start letting people develop scripts and agents, because it spreads everywhere. Do you give them read access, or read and write access? Who can write to a database and add something an agent developed? What if a feeder system picks it up and accidentally reports on it?
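One way to make those segments concrete is a tiered permission model. The sketch below is a minimal illustration under that assumption -- the tier names and rights are hypothetical, not a published Rimini Street policy:

```python
# A sketch of tiered access rights for agent developers, following the
# three segments described above. Tier names and permissions are
# illustrative assumptions, not an actual policy.
from enum import Flag, auto

class DataAccess(Flag):
    NONE = 0
    READ = auto()
    WRITE = auto()

AGENT_DEVELOPER_TIERS = {
    "it_developer": DataAccess.READ | DataAccess.WRITE,  # knows the constraints
    "business_super_user": DataAccess.READ,              # read-only by default
    "general_user": DataAccess.NONE,                     # Copilot chat only
}

def can_agent_write(tier: str) -> bool:
    """Gate write access so a casually built agent can't alter a database."""
    return DataAccess.WRITE in AGENT_DEVELOPER_TIERS.get(tier, DataAccess.NONE)

assert can_agent_write("it_developer")
assert not can_agent_write("business_super_user")
```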

Governance is evolving, and CIOs need to be ahead of the curve because soon enough, they'll have hundreds of agents running around autonomously, hitting systems and doing things. They'll need orchestration and governance to ensure they manage risk effectively.

Has your organization reduced its employee head count as a result of agentic AI?

Locandro: No. One of the benefits of AI -- the lowest-hanging fruit -- is productivity gains. When I speak to most CEOs and COOs, speed in completing an activity is the biggest benefit for them. If they don't increase speed to market, their competitors will. So, we're seeing agentic AI increase the absolute speed of doing tasks at scale. That's the main driver of agentic AI.

The second driver is not necessarily reducing jobs but allowing functions and people to do much more within the same constraints. As a growing company, we can't keep adding more people to the back office, but agents let corporations scale and redesign jobs.

We haven't done it internally, but I'm seeing it in parts of Japan and Europe, where one person used to do a task, hand it off to a second person, who then handed it off to a third -- agentic AI can redesign those three jobs into a single new job.

Is there anywhere that you deliberately chose not to roll out agentic AI?

Locandro: Yes, our AI steering committee makes those calls. If we think the source data isn't accurate and will lead to erroneous outcomes, that's probably not a good use case, because an AI use case is only as good as its data. We also reject some tools over legal or privacy issues -- we rejected one vendor product that looked good because another company was suing that vendor for copyright infringement.

We rejected another one because people wanted to use it to assess employee performance. You can't use AI to assess whether a particular person is doing a good or bad job. You can use it with anonymized data, but not at the individual level to reward or terminate someone based on what AI says.

What advice do you have for CIOs who want to deploy agentic AI?

Locandro: A proof of concept and a production deployment are very different things. When you do a proof of concept, don't assume it will be cheap and cheerful to put it into production. You may need to normalize the data, automate the interfaces and so on. CIOs must convey to business users that production differs from a short sample.

CIOs must also get ahead on governance, because AI moves at 100 miles an hour, and you can't write policy on the fly. You must be proactive, consider the next step -- the next evolution -- and have the guardrails ready so employees can take advantage of AI's benefits.

The third piece of advice is to look at data integrity and quality, because that will likely become a limitation as your use cases get more sophisticated. And of course, don't fear AI.

Tim Murphy is site editor for TechTarget's IT strategy group.
