How Sikich balances GenAI innovation with security

Sikich CIO Scott Sanders explains how the firm uses well-defined governance, security controls and a focus on business-driven use cases to scale generative AI.

A single automation saved Sikich's team about 25 hours of work per year, one example of the productivity gains the company is seeing from generative AI.

At Sikich, a professional services firm based in Marlborough, MA, CIO Scott Sanders is applying generative AI (GenAI) and intelligent automation to improve productivity while maintaining tight security controls. Sanders avoids chasing new tools for their own sake, instead aligning IT around business priorities and identifying automation opportunities that eliminate manual work. At the same time, the organization has built an AI acceptable use policy, expanded AI training and created procedures to track how teams are using and building AI tools.

In the following interview, Sanders explains how Sikich selected its GenAI use cases, built its governance framework and scaled adoption.

Editor's note: The following transcript was edited for length and clarity.

What specific business needs or pain points led Sikich to prioritize GenAI?

Scott Sanders: Most of it came down to capacity and resource planning. As the company grows and our workload increases, we're always asking, 'How do we do more with the same?' That's been the strategy. Whether it's intelligent automation, GenAI or agentic AI, it allows you to look across the business and identify opportunities to build smarts into your processes using these tools. It lets you do more, faster, with the same number of people.

How did you decide which initial use cases to pursue?

Sanders: That started originally with a capacity issue. A certain area of the business was growing, both through acquisition and organically, and we found that we had a manual process that was a roadblock to us doing any additional work. Working alongside our administrative team, we looked more closely at the process and determined it was ripe for blending AI and document intelligence -- along with other types of automation and integrations -- to remove most of the human element.

So, the process we were spinning this AI around was extremely manual. An administrative person would look at a document, make sure it was signed, relocate it, save it and check a box. Until the release of some of these intelligence tools, we didn't have a great way to automate it.

How did you go about building an AI acceptable use policy?

Sanders: The core foundation has been the safe use of AI -- its security and what it can and cannot be used for. This protects us, our clients and their data.

The policy has evolved over time as we've become more comfortable with the security of certain platforms. Some restrictions have loosened based on that vetting, and we've also identified and approved additional tools, which are now included in the policy. It's a framework for how to use AI without discouraging its use.

How did you evaluate GenAI tools from different vendors?

Sanders: AI tools are a dime a dozen now -- they're coming out of the woodwork. So, the challenge is understanding what they can and cannot do, because a lot of them pop up out of nowhere. It's not hard for a startup, a company or even an individual to create a demo that looks great but doesn't really do anything on the back end. You've got to weed those out.

However, we never start with the tool. We typically start with the need. We don't let the tool dictate what we do. We look across our processes, business units and how we serve clients, and that dictates where we go. We do a lot of piloting from a large language model (LLM) perspective. We use Microsoft Copilots, Claudes and ChatGPTs because each one is good at something different. We home in on what we think we'll use long term.

We maintain an extensive list of what's out there. We ask, 'What should we be looking at? Does it fit a need?' Then we cycle through that list. This helps us stay ahead of what's best today and what will be best in the future.

How did you assess your organization's data readiness for GenAI?

Sanders: That conversation must revolve around security. Regardless of whether the data is good or bad, it comes down to who can see what. These AI tools will search everything you have access to and start providing responses.

If I'm a staff member and I've inadvertently been given access to a payroll file sitting somewhere in SharePoint, I may not even know I have it -- but a GenAI tool will. If I ask, 'Can you tell me what Tim's salary is?' it could reveal that information. So, security is paramount.

Priority one is making sure your data is properly bucketized and that you're enforcing least-privilege access -- things you should already be doing -- and GenAI makes them even more important.

Outside of that, it comes down to data relevance. People get used to versioning -- version one, version two, version three -- but not all LLMs handle that equally well. If there are five versions of a document, which one did the model use when you asked a question? We're still working through that and developing a framework. Tools like Copilot are strong with versioning because they understand the Microsoft ecosystem. Others, like ChatGPT or Claude, can be more challenged in that area.

It really comes back to two things: security first and making sure the most relevant data -- not outdated data -- is what the LLM can access. Once users start getting bad or inconsistent responses, adoption drops quickly.

What steps did you take to introduce AI training?

Sanders: We approached this on multiple fronts, and it served us well. We have a learning team called Sikich University that develops training materials and creates the annual training agenda. Every employee is required to take at least one AI class each year, but we offer other sessions throughout the year, including one-off demos. We also have AI office hours every Friday where folks can ask questions and share best practices.

Another piece is our AI registry, which helps us track what's being built across the organization and for what purpose. That visibility is important. If someone builds something for their team, it might be useful to others. A big part of training isn't just about how to use the tools -- it's about encouraging people to share what they're working on. What starts as something for one person might end up being valuable for a team, a department or even the entire company.

It's about constant reinforcement, much like security training. You keep security top of mind because it must stay front and center. The same applies to AI. If we want to be an AI-first company, we must keep AI in front of employees.

Where have you seen the biggest benefit so far in terms of productivity, efficiency or decision making?

Sanders: It's been personal productivity. Tools like Copilot, Claude and ChatGPT are great at getting you started on something -- such as creating a policy or drafting an email -- that might have taken hours to do before. Over the course of a year, that saves a substantial amount of time.

Then there's the project I mentioned earlier. We've saved administrative employees 24 to 25 hours of work over the last 12 months by removing a manual step from one of our processes. We're also looking at other GenAI projects across the company that will help with client deliverables -- aggregating data on the front end and doing first passes at things like diagrams and process flows.

What are the biggest challenges when it comes to balancing AI innovation with security and compliance?

Sanders: Non-technical people often have the perception that AI is magic -- but it's not. AI requires effort, training and knowledge of prompt engineering to produce tangible results. So, one of the challenges is that someone will come forward with an idea, regardless of security or governance, and say, 'I have an idea, and I think AI can do this.'

But it's not that simple. There's no magic here. It's another tool we're adding to our arsenal to help us do things quickly and effectively. So, we treat it like any other tool. Our same security standard, data governance and retention policies apply. The same vetting process for third-party tools or anything we build still applies as well. Just because it's AI doesn't change our overall approach to information security, data management or data cleanliness. It's simply another tool in the arsenal, whether we buy it or build it.

What resistance or concern did you encounter from your employees around AI?

Sanders: We had very little resistance, and I think that's partly due to the line of work we're in. We've always been a technology-first company. AI was just another tool, and it was natural for us to embrace it, both for internal use and for what we bring to clients. If anything, we probably had higher expectations out of the gate than we should have. We were more excited about what it could do than what it was capable of at the time, but that's caught up over time.

The only real hesitation we saw was around security, hallucinations and the risk of bad data or inaccurate results. That's where our acceptable use policy came in. It reinforced that these tools are powerful, but they're still starting points, not finished products.

In some ways, we were the reverse of many companies -- we were all in from the beginning and wanted to move quickly. The policies helped manage expectations and provided guidance as technology evolved.

What has kept you up at night since GenAI took off in 2023?

Sanders: Two things, really. Early on, when the models first came out, there weren't many guardrails or much security around how they were being used or what they were training on. My concern was that someone might take a document we created for a client, upload it to one of these tools and ask for feedback -- and suddenly that information becomes public. Then someone else, somewhere else in the world, could surface that same document. That's something that should never happen.

Those fears have eased over time as vendors have moved into the enterprise market and put more guardrails in place. That's helped, and I sleep a little easier knowing those protections exist.

The bigger concern now is over-reliance on these tools. At what point do the answers get so good that people start trusting them completely and treating them as finished products? Because even if you have policies and reinforce them, people get busy. There's always the temptation to say, 'I'll just have ChatGPT or Copilot put this together, skim it quickly and send it out.' It may be accurate; it may not -- but once it's out there with our name on it, it's our responsibility.

What would you do differently if you were starting Sikich's GenAI journey all over again?

Sanders: I'd build a bigger team faster so we could do more with it. There's no shortage of ideas for how we can use GenAI -- the bottleneck becomes who's going to build these agents or pilot the programs we think could have a substantial benefit for us and our customers.

There's only so much time in a day to evaluate, build, test, secure and validate. And people who are well-versed in AI are becoming harder to find. There's no shortage of work, so it becomes a question of what you outsource versus what you build internally.

Looking back, I would have started building that team earlier. I would have brought in people, even right out of school, and had them learn AI from the start. That way, we could have built more -- and faster.

Tim Murphy is site editor for Informa TechTarget's IT Strategy group.