A lawyer's advice for CIOs creating AI-use policies

AI is transforming the way companies work in unprecedented ways, and CIOs must implement policies to protect their intellectual property, as well as that of their customers and partners.

Chief information officers play a critical role in shaping how organizations experiment with, scale and deploy AI. With AI adoption advancing rapidly in the enterprise, CIOs are moving beyond the proof-of-concept era into a time when AI tools and workflows are making real impacts across organizations and ecosystems.

As a result, it's crucial that CIOs understand the risks associated with using AI and protect their intellectual property by creating and implementing AI-use policies. In this Q&A, Alex McGee – patent attorney and partner at Howard & Howard PLLC – speaks to TechTarget about how CIOs should approach this task.

Editor's note: This Q&A has been edited for clarity and conciseness.

Can you start by telling us a bit about your background?

Alex McGee: I'm a partner at Howard & Howard Attorneys. We're a U.S. business law firm that works across a range of industries.

My primary focus is on intellectual property law. I'm a patent attorney, and in recent years I've developed a strong focus on AI. My work includes advising clients not only on patents and IP issues related to AI, but also on how businesses are using AI tools and how they're navigating the legal and operational challenges that come with them.

How are companies approaching AI policies?

McGee: It's really a mix. AI has become a major issue over the last three and a half years, and you definitely see some early adopters who jumped in quickly and formulated AI use policies. At the same time, there are still companies that are essentially burying their heads in the sand.

How seriously a company takes its AI policy often depends on the industry and the immediacy of the risks. In my opinion, every company should have an AI policy because the technology can be used in many ways and is advancing incredibly quickly.

Which industries are most exposed if they don’t have an AI policy?

McGee: Industries that deal with highly regulated information are the most exposed if they don't have clear policies. That includes sectors like financial services, banking, and healthcare — anywhere that handles sensitive data, such as personally identifiable information (PII) or, in the U.S., information covered under HIPAA.

Organizations that are concerned about protecting intellectual property, whether their own IP or their customers' and clients', should also stay on top of their AI policies. AI systems can create significant risks for regulatory compliance and IP protection if their use isn't carefully controlled.

From an intellectual property perspective, what are the biggest risks CIOs face when using AI?

McGee: The issue that worries me most is the potential leakage of trade secrets. Unlike patents, trade secrets can theoretically last forever if they're kept confidential. But if someone accidentally exposes that information – say, by entering it into an AI system – it could be lost.

Think of something like the formula for Coca-Cola. You would never want that entered into a system where it might be exposed to other users or retrieved by a threat actor.

Of course, most trade secrets aren't that dramatic. They might be sales lists, product specifications or internal processes: the "secret sauce" that makes a company successful. But uncontrolled AI use can expose that information if employees don't understand the risks. Business leaders, and every employee, must be aware of this.

Copyright is another area where AI use raises questions. What should CIOs be aware of?

McGee: We get questions about copyright constantly. AI tools can quickly generate text, code, images and videos, and companies want to know whether they can copyright those outputs.

Often, the answer is "probably not," or at least it's uncertain. The legal landscape is still evolving. For industries that rely heavily on copyright, such as publishing, journalism or media production, that uncertainty is a real challenge.

The other side of the issue involves companies developing their own AI systems. They want to know whose data they can legally use to train those systems without facing lawsuits. That's an extremely complex question that touches every area of intellectual property law.

How should CIOs take these risks into account?

McGee: The first step in mitigating these risks is simply recognizing that a meaningful policy is necessary. If you don't have a policy, creating one should be an urgent priority.

Second, blanket bans on certain AI tools don't really work. When powerful AI tools are available for free on personal devices, it's unrealistic to expect employees not to use them. Instead, policies should be informed by a clear understanding of risks and benefits. Organizations should choose the safest and most capable tools for the tasks they want to perform.

For example, if IP protection or sensitive data is a concern, companies should avoid tools that allow external model training or unrestricted data sharing. They should also be cautious about vendors that retain or access company data.

Administrative oversight is also important. Companies should put monitoring tools, access controls and accountability measures in place.

Finally, policies should be reviewed regularly because technology evolves so quickly.

What should CIOs consider first when they begin drafting an AI policy?

McGee: One of the most useful starting points is a thoughtful discussion about what the organization realistically hopes to achieve with AI, and what risks realistically come with those goals. That sounds basic, but it's often overlooked.

CIOs need to understand not only how AI tools are expected to be used, but also how they are already being used, whether or not management has approved that use. In many organizations, leadership doesn't actually know what employees on the ground are doing with AI tools. That lack of awareness creates a completely different set of risks.

My advice is to gather perspectives from across the organization: legal, IT, HR, management, R&D and others. Bringing multiple departments into the conversation helps ensure the company understands what's happening and how to build a policy grounded in reality.

Are conversations about AI policies happening at the board level?

McGee: It varies a lot. In some organizations, boards are taking it seriously and assigning the issue to a subcommittee, which then reports back with recommendations.

In other cases, the effort starts from the ground up. Someone in IT might notice employees frequently accessing tools like ChatGPT on company devices and raise concerns internally. In those situations, the board might not even be aware that the issue exists yet.

Do you recommend having separate AI policies for employee use versus broader business use?

McGee: That can be a very useful approach. Once a company understands its risks and has decided which tools it will allow, it often makes sense to create more specific guidelines for different roles.

For example, a company might decide that certain AI tools are appropriate for an R&D team but not for the sales department, because the risks and data involved are completely different.

In those cases, it's helpful to have clear "dos and don'ts" for employees based on their roles. That type of role-based guidance is becoming very common.
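Role-based "dos and don'ts" like these can also be enforced programmatically, for example through a simple allowlist that maps each role to its approved tools. Here's a minimal sketch of that idea; the role and tool names are entirely hypothetical, and a real deployment would pull this mapping from identity and access management systems rather than a hardcoded table.

```python
# Hypothetical role-based allowlist of approved AI tools.
# Role and tool names are illustrative, not real policy.
APPROVED_TOOLS = {
    "r_and_d": {"internal-copilot", "code-assistant"},
    "sales": {"crm-summarizer"},
}


def is_tool_allowed(role: str, tool: str) -> bool:
    """Return True if the given role is permitted to use the given AI tool."""
    return tool in APPROVED_TOOLS.get(role, set())
```

A gateway or browser plugin could call a check like this before allowing a request through, and log denials so the policy team can see where demand for unapproved tools is coming from.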

How can CIOs reduce the risk of employees accidentally misusing AI?

McGee: Along with a policy, training is one of the most important steps. And it shouldn't just focus on how to use AI tools; it should focus on how those tools can be used incorrectly.

Many employees don't realize when they're using AI in a risky way. For example, someone might think it's fine to use the same AI tool on a personal device at home for work-related tasks, when that actually creates major security risks.

Companies should also carefully select approved tools and create safe sandbox environments where employees can experiment without exposing sensitive data.

Other safeguards include limited data retention, private servers where necessary and ensuring that tools don't train models using company data.

Organizations should also integrate AI tools with data loss prevention systems. These systems monitor and prevent large-scale data transfers that could indicate a breach or misuse.
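As a simple illustration of the kind of check a DLP integration might perform, the sketch below flags prompts containing patterns that resemble U.S. Social Security numbers or payment card numbers before they reach an external AI tool. The patterns here are deliberately crude and purely illustrative; commercial DLP products use far more sophisticated detection, including contextual and machine-learning-based classifiers.

```python
import re

# Illustrative regexes for data that looks sensitive. Real DLP
# systems use much richer detection than simple pattern matching.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]


def is_prompt_blocked(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matches."""
    return bool(scan_prompt(prompt))
```

A check like this could sit in a proxy between employees and an approved AI tool, blocking or redacting flagged prompts and recording the event for audit.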

Ultimately, careful tool selection, training and monitoring are the best defenses.

How can companies keep up with changing AI regulations?

McGee: For international companies, one of the best things they can do is watch what the EU is doing. Europe is further ahead than most jurisdictions in terms of AI regulation.

The EU Artificial Intelligence Act is already partially implemented, and Europe has also done a good job integrating its existing GDPR framework with AI-related issues.

In general, companies should also look at their existing regulatory obligations. Even if those rules weren't written with AI in mind, they often still apply. AI might just make certain activities faster or easier.

In the U.S., the situation is more complicated because there isn't a unified national approach. There's a mix of federal and state guidance that sometimes conflicts. But over time, I expect U.S. law to move closer to Europe's approach, especially since many U.S. companies need to operate in Europe.

How can CIOs ensure their AI policies meet compliance requirements?

McGee: Legal liability and regulatory compliance are real issues. In industries like healthcare and financial services, regulations may require human oversight or prohibit biased decision-making. AI systems can make those requirements harder to meet.

Organizations need to ask whether certain decisions should involve AI at all. Even if AI seems convenient, it may introduce risks if it removes required human judgment or introduces bias.

Regular audits are also important, and companies should make sure their vendors are trustworthy.

The key takeaway is that organizations should be proactive. Even as regulations continue to evolve, companies should follow recognized frameworks and best practices to demonstrate they’re acting responsibly.

Harriet Jamieson is a senior manager of custom content and writer for the IT Strategy team at TechTarget.
