How Axis Communications navigates global AI regulation
As AI rules evolve, compliance grows more complex. CIO Jonas Hansson encourages IT leaders to assess data risk and track vendor sub-processors to stay compliant.
Axis Communications' CIO suggests the following best practices for deploying AI safely:
Build a common approach that works across regions instead of reacting to each region separately.
Use a risk-based framework that adjusts controls based on how sensitive each AI use case is.
Treat compliance as a competitive advantage.
Bring legal, engineering and business teams together to align on how AI systems are deployed.
Don't overlook sub-processors in vendor agreements.
Jonas Hansson, CIO of Axis Communications, a Swedish security systems company, sees AI governance as a competitive advantage rather than a compliance burden.
As AI adoption expands across regions, organizations face a mix of evolving requirements that can conflict with one another. This environment poses challenges, but Hansson does not view compliance as a constraint -- he sees it as a strategic differentiator that sets his company apart from competitors.
Hansson applies a consistent governance framework across regions and adjusts controls based on the risk level of each AI use case. He works closely across legal, engineering and business teams to align how the company builds and deploys AI. A key part of that alignment involves tracking how data moves through vendors and their sub-processors, since information can pass beyond the primary provider and introduce additional exposure.
In the following interview, Hansson explains how CIOs can navigate complex regulatory environments while deploying AI.
Editor's note: The following transcript was edited for length and clarity.
How would you describe the current AI regulation debate?
Jonas Hansson: The discussion centers on the speed and strictness of regulation versus the flexibility needed for experimentation and innovation. Enterprises need predictability, so many industry voices support clear, long-term and harmonized frameworks. I support scaling AI safely without having to navigate conflicting rules, but at the same time, conflicting rules are somewhat unavoidable.
Multiple laws can govern how the same information is managed, yet they may contradict each other. As a result, organizations must navigate a complex landscape of state, national and even bilateral requirements.
At the same time, some legislation is increasingly risk-based, which is important for us. Risk-based legislation has helped us act as a value-driven company, being transparent about what we do and engaging with the trade-offs for our company, for individuals and for society. It also pushes us to examine key aspects of AI -- such as bias and fairness -- and how we address those issues in practice.
How do you handle conflicting regulations as a CIO?
Hansson: We employ staff in more than 50 countries and sell in 180 countries worldwide. So, our approach must work in a global context. We try to find a common denominator that goes beyond the regulation in a way that is workable in the long term and across many regions.
This requires us to guess where the world is headed, but it's also about reproducibility, which is challenging. In many cases, AI is probabilistic. We often need to make it more deterministic to avoid hallucinations. So, it's not just about legislation -- it's also about taming the tech.
Which AI regulations will most affect enterprises in the next 12 to 24 months?
Hansson: Both technology and tech-related legislation are evolving quickly, and the answer often depends on the markets a company operates in. We closely track U.S. state-level legislation, but we also monitor the global regulatory landscape.
In the U.S., there is a combination of state-level laws alongside federal efforts, including executive orders and broader attempts to coordinate AI policy at the federal level. Several states are introducing their own AI-related legislation -- for example, Colorado has focused on discrimination, California on transparency and Texas on governance. Many of these are enforceable laws that organizations must comply with.
Internationally, the EU has taken a leading role with the EU AI Act, which companies operating in the EU must follow. In the UK, legislation is being further refined, while countries such as Brazil are introducing frameworks that are similar in approach to the EU. Japan and China also have their own regulations. Globally, more than 70 countries have some form of AI policy, according to OECD data, although only a subset currently have binding laws.
The overall landscape requires continuous monitoring. What we see across many of these regulations is a move toward risk-based approaches, with an emphasis on principles such as transparency, accountability and alignment with privacy. These are the areas we focus on most closely.
What does risk-based regulation mean?
Hansson: It means assessing an AI use case based on its risk level and applying the requirements accordingly. For example, a mobile phone's face detection is typically low risk because it doesn't identify individuals. In contrast, using biometric data to identify a person poses a higher risk and is treated differently under frameworks such as the EU AI Act. The approach depends on how the AI is used and the potential impact of each use case.
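The tiering Hansson describes can be expressed in code. Below is a minimal sketch of a risk-based classifier in Python; the tier names, criteria and control lists are illustrative simplifications loosely inspired by the EU AI Act's structure, not an actual legal mapping.

```python
from dataclasses import dataclass

# Illustrative tiers and controls; real classification under
# frameworks such as the EU AI Act is far more detailed.
CONTROLS = {
    "minimal": ["logging"],
    "limited": ["logging", "transparency notice"],
    "high":    ["logging", "transparency notice",
                "human oversight", "bias and fairness review"],
}

@dataclass
class UseCase:
    name: str
    identifies_individuals: bool   # e.g. biometric identification
    affects_rights_or_safety: bool
    interacts_with_people: bool

def risk_tier(uc: UseCase) -> str:
    """Assign an illustrative risk tier to an AI use case."""
    if uc.identifies_individuals or uc.affects_rights_or_safety:
        return "high"
    if uc.interacts_with_people:
        return "limited"
    return "minimal"
```

Under this sketch, on-device face detection that never identifies anyone lands in the minimal tier, while biometric identification is classified high and picks up controls such as human oversight.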
What role should the CIO play in preparing their organization for AI regulation?
Hansson: I think about the CIO's role in three dimensions. About 12 years ago, we started experimenting with what I'd call 'legal tech' or 'tech-enabled legal collaboration' -- bringing together legally skilled engineers and tech-savvy lawyers. We need both competencies. We first relied on this combination to manage personal data legislation, and the same skill set is relevant as we navigate AI regulation.
In that context, the CIO's role is to act as a chief enabler, translating regulatory expectations into scalable, secure technical architecture while ensuring legal teams understand their obligations. It's about connecting disciplines and making things work in practice across regions in a way that is compliant, transparent and trustworthy.
A second role is around risk awareness and education. That includes helping the organization understand the risks regulations are designed to address, identifying common risk patterns across regions and building internal awareness and skills. The goal is not to create hesitation, but to support adoption by helping people understand the complexity and use AI responsibly.
The third dimension is values. Technology reflects a company's values, and as a CIO, you act as a values ambassador. It's important to embed AI ethics into the organization's culture as an ongoing effort, continuously discussing ethics, priorities and accountability. Responsibility remains with us as an organization -- it's not possible to blame AI if something goes wrong. That's why we focus on human oversight and accountability.
How can CIOs ensure their AI initiatives are compliant without slowing innovation?
Hansson: It's a tricky question because slowing innovation depends on the type of innovation you're pursuing. Most companies are not training their own AI models but instead rely on third-party models. In that context, the key is to look closely at the information being used.
In practice, organizations often take a multi-model approach, using different AI models depending on what they are best suited for and what level of data can be shared. That's what we do. We use a range of AI models, deployed either in-house or in different environments, and match them to specific use cases based on their strengths and limitations, as well as the sensitivity of the data involved.
We start with the information itself and assess its sensitivity. Highly sensitive data is kept in-house, while less critical data may be used in cloud environments where appropriate. From there, we map the level of risk to the data and determine what is acceptable to use and where.
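The routing logic Hansson outlines, classifying data by sensitivity and matching it to a deployment environment, can be sketched as a simple policy function. The sensitivity levels and deployment names below are hypothetical placeholders, not Axis's actual setup.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # least critical data
    INTERNAL = 2    # business data with contractual protections
    RESTRICTED = 3  # highly sensitive data, kept in-house

def choose_deployment(sensitivity: Sensitivity) -> str:
    """Map data sensitivity to an illustrative model deployment target."""
    if sensitivity is Sensitivity.RESTRICTED:
        return "in-house model"       # sensitive data never leaves the company
    if sensitivity is Sensitivity.INTERNAL:
        return "private cloud model"  # contractual and technical controls apply
    return "public cloud model"       # acceptable for least critical data
```

A multi-model setup like the one described in the interview would layer further criteria on top, such as each model's strengths for the task, but the data-sensitivity check comes first.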
Experts expect AI regulation to become more stringent over time. How would this affect enterprise AI adoption?
Hansson: For us, predictability is what matters most. A stable regulatory environment is far more beneficial than one that is constantly changing or highly fragmented. Whether the rules are strict or not is less of an issue.
Are the constantly changing and conflicting regulations a nightmare to deal with?
Hansson: To some extent, yes, but I'd like to change the perspective on compliance. I think of compliance as a competitive edge. It's difficult to be good at, but if we are, it will give us the edge in so many markets. So, we aim not just to catch up with legislation, but to be ahead of it globally. This is one way we stay ahead as an innovation leader.
How should CIOs evaluate AI vendors to minimize compliance and regulatory risk?
Hansson: CIOs need to take agreements seriously because they must balance two kinds of protection. The first is the agreement risk -- whether agreements properly protect your data through limitations of liability, non-disclosure terms and clear usage boundaries. That's a contractual matter. The second is the real risk -- how you deploy technical protective measures to ensure information is encrypted, hashed or otherwise difficult to obtain.
Additionally, many companies miss the super unsexy part of agreements called sub-processors. Many vendors use sub-processors -- third-party companies integrated into their services -- and often want to change sub-processors at their own discretion. If your agreements don't account for how the primary vendor and its sub-processors handle your data, the information could be exposed further down the chain.
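Tracking sub-processor changes can be automated with a simple diff against an internally approved list. The sketch below assumes a hypothetical workflow in which each vendor's published sub-processor list is compared to the set your legal and security teams have reviewed; the vendor names are made up.

```python
def unapproved_subprocessors(vendor_list: set[str],
                             approved: set[str]) -> set[str]:
    """Flag sub-processors the vendor currently uses
    that were never reviewed and approved internally."""
    return vendor_list - approved

# Hypothetical example: the vendor quietly added a new sub-processor.
current = {"CloudHost Inc", "AnalyticsCo", "NewSubProc Ltd"}
reviewed = {"CloudHost Inc", "AnalyticsCo"}
flagged = unapproved_subprocessors(current, reviewed)
# flagged now contains the unreviewed entry, which should
# trigger a contractual and data-protection review.
```

Running such a check whenever a vendor updates its sub-processor page is one lightweight way to catch the discretionary changes Hansson warns about.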
How can CIOs build cross-functional awareness for AI governance?
Hansson: Internally, we established an AI team focused on generative AI shortly after the emergence of tools like ChatGPT. From the beginning, we asked the entire company to get involved. We saw strong interest across the organization, so instead of having individuals work on this part-time in isolation, we asked them to contribute to a shared, global initiative. By pooling resources, we were able to build a much stronger capability through collaboration.
We also brought in ambassadors from every department and region to help shape priorities and guide what we build. This ensures that use cases come from across the organization and are relevant to multiple functions.
Tim Murphy is site editor for Informa TechTarget's IT Strategy group.