
Enforcement remains a concern for EU AI Act

The EU AI Act is approaching finalization, and while there could be enforcement challenges down the road, businesses need to prepare.

The pre-final text of the European Union's AI Act leaked online Monday, revealing the EU's long-debated plans for AI regulation and raising concerns about its enforcement mechanisms.

The EU AI Act aims to promote the EU's "human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical artificial intelligence," according to a consolidated 258-page version of the EU AI Act text posted on LinkedIn by Laura Caroli, senior policy adviser at the European Parliament.

The anticipated adoption of the EU AI Act this year means businesses must start planning for compliance, but Gartner analyst Avivah Litan questions how the EU plans to enforce these rules.

Litan said she's glad the EU is advancing regulation, particularly as tools including generative AI have begun to worsen existing issues such as disinformation. However, with the proliferation of AI, Litan said the EU will run into issues trying to regulate all the different AI models out there.

"The enforcement is going to be really difficult," she said.

EU AI Act will face enforcement issues

The EU's approach to regulating AI includes categorizing AI models as unacceptable risk, high risk, limited risk and minimal risk.


While the EU might not run into many challenges outright banning AI models in the unacceptable category, including untargeted facial image scraping from the internet for facial recognition databases, Litan said it remains to be seen how the EU will grapple with regulating models in the other categories.

It's going to be "impossible to regulate" high-risk algorithms, she said.

The EU plans to set up an AI Office specifically to oversee the EU AI Act. However, consistency in enforcement across member states might be a challenge, as seen with previous laws such as the GDPR, said Ashley Casovan, managing director of the International Association of Privacy Professionals' AI Governance Center.

"The idea of having one kind of central office that will help to support and provide some of that consistency will be really interesting," she said.

Financing the AI Office and supplying it with the resources and expertise needed to oversee general-purpose AI systems and coordinate enforcement could also be an issue going forward, Casovan said.

Five things to know about the EU AI Act.

Businesses need to plan for compliance

The EU AI Act will apply to any business whose AI models affect EU citizens, meaning a model doesn't have to operate solely in the EU to fall within the law's scope.

There are particularly strict requirements for the high-risk category, which covers systems such as AI-enabled job recruiting or border patrol tools. Meanwhile, minimal-risk AI models won't face stringent requirements, but will be asked to abide by a voluntary code of conduct to ensure the models are safe and unbiased, Litan said.

"The first thing companies need to do is figure out what category they're in and try to get into the minimal-risk category if they can," she said. "Be prepared to make your systems more transparent."

Indeed, organizations must compile an inventory of their AI systems and start designing their own processes for classifying those systems and assessing the risks of each use case, Forrester Research analyst Enza Iannopollo said.
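As a rough illustration of what such an inventory and classification exercise might look like, here is a minimal sketch in Python. The four risk tiers mirror the act's categories as described above; the system names, fields and tier assignments are hypothetical examples, not official tooling or guidance.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystem:
    """One entry in an organization's AI inventory (illustrative fields only)."""
    name: str
    use_case: str
    affects_eu_persons: bool
    risk_tier: RiskTier


# Hypothetical inventory, classified using use cases mentioned in this article.
inventory = [
    AISystem("resume-screener", "job recruiting", True, RiskTier.HIGH),
    AISystem("support-chatbot", "customer service", True, RiskTier.LIMITED),
    AISystem("spam-filter", "email filtering", True, RiskTier.MINIMAL),
]

# Surface in-scope systems, highest-risk first, as a starting point for assessment.
in_scope = [s for s in inventory if s.affects_eu_persons]
for system in sorted(in_scope, key=lambda s: list(RiskTier).index(s.risk_tier)):
    print(f"{system.name}: {system.use_case} -> {system.risk_tier.value} risk")
```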

Iannopollo said it will take a lot of trial and error before organizations settle on an approach they're comfortable with for meeting the EU AI Act's requirements, and the time to start that work is now.

"There are resources available to support companies in these endeavors, but not official tools or approaches," she said.

The EU AI Act will also prompt enterprises to invest more money and resources into proving that they're adequately addressing bias and fairness, Gartner's Litan said. Companies will also have to avoid prohibited AI uses, such as social scoring, which involves surveilling consumer behavior to form a profile.

"It's going to be pretty disruptive to companies like Meta," Litan said.

Once the EU AI Act is adopted and published in the Official Journal of the European Union, it will enter into force 20 days after publication, according to the leaked EU AI Act text. The law will then apply in phases: after six months for AI models in the unacceptable-risk category, 24 months for general-purpose AI systems and 36 months for high-risk AI models. The act also establishes fines for noncompliance.
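To make that phase-in arithmetic concrete, the sketch below computes application dates from a publication date. The publication date is a placeholder, and the 6/24/36-month offsets simply restate the leaked text as reported above; this is not an official compliance calendar.

```python
from datetime import date, timedelta


def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day-of-month clamping omitted for simplicity)."""
    month_index = d.month - 1 + months
    return d.replace(year=d.year + month_index // 12, month=month_index % 12 + 1)


# Placeholder publication date in the Official Journal (illustrative only).
publication = date(2024, 7, 1)

# The act enters into force 20 days after publication.
entry_into_force = publication + timedelta(days=20)

# Phased application periods as reported from the leaked text, in months.
phases = {
    "unacceptable-risk AI models": 6,
    "general-purpose AI systems": 24,
    "high-risk AI models": 36,
}

for category, months in phases.items():
    print(f"{category}: rules apply from {add_months(entry_into_force, months)}")
```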

One of the positives of the EU AI Act is that it regulates use cases rather than the technology itself, meaning risk management is at the core of the regulation, Iannopollo said.

"This also means that, to a certain extent at least, the requirements will remain relevant despite the speed at which the underlying technology continues to evolve and change," she said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

Next Steps

Everything you need to know about the new EU AI Act
