Everything you need to know about the new EU AI Act

The European Union's new AI Act defines AI regulations based on risk and outlines hefty fines for noncompliance. Explore the details of the AI Act and how it could apply to you.

The new European Union Artificial Intelligence Act establishes a framework for technology vendors and organizations operating within the EU, outlining how they should build, implement and govern their AI practices. The rules begin taking effect in phases, starting six months after the act enters into force, and -- like GDPR, which had a major effect on how organizations managed data in the EU -- there will be steep penalties for noncompliance.

On March 13, 2024, the European Parliament adopted the AI Act, considered the world's first legal framework for AI. The legislation establishes EU-wide rules on data quality, transparency, human oversight and accountability. With stringent requirements and potential fines of up to 35 million euros or 7% of global annual revenue -- whichever is higher -- the AI Act is set to profoundly impact many companies conducting business in the EU.
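The "whichever is higher" penalty ceiling works out as a simple maximum of a flat amount and a revenue percentage. The sketch below illustrates the act's top penalty tier; the function name and structure are illustrative, not taken from the legislation:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Top-tier AI Act penalty ceiling: 35 million euros or 7% of
    global annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# For a company with 1 billion euros in revenue, 7% (70M) exceeds the flat amount:
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a smaller company, the flat 35 million euro ceiling applies instead:
print(max_fine_eur(100_000_000))    # 35000000
```

Because the ceiling scales with revenue, large multinationals face exposure far beyond the 35 million euro floor.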

The European Commission issued its AI Act proposal in April 2021. With the recent European Parliament vote, the legislative process is nearly complete. The AI Act will go into effect 20 days after its publication in the Official Journal, which is expected in May or June 2024. Most of its provisions will become applicable two years after the AI Act goes into effect. However, the provisions concerning prohibited AI systems will take effect after six months, and those regarding generative AI will apply after 12 months.
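The staggered application dates above can all be computed from the entry-into-force date, whatever it turns out to be. The following stdlib-only sketch uses a placeholder date purely for illustration:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day for shorter months."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)
print(add_months(entry_into_force, 6))   # prohibited-AI provisions apply
print(add_months(entry_into_force, 12))  # generative AI provisions apply
print(add_months(entry_into_force, 24))  # most remaining provisions apply
```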

How does the Commission define AI?

The AI Act's definition of the term AI is inspired by the widely accepted Organization for Economic Co-operation and Development definition. It focuses on two key characteristics of AI systems: (1) They operate with varying levels of autonomy, and (2) they process input data to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments.

The European Commission emphasizes the distinct nature of AI compared to other computer systems in the following statement: "The capacity of an AI system to infer transcends basic data processing and enables learning, reasoning, or modeling. The term machine-based refers to the fact that AI systems run on machines."

The full text from Article 3(1) reads as follows: "'AI system' means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Who does the AI Act apply to?

The AI Act applies to providers of AI systems, meaning companies that develop and market AI systems or provide such systems under their own name or trademark, whether for payment or free of charge. Additionally, the AI Act covers importers and distributors of AI systems within the EU. Importantly, it also extends to "deployers," defined as natural or legal entities using AI under their authority in their professional activities.

Where does the AI Act apply?

The AI Act has a significant extraterritorial effect, as it applies to providers who introduce or operate AI systems in the EU market, regardless of where they are established or located. It also governs providers or deployers established or located outside the EU if their system's output is used in the EU. The AI Act also covers deployers, importers and affected individuals located in the EU, though its treatment of distributors is less clearly defined.

The AI Act excludes AI specifically developed and used for scientific research and development. It also does not cover any research, testing and development activity related to AI before it is placed on the market or put into service. However, this exemption does not apply to real-world testing. In addition, the AI Act does not apply to systems released under free and open source licenses unless such systems qualify as high-risk, prohibited or generative AI.

The EU approach to AI regulation

The AI Act relies on a risk-based approach, which means that different requirements apply in accordance with the level of risk.

Unacceptable risk. Under the AI Act, certain AI practices are prohibited and considered a clear threat to fundamental rights. In this category, the legislation lists AI systems that manipulate human behavior or exploit individuals' vulnerabilities -- e.g., age or disability -- with the objective or the effect of distorting their behavior. Other examples of prohibited AI include biometric systems, such as emotion recognition systems in the workplace or real-time categorization of individuals.

High risk. AI systems identified as high risk will need to comply with strict requirements. These include risk-mitigation systems; high-quality data sets; activity logging; detailed documentation; clear user information; human oversight; and a high level of robustness, accuracy and cybersecurity. Examples of high-risk AI systems include those used in critical infrastructure, such as energy and transport; in medical devices; and in determining access to educational institutions or jobs.

Limited risk. Providers must ensure that AI systems intended to interact with humans directly, such as chatbots, are designed and developed in such a way that individuals are informed that they are interacting with an AI system. Similarly, deployers of AI systems that generate or manipulate deepfakes must disclose that the content has been artificially generated or manipulated.

Minimal risk. There are no restrictions on minimal-risk AI systems, such as AI-enabled video games or spam filters. Companies can, however, commit to voluntary codes of conduct.
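The four tiers above can be summarized as a simple lookup, which can be handy when triaging systems in a compliance review. The tier names and obligation summaries paraphrase the act; the mapping itself is an illustrative sketch, not an official taxonomy:

```python
# Illustrative mapping of the AI Act's risk tiers to their headline obligations.
RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g., behavioral manipulation, "
                    "workplace emotion recognition).",
    "high": "Strict requirements: risk mitigation, data quality, logging, "
            "documentation, human oversight, robustness.",
    "limited": "Transparency duties: disclose AI interaction and "
               "AI-generated or manipulated content.",
    "minimal": "No restrictions; voluntary codes of conduct "
               "(e.g., spam filters, AI-enabled video games).",
}

def obligations(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligations("high"))
```

In practice, classifying a real system into a tier requires legal analysis of its intended purpose, not a dictionary lookup; the sketch only captures the structure of the regime.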

General-purpose AI models and generative AI

A chapter on general-purpose AI models was added to the AI Act during the negotiations. The legislation now differentiates among general-purpose AI models, a subcategory of general-purpose AI models with systemic risk, and general-purpose AI models with high-impact capabilities.

The AI Act's relationship with GDPR

EU law on protecting personal data, privacy and the confidentiality of communications will apply to processing personal data in connection with the AI Act. The AI Act does not affect the General Data Protection Regulation (Regulation 2016/679) or the ePrivacy Directive 2002/58/EC -- see Article 2(7).

As with any emerging market, the maturation of the AI market has spurred regulations and standards that serve as guidelines to steer it in a positive direction. The AI Act represents the first set of AI governance rules, and other countries are likely to follow with modified versions of their own. Equally important will be self-regulation by organizations and market-driven guidance from consumers, who will shape the industry by deciding which products they find favorable, trustworthy and accurate as AI becomes more prevalent in their work and personal lives.

Stephen Catanzano is a senior analyst at TechTarget's Enterprise Strategy Group, where he covers data management and analytics.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.
