AI and compliance: Which rules exist today, and what's next?

The AI regulatory landscape is still racing to catch up with the fast pace of industry and technological developments, but a few key themes are starting to emerge for businesses.

AI regulation and compliance is a complex, fast-evolving topic, not least because compliance rules vary across industries and jurisdictions. No one can predict exactly how AI regulations will change, and determining which rules apply to AI today can be a real challenge for businesses hoping to implement and manage AI systems.

While robust AI-focused compliance mandates might not exist yet, jurisdictions such as China and the European Union are starting to design AI-specific regulations. In addition, certain laws already in place have clauses that regulators are currently interpreting for AI.

With varying and sometimes vague rules, it's evident that AI regulation and compliance is uncharted territory. Still, it's possible to identify overarching trends. AI and machine learning engineers, as well as businesses seeking to use AI, can implement measures such as data privacy and transparency today to help ensure future AI compliance.

Current AI compliance rules

Regulations and compliance mandates that affect AI can be broken into two categories: those that focus on AI specifically, and those that affect AI but were not specifically designed for it.

AI-centric regulations

As of early 2024, few compliance laws designed specifically for AI are in effect. Perhaps the only major example is China's Interim Measures for the Management of Generative AI Services, which regulate how generative AI can be used in the country.

However, China's Interim Measures are relatively generic and nontechnical. They encourage AI developers to ensure that content produced by generative AI is "positive, healthy, inspiring, and morally wholesome," per Lexis China's translation, but they do not attempt to govern the specific design or operation of generative AI models.

Meanwhile, EU legislators are working toward landmark AI regulation in the form of the AI Act, which was introduced in 2021 but is not yet in effect. This law would ban the use of AI in the EU for certain purposes, such as facial recognition of people in public, and impose security and privacy requirements for AI-based apps in other contexts.

The U.S. currently lacks anything resembling a national AI compliance framework, even in draft form. However, in October 2022, the U.S. federal government published a Blueprint for an AI Bill of Rights, which lays out principles designed to protect individuals against the misuse of AI. Although this guidance could influence future AI legislation in the U.S., it currently does not impose any specific requirements related to AI.

Regulations and compliance laws that affect AI

Most existing regulatory rules that affect AI come from laws that, although not initially designed for AI, have implications for how AI can be used.

One key law is the EU's GDPR, which has been in effect since 2018 and imposes a variety of requirements related to data privacy and security. Because AI models rely extensively on data for training and operational purposes, the GDPR has important consequences for AI.

For example, the GDPR includes a provision that grants individuals the right to be forgotten by having their personal data erased. As a result, a business that uses customers' personal data to train its AI models would need to ensure that it can remove the data associated with specific customers upon request.
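
To make that concrete, here is a minimal Python sketch of how an erasure request might flow through a training-data store. The table name, schema and helper functions (purge_customer, schedule_retraining) are illustrative assumptions, not part of the GDPR or any real system; note that deleting raw records is often not enough, because a model already trained on the data may need retraining or machine unlearning as well.

    import sqlite3

    # Hypothetical table of training examples keyed by customer ID.
    TRAINING_TABLE = "training_examples"

    def purge_customer(conn: sqlite3.Connection, customer_id: str) -> int:
        """Delete all training records tied to one customer; return the count."""
        cur = conn.execute(
            f"DELETE FROM {TRAINING_TABLE} WHERE customer_id = ?", (customer_id,)
        )
        conn.commit()
        return cur.rowcount

    def schedule_retraining(reason: str) -> None:
        """Placeholder for a real retraining job queue."""
        print(f"Retraining queued: {reason}")

    def handle_erasure_request(conn: sqlite3.Connection, customer_id: str) -> None:
        """Honor a right-to-erasure request against the training store."""
        if purge_customer(conn, customer_id):
            # Removing the records alone may not satisfy the regulation if a
            # trained model still encodes the deleted data.
            schedule_retraining(reason=f"erasure request for {customer_id}")

    # Demo with an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE {TRAINING_TABLE} (customer_id TEXT, features TEXT)")
    conn.execute(f"INSERT INTO {TRAINING_TABLE} VALUES ('c-123', 'example features')")
    handle_erasure_request(conn, "c-123")  # prints: Retraining queued: ...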

Another example is the California Privacy Rights Act (CPRA), which went into full effect in 2023. The CPRA includes language requiring certain businesses to disclose when they use algorithms to make automated decisions about people. CPRA regulators are currently discussing how to interpret and enforce that rule as it applies to AI and machine learning. At a minimum, it appears likely that businesses subject to the CPRA will need to state whenever they use AI to make decisions that affect people.

Emerging themes in AI regulation

Because few AI regulations are currently in effect -- and most that are remain subject to interpretation -- it's too early to say exactly how AI compliance will affect businesses at a large scale. However, existing rules and draft legislation suggest that a few key themes will dominate.

Disclosure is key

No matter how a company uses AI, simply stating that AI technology is in use will likely be an important component of AI regulation. Although regulations such as the GDPR and CPRA don't impose specific mandates on how AI can be used, they do require companies to state that they are using it, at least when it affects individuals.
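
As a rough sketch of what such a disclosure can look like in practice, the hypothetical Python example below attaches a plain-language automated-decision notice to every decision an application returns. The Decision fields and the score_applicant logic are invented for illustration; the exact wording and triggers a given law requires will vary by jurisdiction.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Decision:
        approved: bool
        automated: bool  # was AI or an algorithm used to reach this decision?
        disclosure: str  # plain-language notice shown to the affected person

    def score_applicant(income: float, debt: float) -> Decision:
        """Stand-in for a real model: approve if debt-to-income is under 40%."""
        approved = (debt / max(income, 1.0)) < 0.4
        return Decision(
            approved=approved,
            automated=True,
            disclosure=(
                "This decision was made using an automated system. "
                "You may request more information or human review."
            ),
        )

    print(json.dumps(asdict(score_applicant(income=50_000, debt=15_000)), indent=2))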

AI risk level affects regulations

Some AI regulatory frameworks, such as the EU AI Act, take a risk-based approach to regulation. AI use cases that regulators deem high risk, such as those that affect healthcare or personal privacy, are subject to stricter regulations than lower-stakes AI deployments, such as personalized product recommendations on a retail website. This means that regulators might require different protections for different AI systems, depending on which use cases the systems support.
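
One way to picture this internally is a registry that maps each AI use case to a risk tier and the controls that tier demands. In the Python sketch below, the tiers, use cases and required controls are illustrative assumptions loosely inspired by the AI Act's categories, not the legal text itself.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Assumed mapping of use cases to tiers; a real registry would be curated
    # with legal counsel against the applicable regulation.
    USE_CASE_TIERS = {
        "public_facial_recognition": RiskTier.UNACCEPTABLE,
        "medical_diagnosis": RiskTier.HIGH,
        "credit_scoring": RiskTier.HIGH,
        "customer_chatbot": RiskTier.LIMITED,
        "product_recommendations": RiskTier.MINIMAL,
    }

    REQUIRED_CONTROLS = {
        RiskTier.UNACCEPTABLE: ["do not deploy"],
        RiskTier.HIGH: ["risk assessment", "human oversight", "audit logging"],
        RiskTier.LIMITED: ["user disclosure"],
        RiskTier.MINIMAL: [],
    }

    def controls_for(use_case: str) -> list[str]:
        # Default unknown use cases to the high-risk tier, conservatively.
        tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
        return REQUIRED_CONTROLS[tier]

    print(controls_for("medical_diagnosis"))
    # ['risk assessment', 'human oversight', 'audit logging']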

AI laws lack technical specificity

To date, no major AI regulatory framework or draft rule has attempted to define technical parameters for designing AI systems. There are no compliance mandates about which types of AI models are acceptable, for example, or which frameworks AI and machine learning engineers can use. Based on current frameworks, future regulations seem likely to allow any technical approach if it conforms to high-level privacy and security mandates.

Regulations don't distinguish among types of AI

Similarly, almost no existing laws distinguish among different types of AI, such as generative AI and predictive AI; China's mandates on generative AI are an exception. Instead, most regulations aim to apply the same rules to all types of AI, which means that the type of AI system a company chooses is not likely to affect which AI regulations it needs to comply with.

Enforcement remains unclear

Many AI regulations specify maximum penalties; for example, companies that violate the EU AI Act could face fines of up to 35 million euros. But the key phrase there is "up to," and it remains unclear how aggressively regulators will pursue violators and enforce those penalties. Some observers have complained about lax enforcement of laws such as the GDPR, and it's possible that AI-focused regulations will similarly impose rules backed by limited enforcement action.

The bottom line on AI regulation

For companies seeking to take advantage of AI, it's not yet clear exactly how the technology will be regulated. The situation could look very different a year or two from now, as more compliance laws go into effect and early enforcement of those laws sets precedents.

For now, it seems a safe bet that businesses will need to prioritize data privacy and transparency, especially in cases where they use personal data to help train AI models or drive automated decision-making. In addition, because different AI use cases could be subject to different levels of regulation based on risk, businesses that operate in industries that regulators consider high risk -- such as finance and healthcare -- will probably face stricter requirements than others.
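
As one example of what prioritizing data privacy can mean at the pipeline level, the hedged Python sketch below pseudonymizes a customer identifier with a keyed hash and drops direct identifiers before a record enters training. The field names and key handling are assumptions for illustration, and pseudonymization alone does not guarantee GDPR or CPRA compliance.

    import hashlib
    import hmac

    # Assumption: a managed secret, stored in a vault and rotated periodically.
    SECRET_KEY = b"example-key-stored-in-a-vault"

    def pseudonymize(value: str) -> str:
        """Replace an identifier with a stable keyed hash (HMAC-SHA256)."""
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    def scrub_record(record: dict) -> dict:
        """Strip or pseudonymize personal fields before training use."""
        cleaned = dict(record)
        cleaned["customer_id"] = pseudonymize(record["customer_id"])
        cleaned.pop("email", None)  # drop direct identifiers entirely
        cleaned.pop("name", None)
        return cleaned

    print(scrub_record({"customer_id": "c-123", "email": "a@b.com", "purchases": 7}))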

Chris Tozzi is a freelance writer, research adviser, and professor of IT and society who has previously worked as a journalist and Linux systems administrator.
