
FTC pursues AI regulation, bans biased algorithms

The agency tries to regulate how businesses use AI algorithms by enforcing the Fair Credit Reporting Act, Equal Credit Opportunity Act and FTC Act. Critics want more regulation.

As AI makes dramatic inroads in enterprises, the U.S. government has quietly started to regulate the use of AI in the consumer credit industry and other areas by banning the use of biased and unexplainable algorithms in decisions that affect consumers.

In recent years, the Federal Trade Commission has tried to regulate AI in lending with laws that are already in place, chief among them the Fair Credit Reporting Act (FCRA). The federal agency has also included AI regulation under the FTC Act and the Equal Credit Opportunity Act (ECOA).

While the federal laws don't contain explicit language regulating AI, the FTC, which enforces the laws, has issued guidance over the last two years stipulating that under the FCRA, lenders can't use biased or unexplainable algorithms in decisions about not only consumer credit, but also employment, housing, insurance and other benefits.

The FTC has also clarified that the sale or use of racially biased algorithms, for example, is a deceptive practice banned by the FTC Act.

The agency's stance is the first concrete effort by the U.S. government to regulate AI. However, in Europe, where the GDPR has regulated how businesses use data and software since 2018, the European Commission in February published a wide-ranging proposed framework for AI regulation.


The FCRA

Chief among the federal laws that the FTC has designated as applying to AI algorithms is the FCRA. 

Established in 1970, the FCRA protects information collected by consumer reporting agencies. These include banks, credit bureaus, mortgage lenders, medical information companies and tenant screening services.

The law bans companies from giving information about consumers to anyone without a legitimate reason to have it. It also requires credit, insurance and employment agencies to notify consumers when an adverse action is taken against them, such as rejecting a loan application.

More than 50 years old, the law does not directly address AI.

"When it was written in 1970, people weren't contemplating AI," said Peter Schildkraut, a lawyer specializing in technology at the Arnold & Porter law firm.

Only in more recent years has the FTC begun to apply the law's language and enforcement tools to AI and to companies that use data extensively.

In an April 2020 guidance blog post, "Using Artificial Intelligence and Algorithms," the FTC warned businesses that use AI that the agency can use the FCRA to prevent the misuse of data and algorithms to make decisions about consumers.

In the blog, Andrew Smith, director of the FTC's Bureau of Consumer Protection, wrote that "the use of AI tools should be transparent, explainable, fair and empirically sound." Smith also advised organizations to validate their algorithms to ensure they are unbiased and to disclose key factors their algorithms use to assign risk scores to consumers, among other things.
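
One way to put Smith's advice about validating algorithms into practice is to compare a model's outcomes across demographic groups. The following sketch is a hypothetical illustration of such a check in Python, using made-up decision data and the widely cited, but not legally binding, four-fifths threshold; it is not drawn from the FTC guidance itself.

```python
# A minimal, hypothetical sketch of one bias-validation step: checking whether a
# credit model's approval rates differ across demographic groups (the
# "adverse impact ratio," often compared against the four-fifths rule).
# The data, column names and threshold are illustrative assumptions,
# not part of any FTC guidance.

import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's approval rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical model decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

ratios = adverse_impact_ratio(decisions, "group", "approved")
print(ratios)

# A common (but not legally definitive) heuristic flags groups whose ratio
# falls below 0.8 for further review of the model and its training data.
flagged = ratios[ratios < 0.8]
print("Groups needing review:", list(flagged.index))
```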

The FTC guidance also noted that the agency enforces the ECOA -- which prohibits credit discrimination based on race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance -- as well as the employment provisions of the Civil Rights Act of 1964. Smith's guidance applied the FTC's authority under those laws to prohibit AI-based discrimination against such "protected classes."

The guidance also specified that companies that use data to make loans, or that furnish information to agencies making decisions about consumer credit, employment, housing and government benefits, must comply with the FCRA, even if they don't know or believe the law applies to them.

The FTC did not respond to a request for information about cases it has acted on involving misuse of AI algorithms.

However, in a 2018 action against RealPage, a real estate software company whose screening tools match housing applicants to criminal records, among other things, the FTC claimed RealPage didn't take proper steps to ensure the data it provided to landlords and property managers was correct.

The FTC said that from 2012 to 2017, RealPage's data wrongly matched some applicants to criminal records that did not belong to them. As a result, some applicants were denied housing and other opportunities. The company paid the federal agency $3 million to settle the charges.

Limitations of the FCRA

While the RealPage case shows the FTC regulating how companies use data, if not algorithms specifically, the agency is constrained by the limits of its own authority.


"The FTC is a law enforcement agency," Schildkraut said, noting that it can only act on complaints or discoveries of violations. "It's not providing prescriptive regulation saying you need to do 'X, Y, Z' in order to comply with the law because the United States doesn't have a generally applicable statute saying that if you're using artificial intelligence, you've got to have a bias impact statement or any other particular feature."

However, Schildkraut added that this approach fits with what the FTC has done in the past to enforce other provisions of the law -- responding to complaints about alleged violations such as misleading consumers or failing to provide disclosures and harming consumers.

"The FCRA is definitely not a comprehensive AI regulatory framework," said John Davisson, senior counsel at the Electronic Privacy Information Center. "It's not imposing clear fairness obligations or requiring companies that use AI to validate the tools that they're using or even imposing nondiscrimination requirements."

Kashyap Kompella, analyst at RPA2AI Research, said in a previous interview with SearchEnterpriseAI that in the absence of regulation, many organizations fail to meet the core principles of AI ethics: safety, accountability, transparency and trustworthiness.

"The current stage in the industry is that these are self-regulatory," Kompella said in the interview.

Because oversight is currently self-regulatory, many companies turn to AI as a "cheap and easy solution to all their problems," Davisson said. "There may be lots of situations where AI is not appropriate and shouldn't be used at all."

The most effective way for consumers to be protected against harmful AI-based determinations about them is comprehensive regulation, Davisson said. Without meaningful regulation, companies do not feel the pressure to ensure that the tools they're using are free from bias.

"The incentives aren't there; the regulations and enforcement aren't there," he continued. "The problem is not going to fix itself."

"It's something that especially the Federal Trade Commission and state attorneys general and other state agencies need to focus on and to address soon because the problem is just getting bigger every day," Davisson continued.

What more AI regulation looks like

Meaningful oversight of AI can only be achieved with AI-specific regulation, according to Davisson.

"Congress is uniquely situated to impose these sorts of limits on AI use, and it should do so immediately through legislation," Davisson said.

In May, Sen. Edward Markey (D-Mass.) and U.S. Rep. Doris Matsui (D-Calif.) introduced the Algorithmic Justice and Online Platform Transparency Act. The proposed legislation seeks to prohibit harmful algorithmic processes on popular websites that determine what consumers see online, but does not target algorithms in consumer credit, housing or employment.

In addition to legislation, Davisson said Congress must give "the agency charged with regulating AI significant rulemaking power and ample funding to carry out its work."

He said this should be a new independent data protection agency but could also be the FTC with expanded authority. Davisson added that there also should be a private right of action so consumers can go to court to protect their rights.

However, while there are still no specific AI laws or regulations, Davisson said the FTC already has the power under its unfair and deceptive trade practice authority to limit the unfair use of AI. He noted that the budget reconciliation package now moving through Congress includes an additional $1 billion in funding for the FTC over 10 years.

"That's a major boost to the FTC's work, which will certainly focus in part on AI," Davisson said.

But regulation needs to be fair to both regulators and companies using AI, according to Kompella.

While companies should not "push the harms [of AI] under the carpet," Kompella said widespread AI regulation could have unintended consequences, so regulators must be fair.

He said other regulatory authorities may want to follow the EU's model of AI regulation. The EU's proposal categorizes AI applications as high, medium and low risk. Low-risk AI involves the use of AI in applications such as video games or spam filters; medium-risk includes AI systems such as chatbots; and high-risk involves AI used in infrastructure, employment, and private and public services.

"That's a good start," Kompella said. "Low-risk AI can perhaps do with self-regulation and voluntary declaration of compliance with industry standards. High-risk AI can be more stringently regulated with internal and external AI audits and certifications," he said.

One approach would be for a technology vendor planning to release a new high-risk AI product to submit a pre- or post-release impact assessment.

"Regulations should not be too onerous, but they should lead to the desired behaviors by those regulated," said Kompella.

The role of AI vendors

While government agencies play an important role, the problems of bias and of developers not being transparent about data could, in many cases, be addressed by AI vendors themselves, some say.

Private AI, a vendor funded by M12, Microsoft's venture arm, created a tool that redacts irrelevant personal information from the data that enterprises use.
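
Private AI's product is proprietary, but the general idea of stripping personal identifiers out of data before it reaches a model can be sketched generically. The example below is a simplified, regex-based illustration with hypothetical patterns and placeholder tokens; production redaction tools rely on far more robust techniques, such as trained named-entity recognition models.

```python
# A simplified, generic sketch of PII redaction before data reaches a model.
# The regex patterns and placeholder tokens are illustrative assumptions;
# production tools rely on far more robust methods than simple regexes.

import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

record = "Applicant Jane Doe, jane.doe@example.com, 555-867-5309, SSN 123-45-6789."
print(redact(record))
# -> "Applicant Jane Doe, [EMAIL], [PHONE], SSN [SSN]."
```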

According to CEO Patricia Thaine, the biggest problem machine learning engineers face is understanding the data and how it affects their models.

"It's a different kind of training when you have to figure out what pieces of information are affecting modeling, and then shifting gears to figure out how to make unbiased decisions," she said. "That's something that a lot of engineers are not trained on."

As AI becomes more regulated, machine learning engineers will need more education about how data collection works and how that methodology affects models, Thaine said. The industry is also shifting toward building more tools for detecting bias.

"It is very difficult to determine what pieces of information combined together with your training on millions of individuals are affecting the output of a model," Thaine continued. "But it is incredibly significant and has to happen in order for these algorithms not to control our lives without us not knowing what is happening."

Upcoming AI regulation

Schildkraut said that while AI regulation seems probable in the future, at this time the FTC can only try to make sure that companies are using AI algorithms fairly with measures such as the FCRA.

"I think the Federal Trade Commission is looking to fill the gap where there's sort of a vacuum of supervision," he said.

Next Steps

Efforts to craft AI regulations will continue in 2022

Tech news this week: AI, decentralized apps and ransomware
