
China's AI regulations face technical challenge

China's AI regulations ask for things that may not be technically feasible, said Russell Wald, policy director at Stanford University's Institute for Human-Centered AI.

China is one of the first countries to regulate AI algorithms -- a regulation that may prove technically difficult for businesses to adhere to.

China's AI algorithm regulations took effect March 1, requiring businesses to provide explainable AI algorithms and be transparent about their purpose, such as for recommending products or services. The new regulations also prohibit businesses reliant on AI algorithms from offering different prices to different people based on personal data collected.

Technical feasibility will be one of the most challenging aspects of the new regulations, said Russell Wald, director of policy for Stanford University's Institute for Human-Centered Artificial Intelligence (HAI). Explainable AI has proven difficult for businesses to provide.

In this Q&A, Wald talks about the AI regulations' impact on businesses, as well as how they're unlikely to affect the Chinese government's own use of AI when it comes to surveilling its citizens.

What do you think about China taking this step?

Wald: What I find most interesting is China has done this first. We can see and watch in real time what happens to some of these businesses now being regulated for the first time. Questions remain about the technical feasibility of this. The question is, are we seeing regulation in theory or regulation in practice?

One key part is a continuing need for human-centered AI. 


How do you define human-centered AI? 

Wald: AI that is inspired by the human brain and comes from what you've seen in human development. AI that doesn't replace humans but augments human capability. And the third part of that is AI that doesn't harm but rather benefits humanity … technology fostering instead of eroding democratic aspects. That's the key part. It's a recognition of the rights of everyone to be able to go in and look at these technologies and have fair access to them and the rights to challenge their governments when they're in use. 

How will requiring AI algorithmic accountability and transparency impact businesses?

Wald: There is the technical issue here of whether that's feasible. That's one layer that has a big question mark hanging over it. Then there's the aspect of following the regulation where you can and being in compliance with it. To that end, whether or not those companies will be able to do it, I think they'll have to. I don't think they have a choice.

The larger question is not whether Chinese businesses are able to comply with this, but whether other international companies will now be affected by it. If you're selling into the Chinese market, you are going to have to work on that. That's where a unique tension is going to come through. What if China says, 'It is technically feasible'? Then there will be other countries that say, 'No, it's actually not; we've had these problems.'

Will these regulations apply to the Chinese government's use of AI? 


Wald: The government will not cede, from my view at least, its surveillance of its citizens. Because of that, I think it'll always have access to that technology. It's too tempting. It even remains tempting for local law enforcement in the U.S.

But here, at least in the United States, we're having a debate about an AI bill of rights. We're debating about facial recognition use by police, by the federal government at the border. There is this open debate. That debate is not happening in this case of the [Chinese] government's use of it. 

Do you think there needs to be more debate before moving forward with AI regulations? 

Wald: The EU or U.S. system of open debate around this is ultimately moving us in the right direction. Not fast enough, but it's still a better pathway forward. What I do have concerns about in the U.S. is that we are woefully behind.

You're starting to see an increase in legislation. You're starting to see a Federal Trade Commission that's trying to lean a little more forward on some of this. But I think the best way to have these dialogues and national debate is through legislation where all the interest groups, all the advocacy organizations, everyone comes to the table and hashes it out. 

Are there aspects of the Chinese legislation you'll be paying attention to, like how they handle consumer complaints about potential harms caused by AI algorithms? 

Wald: I mentioned that technical feasibility part; that's really important.

I'm more curious about what the petition process will be. I'd like to watch that space more than anything. Is it significant, is it public, are a lot of people openly complaining? I would be curious to see the registration of these complaints. That would be important for when Europe or the U.S. starts to get into this model. What's the mechanism for them to be able to do this? If we all had a contact at the FTC or the Consumer Financial Protection Bureau, they'd be flooded and overwhelmed and unable to manage that process appropriately. The logistics of managing that have huge implications for ultimate effectiveness.

Editor's note: Responses have been edited for brevity and clarity.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
