
AI regulation stirs as unrestricted AI booms in China

Governments need to start regulating AI as the technology advances, experts say in part one of a three-part series on AI ethics issues around regulation, control and bias.

In a world of fast-evolving technology, ever more powerful AI systems and, so far, minimal AI regulation, more AI isn't necessarily better.

Overinnovation can be a distinct problem, and it may be infecting the fast-growing and competitive AI industry, said Shawn Rogers, senior director of global enablement, digital content and analytic strategy at integration and analytics vendor Tibco Software Inc.

Businesses in just about every sector are deploying AI and intelligent automation in their workflows, and a growing number of vendors sell AI products and services. As AI users and AI vendors seek to beat the competition by offering the latest and the best, they risk overinnovating -- knowingly or inadvertently sacrificing safe and ethical practices to better meet the perceived needs of their clients.

An AI user, for example, may replace human workers with automated ones or use a recommendation system that provides more personalized choices to customers but collects and uses more of their information. Conversely, an AI vendor might train a machine learning model with biased data or create AI-assisted healthcare software that doesn't comply with the HIPAA privacy standards established under the Health Insurance Portability and Accountability Act.

President Donald Trump has now ordered U.S. government agencies to develop new regulatory approaches on AI. More oversight and defined AI regulation, while possibly slowing research and technology delivery, could help prevent such harmful overinnovation, some experts say.

The cost of progress

Unbridled research can yield significant benefits. In the AI field, it can breed smarter, faster and more advanced AI systems, like the ones being developed in China, which has few rules and regulations about the government collecting and using its people's data.

For China, the near-total absence of AI regulation has been purposeful. It has helped the Asian superpower quickly advance its machine learning, deep learning and AI technologies.

China has also subsidized and backed AI development in the private and public sectors, while, in the U.S., the federal government had been notably quiet about AI, neither signaling a willingness to regulate nor taking part in and encouraging AI work.

That is, until President Trump on Feb. 11 signed an executive order directing federal agencies to spend more money on AI research, promotion, training and regulation. What, if any, action various agencies will take on AI is yet to be seen, especially since the order didn't specify any funding for the efforts.

Under the so-called American AI Initiative, the administration is directing agencies not only to make AI an R&D priority, but also to expand access to federal data and AI models for researchers and help train U.S. workers on AI skills. In addition, the executive order calls on agencies to create a set of regulatory standards on AI development and use by businesses; it also taps the National Institute of Standards and Technology to lead the development of AI technical standards.

Regulating AI is becoming more important as use of the technology increases, regulation proponents say.

While China's rapid development of AI applications has raised concerns in the U.S. and other countries, some experts don't expect China to sustain the allegedly predatory data practices that have fueled its AI gains, given the moves toward stronger data privacy protections in Europe and the U.S.

"As use cases come out of the Chinese market, they're going to have to align somewhat, especially around personal data, in order to do some business in the world," Rogers said.

Some countries have taken major steps toward regulating AI; the EU's General Data Protection Regulation, or GDPR, is an example. Still, most countries have a long way to go. Scientists around the world have called on their governments to ramp up AI regulation, but changes have been slow in coming.

At a national level, the U.S., in particular, has lagged in forming rules about data use, privacy and AI development, even as leading AI organizations and researchers based in the country have urged the federal government to establish basic guidelines.

The Google view

Google, in a 30-page white paper it published in January, noted that, while self-regulation in academia and private business has "been largely successful at curbing inopportune AI use," national governments and civic groups need to take action.

The document, "Perspectives on Issues in AI Governance," calls for rules on "explainability standards, approaches to appraising fairness, safety considerations, requirements for human-AI collaboration and general liability frameworks." The paper notes that clear policies at the international level could discourage harmful AI practices and use cases.

While the white paper signals the tech giant's willingness to accept more AI regulation, Google makes clear that, from its perspective, strict rules -- say, ones that could restrict AI developers' profits or innovation -- would be excessive.

"Standards that are more difficult or costly to comply with could deter development of applications for which the financial returns are less certain," Google said. "Requiring the most advanced possible explanation in all cases, irrespective of the actual need, would impose harmful costs on society by discouraging beneficial innovation."

Regulating AI, piece by piece

Google appears to be in agreement with other AI experts that government or industry -- or whatever authority is eventually charged with regulating AI -- needs to enact rules and regulations to tackle each of the complex and constantly changing components of AI.

"It's an evolving still," Vic Katyal, global risk advisory analytics leader and data security leader for cyber-risk services at Deloitte & Touche LLP, said of AI.


To put it simply, Katyal said, AI is composed of data and the ability to gather information; machine intelligence and automation; data intelligence, or the ability to be curious and make decisions like humans; and pieces still to be discovered as the technology advances.

Regulators should address each component and required controls separately, he continued, and then re-evaluate rules as the technology evolves. "There's no one silver bullet in managing AI risk," he said.

While AI technology is still a relatively new field for R&D, early AI regulation could, according to some experts, prevent those using the technology from unleashing potentially harmful repercussions. "We need regulations that are much clearer than … today," said Luca De Ambroggi, senior research and analyst director at global information vendor and service provider IHS Markit.

The loss of jobs, the loss of humanity

Many machine learning and deep learning algorithms are extremely powerful already, De Ambroggi said, adding that this is only "the tip of the iceberg in what machine learning can do."

Meanwhile, the threat -- and, in some cases, the reality -- of employers replacing human workers with AI-controlled machines is omnipresent, even as industry leaders extol the virtues of AI and robotics in the workplace, including mitigation of unsafe working conditions and greater efficiencies and cost savings. In any case, training and retraining displaced workers will require much effort.

Also, global militaries are starting to imbue weaponry with AI technology, making destructive forces like missiles and drones more accurate and easier to deploy than ever before. Some tech industry players, including Google, have criticized the "weaponization" of AI.

Then, there's the fear -- far-fetched or not -- reflected in mainstream media and science fiction alike of AI-powered machines or robots becoming smarter, stronger and more dangerous than humankind, leading to the eventual and complete destruction of humanity.

"If you're looking beyond 50 years or so, I believe robots will be able to go beyond narrow AI to general AI," De Ambroggi said.

As opposed to narrow AI, a system that intelligently performs a dedicated task, general AI is a system that can intelligently perform essentially any task a human could do. Many current systems have achieved narrow AI. General AI, at least for now, is still the stuff of fiction.

Next Steps

American AI Initiative includes AI governance
