EU AI regulation guidelines worry vendors
The EU released AI regulation guidelines that would require AI systems in high-risk areas to be transparent and subject to human oversight. Some vendors are worried.
As European Union officials released wide-ranging proposals to regulate artificial intelligence, big U.S. tech vendors scrambled to lobby against what they fear is government overreach that will stifle innovation.
The EU guidelines, published Feb. 19, also came as prominent AI vendors such as Alphabet, the parent company of Google, and IBM have called for more regulation in recent weeks, in apparent attempts to preempt government moves to strengthen oversight of AI and other technologies.
Meanwhile, Alphabet CEO Sundar Pichai, Facebook CEO Mark Zuckerberg and Apple's senior vice president for artificial intelligence, John Giannandrea, traveled to Europe earlier this week in advance of the release of the AI policy guidelines to meet with EU officials.
Facebook issued its own white paper on online content regulation, in effect telling EU regulators how they should regulate Facebook and other online content purveyors. The move was also a sign that Facebook, like Google, appears ready to cooperate with European authorities on regulation.
The EU guidelines do not stipulate how or when AI regulations would be put into effect.
The guidelines, presented in a white paper and supplementary materials, say high-risk AI systems, such as those used in healthcare, law enforcement and transportation, should be transparent, traceable and placed under human oversight. The proposed regulations would apparently build on the data privacy regulations introduced in the EU's GDPR.
The EU regulators also said vendors should be required to provide unbiased data sets. They also called for an EU-wide debate over the use of facial recognition technology, which has become a major point of concern around the world among privacy advocates.
The EU initiative appears to reflect a desire among at least the European public not only for data privacy, but also for more controls on AI to ensure it is used fairly.
"Initiatives and guidelines today are currently anchored around data privacy, permissible use, prevention of bias and transparency in how decisioning is made," said Martin Sokalski, an AI expert and principal at KPMG. "It's clear when these decisions impact an end consumer or have a societal impact, that consumers want more visibility and guidance on how their information is used and whether these decisions are fair."
The EU AI regulation guidelines come a few weeks after the White House issued its own set of guidelines for businesses and government agencies in the U.S. Those guidelines, however, were notably more vague than the detailed 27-page EU document, and largely promoted the need for relaxed AI regulation that would not slow innovation.
"Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys," the EU white paper says.
Impact on vendors
AI vendors, Sokalski said, appear to genuinely want regulation.
One of these companies is DataRobot, an AI and automated machine learning vendor headquartered in Boston. Ted Kwartler, vice president of Trusted AI at DataRobot, said the vendor welcomes calls for regulatory approaches that don't stifle innovation.
The EU's guidelines explicitly note the importance of developing effective legislation that does not suppress innovation, but vendors do not appear convinced that the regulations will find a balance.
"The EU's approach does not take into account the open source nature of AI and ML innovation," Kwartler said. The AI community is global, with vendors and developers needing to collaborate to gain expertise, he added.
There can't be heavy restrictions on AI technology, Kwartler said. "At some point, the data, talent and capital will move away from the EU because dealing with this stifling regulation is not worth the burden," he said.
Investments and innovation
EU funding for AI research and innovation is now about 1.5 billion euros, or about $1.6 billion, a year. According to the EU, that is far behind government funding for AI in North America and Asia. The EU's AI strategy recommends significantly increasing that funding, with a goal of reaching 20 billion euros annually over the next decade.
To do that, the EU needs to increase its partnerships with technology businesses and educational institutions, according to the white paper, to help attract and retain talent. It also needs to work with enterprises to create incentives for accelerating deployment of AI.
"We welcome calls for partnerships with the private sector while also welcoming regulatory approaches that don't stifle innovation and believe there is room for both things," Kwartler said.
"One way the EU could partner with the private sector to accelerate AI advancement is by making data science more accessible to non-traditionally trained data scientists," he continued.