BOSTON -- As governments around the world, including in the United States, find their footing on AI technology and ways to regulate it, privacy and AI governance continue to be significant concerns among AI developers, researchers and vendors.
President Joe Biden on Oct. 30 signed an executive order that will establish "new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, [and stand] up for consumers and workers" while promoting innovation and competition.
The order comes as other governments explore ways to regulate the technology amid the new wave of generative AI.
Meanwhile, AI experts, including developers, thought leaders and regulators, gathered in different parts of the world to discuss AI safety, privacy and governance.
The United Kingdom's first AI summit was held on Nov. 1-2, while the International Association of Privacy Professionals (IAPP) started its two-day AI Governance Global (IAGG) 2023 conference on Nov. 2.
The lack of federal regulation
While the executive order has received good reviews in the first few days since Biden signed it, it doesn't address one significant challenge the U.S. is experiencing with AI regulation, some IAGG attendees said.
"It's a good first step," said Ian Van Heyst, vice president of IT security and data privacy at real estate company FirstService Corp, in an interview at the conference on Nov. 2. "It will form how people think about AI regulation, so it will move the needle."
However, even with the executive order, there's a need for more federal guidelines, Van Heyst said.
"This patchwork of states having to comply with the different regulations -- the net result will be that companies will choose to comply with the high watermark," he said. This might mean complying with the California Consumer Privacy Act or another state's regulations.
The state-by-state approach to regulating AI privacy issues in particular in this country is "a nightmare," said Jacqueline Baillet, global product counsel at The Knot Worldwide -- a media and technology company that provides content and tools for wedding planning -- in an interview.
In her daily work, Baillet said she constantly considers privacy to protect consumers' information.
The Knot Worldwide is also becoming familiar with some new generative AI technologies.
Currently, the company is testing an LLM chat feature with select users. With that, it's working to make sure the chat capability meets best practices for transparency requirements.
The global wedding company was also an early user of Google's Duet AI generative AI collaboration service, using it to communicate and collaborate across different teams and locations.
Baillet said she sees the executive order as a good step but one that could quickly be done away with by a different administration.
"From my perspective, I don't know if it's very business savvy to rely on that," she said.
Federal legislation "would lend itself to be more in line with what's going on in the European Union since their approach is on a federal level," she said, referring to the EU's new AI Act and the General Data Protection Regulation.
The next best thing
While federal regulation would be preferable, the current turbulent state of the U.S. government and leadership -- with the House of Representatives going three weeks without a speaker, for example -- shows that state-by-state regulations on AI are the next best thing, Connecticut state senator James Maroney said during a Q&A session at the conference.
In 2022, Maroney helped in an effort that resulted in the passage of a data privacy bill in Connecticut.
"I don't see how something as complicated as AI regulation could be passed legislatively," Maroney said. "In order to really see the benefits of AI going forward, we need to be able to prove to people that it's safe and secure and [needs] to start earlier on so [systems] don't get adopted [incorrectly] by people so that we can fully see the benefits."
Having states work together to put out similar regulations is the next best thing to a federal law, he added.
The executive order itself addressed issues that can't be dealt with at the state level, such as biometrics, defense contracting and nuclear facilities, Maroney said in an interview.
The order's emphasis on mitigating privacy risks potentially exacerbated by AI could also be the impetus states need to move on regulation, he said. "You would hope it would spur more states to act if not the federal government to actually pass legislation."
Biden's executive order still provides guidance that many can follow, said Ashley Casovan, managing director at the IAPP AI Governance Center.
The document provides guidance on enforcement of standards for the use of AI systems, third-party auditing and accountability of the systems, she said.
"It's really important that there are good guardrails set so that we're all speaking the same language [as well as] requiring the same types of mitigation measures for ensuring the safety and trustworthiness of these systems," Casovan added.
Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.