
New AI ethics advisory board will deal with challenges

Created by the Institute for Experiential AI at Northeastern University, the board will help organizations without internal audit boards but will face some challenges.

A global AI ethics advisory board is aiming to fill the need for guidance for organizations that lack the means or ability to provide ethics oversight of the AI technology they use.

The board will be housed at the Institute for Experiential AI at Northeastern University in Boston, an academic organization that creates AI products that use machine learning as an extension of human intelligence.

Introduced on July 28, the board consists of 44 experts from multiple disciplines across the AI industry and academia.

Board members will meet twice a year to discuss ethical questions surrounding AI. Some of the members will review applications submitted by organizations that want their products reviewed for ethical guidance.

The new group is similar to an Institutional Review Board, which is federally mandated in fields such as healthcare, biomedical research and clinical trials. Because the government does not require organizations to maintain an AI ethics board, organizations are essentially self-regulated.

A board like this one is a welcome step because currently only large organizations tend to have AI ethics boards, said Kashyap Kompella, an analyst at RPA2AI Research.

"Northeastern's initiative can help democratize access to AI ethics expertise," he said. For a review board like this to be effective, however, it must be able to change "the what and how of AI product design, development and deployment if there is a breach of responsible AI principles."

[Image: logo of the AI ethics advisory board. The Institute for Experiential AI at Northeastern University introduced the board on July 28; it consists of more than 40 experts from different countries and backgrounds. Source: Institute for Experiential AI]

Gaps and challenges

One of the gaps that can exist in an AI ethics board like this is between ethics and compliance, said Nigel Duffy, a machine learning and AI engineer and former top AI executive at global accounting firm EY.

Organizations that build and work with AI products and technologies often face a disconnect between the practitioners who use those products for business purposes and the compliance teams that oversee them.

"One of the challenges today is that those two constituencies aren't necessarily well connected," Duffy said. "They don't have the right skill set to talk to each other."

Bridging the gap between practice and compliance is essential for AI ethics.

Another challenge for an ethics board like this one is that, because it is a third-party group, some companies might be reluctant to approach it. Ethics questions -- such as whether an AI system or algorithm is biased toward or against a specific gender, economic group or race -- can be sensitive, Duffy said.

Many organizations might want to keep those discussions in-house.
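
When such questions do reach a review, bias is typically assessed with quantitative fairness metrics. The sketch below shows one common metric, the disparate impact ratio, applied to hypothetical loan-approval decisions; the data, group labels and the four-fifths threshold are illustrative assumptions, not part of the board's published process.

```python
# Minimal sketch: quantifying one form of algorithmic bias with the
# disparate impact ratio -- the rate of favorable outcomes for one
# group divided by the rate for another. All data here is hypothetical.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of group A's favorable-outcome rate to group B's."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8 approved (37.5%)
group_b = [1, 1, 0, 1, 1, 1, 0, 1]   # 6 of 8 approved (75%)

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50

# The widely cited "four-fifths rule" flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Potential bias flagged for ethics review.")
```

A single metric like this is only a starting point; real audits weigh multiple fairness definitions against the context of the decision being made.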


Diverse group of people

Also, while the AI ethics board has members from different countries and some private companies, it is essentially an academic group.

"A potential very important role they can play is providing a connection to impacted communities," Duffy said, referring to people and groups subject to AI bias or algorithmic discrimination.

Diversity was a key factor in assembling the board, said the board's co-chair, Ricardo Baeza-Yates, director of research at the Institute for Experiential AI's Silicon Valley campus in San Jose, Calif.

"We have gender diversity, we have geographical diversity, we have type of industry diversity," Baeza-Yates said.

While the board will make recommendations to organizations that bring ethical concerns, the organizations may decide not to follow those recommendations, he said.

"The main goal of the board is to have the opportunity to [ask] the right questions and get the right answers. And then they're on their own," Baeza-Yates said. However, if the AI technology is risky, the board may decide to publicize its recommendations.

What needs to be audited?

Not all types of AI technology need to be audited, however, said Alan Pelz-Sharpe, analyst and founder at Deep Analysis.

In transactional use cases, such as reading a number or word from a form or document, the AI typically won't hold any form of bias, so there is no need for an audit. The need for an audit arises when the AI technology makes a decision about a person's finances, health or freedom, Pelz-Sharpe said.
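
That distinction lends itself to a simple triage rule. The sketch below encodes it; the domain categories and field names are hypothetical illustrations, not criteria Pelz-Sharpe or the board has published.

```python
# Minimal sketch of audit triage: flag AI systems for review when they
# make consequential decisions about a person. Categories and field
# names are hypothetical.

CONSEQUENTIAL_DOMAINS = {"finance", "health", "freedom"}

def needs_audit(use_case: dict) -> bool:
    """Return True if the system decides something consequential about a person."""
    return (use_case["decides_about_person"]
            and use_case["domain"] in CONSEQUENTIAL_DOMAINS)

# Transactional OCR: extracts a value from a form, decides nothing.
ocr = {"domain": "document_processing", "decides_about_person": False}
# Credit scoring: decides whether a person gets a loan.
credit = {"domain": "finance", "decides_about_person": True}

print(needs_audit(ocr))     # False -- no audit needed
print(needs_audit(credit))  # True  -- audit warranted
```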

Moreover, it's hard to audit AI systems that have been operating for years, he added.

"The challenge is that few AI systems are designed to operate unethically, rather they can be trained or used to break ethical boundaries, oftentimes unknowingly," Pelz-Sharpe said. "Until it's doing something in operation it's hard to know if it's doing the right or wrong thing."

To avoid this, there should be transparency about how and why an AI system made its decision, he said. However, transparency can be difficult to achieve with complex applications of AI such as neural networks and deep learning, whose internal decision-making processes are hard to interpret -- which is why guidelines are needed.
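
To make that contrast concrete, the sketch below shows the kind of transparency an interpretable model can offer: with a linear scorer, each feature's contribution to a decision can be reported directly. The weights, features and threshold are hypothetical; a deep neural network provides no such direct per-feature readout, which is what makes it hard to audit.

```python
# Minimal sketch of decision transparency with an interpretable linear
# scorer. Weights, features and threshold are hypothetical.

WEIGHTS = {"income": 0.6, "debt": -0.9, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0

def explain_decision(features: dict) -> None:
    """Print the decision and each feature's contribution to the score."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    print(f"Decision: {decision} (score = {score:.2f})")
    # List contributions from most to least influential.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {c:+.2f}")

explain_decision({"income": 1.2, "debt": 1.5, "years_employed": 0.5})
# Decision: deny (score = -0.68)
#   debt: -1.35
#   income: +0.72
#   years_employed: +0.15
```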

"Ethical guidelines for the design and implementation of AI are in my opinion much needed," said Pelz-Sharpe. "Clear directions to guide potential users would be very helpful."

The AI ethics board hopes governments will adopt laws based on some of the work it does.

"This is filling in a place where maybe certain governmental things have fallen short or not quickly enough or outright failed," said Momin Malik, board member and senior data science analyst of AI ethics at the Mayo Clinic's Center for Digital Health.
