Federal report focuses on AI diversity and ethics

A national group formed to advance AI research and development in the U.S. proposes ways to increase diversity among the students, educators and researchers studying AI.

The national task force charged with studying how to advance and democratize access to AI research in the U.S. recommends diversifying the talent pool of AI researchers, including students and Ph.D. holders.

The National AI Research Resource (NAIRR) Task Force released its final report on Jan. 24. The National Science Foundation and the White House Office of Science and Technology Policy formed the task force as part of the National AI Initiative Act of 2020. It includes members from the government, academia and the private sector.

The task force was formed to determine whether the U.S. needs a shared research infrastructure that supports AI students and researchers through computational resources, data and educational tools.

The report comes as the U.S. aims to regain its standing as the leader in AI. In recent years, countries such as China have emerged as strong competitors in the AI R&D race.

To change this competitive landscape, the task force report outlines how the U.S. could establish and operate a NAIRR with a proposed budget of $2.6 billion over six years. The NAIRR would be used primarily by students, AI researchers and educators who want to incorporate AI tools into their teaching. Part of the goal of the NAIRR would be to increase the diversity of talent in AI and advance trustworthy AI.

The diversity landscape

Resources that fuel AI research and development are available to researchers today. But despite an increase in computer science doctorate recipients specializing in AI and in undergraduate students earning computer science degrees, the field lacks diversity, the report says.

Of the 442 AI doctorate recipients in 2020, 51% were white, 30% were Asian, 7% were Hispanic and 2% were Black, according to the report.

Meanwhile, only about 20% of AI Ph.D. and computer science Ph.D. graduates in North America in 2020 were female.

To address this disparity, the task force proposed creating an operating entity to oversee the NAIRR and to embed diversity and equity within the organization.

Image from the report: The task force proposes an operating entity that will focus on diversity as part of its operating standards.

Making diversity a priority

One way the task force proposes to improve diversity is by reducing barriers, such as financial hardship, to participating in AI research and development. The task force suggests exposing students to AI early and giving those already learning about AI access to more AI software tools.

"It's no secret that AI, and more broadly the STEM field, has a diversity problem," said Kashyap Kompella, an analyst at RPA2AI Research. "The NAIRR mission of making the field of AI more accessible, representative and responsible is a worthy goal and helps the U.S. cement its pole position in AI."

Many of the problems that arise with AI can be solved if a diverse group of people works on the models and systems, according to Gartner analyst Svetlana Sicular.


"The reason for diversity is very specific to AI because a majority of AI is biased," she said, noting that bias can not only be against a specific protected group but also an age bracket or a personality group. "It's not only about bias but also about a variety of perspectives."

The task force also wants to increase the access that people of different backgrounds have to AI tools and resources.

Specifically, the group recommends that researchers and AI students have access to computing resources (including at least one large-scale machine learning supercomputer capable of training one trillion-parameter models), government and non-government datasets, support, and training.

"Offering resources is good for R&D innovation and solving societal problems," Sicular said. "However, in some cases, resources might be counterproductive.

"Instead of learning and innovating, students might run unnecessary cycles, hoping that a merely brute force solves a problem," she continued. "Formulating where AI can make a difference cannot be done by specialists in machine learning."

Solving societal problems

Having academia and industry leaders work together could lead to the discovery of problems too massive for industry leaders to solve alone, such as predicting earthquakes, Sicular said.

For example, the creator of WhatsApp came up with the idea because, as a teenager, he could not communicate regularly with his family due to the price of calls, she noted.

Focusing on diversity at the student level is also helpful for the workforce, said Natasha Allen, partner and co-chair for AI at Foley & Lardner, an international law firm.

"For many of my clients, in terms of finding the talent that can understand, develop [and] interpret the algorithms, it's a very small population," she said. "So the more people you are educating, getting up to speed in terms of what is required to be an engineer in the AI sector -- it just broadens the capability of, again, diversifying who is part of this ecosystem."

The task force also said the NAIRR could promote responsible AI through an Ethics Advisory Board that would address issues of ethics, fairness and accessibility for people of varied gender, ethnic and financial backgrounds.

The success of the NAIRR's responsible AI goal will come down to who's on the Ethics Advisory Board, Allen said.

"You have to make sure that those boards have people who have a variety of experiences," she said. "It has to be a living and breathing document. It can't just be static."
