
Hundreds support former Google AI ethics researcher Timnit Gebru

A prominent AI ethics researcher said she was unfairly fired by Google after sending an email to colleagues critical of the way Google treats women and people of color.

Hundreds of Google employees, along with academics, nonprofit leaders and tech industry executives, have signed a letter supporting Timnit Gebru, a prominent AI ethics researcher who said she was fired by Google. 

In a series of tweets on Dec. 2, Gebru alleged that the tech giant fired her for an email she sent to colleagues criticizing the way Google treats women and people of color, written after the company demanded she remove her name from a research paper she had co-authored.

Outpouring of support

Many rushed to Gebru's defense after her Twitter posts, and in an open letter published late Thursday on Medium, a growing number of signatories said they stand in solidarity with her.

"Until December 2, 2020, Dr. Gebru was one of very few Black women Research Scientists at the company, which boasts a dismal 1.6% Black employees overall," the letter says. 

Yet, the letter continues, "Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing."

Gebru's supporters included a list of demands for Google Research leadership, including an explanation of why Gebru's co-authored paper was rejected and a commitment to define clear guidelines on how AI research will be reviewed in the future. 

As of publication, 591 Google employees and 823 academic, industry, and civil society supporters had signed the letter. 

According to reporting by NPR, Gebru's rejected paper discussed concerns that an AI language tool used by Google contains bias and could help create and spread hate speech.

Potential for good, and misuse

Analyst Alan Pelz-Sharpe said that while he doesn't personally know the facts behind Gebru's departure from Google, AI tools that can automate speech and text have great potential but can also be used to spread disinformation.

GPT-3, OpenAI's text generation system and likely the most powerful of its kind, can generate highly convincing, human-like conversational text, said Pelz-Sharpe, founder and principal analyst at Deep Analysis.

"It's impressive technology, but though it could be used to improve, for example, automated customer service, gaming or any number of benign human to computer interactions, it could also be used to misinform, defraud and mislead," he said.

All AI has bias, Pelz-Sharpe noted.

"It's a given, and a very well-known though often unaddressed challenge in AI. Even so, few want to publicly discuss it as in many cases they have no idea how they could, in practical terms, address those biases," he said.

Google's response

Google has denied the allegations and has said Gebru gave the company a list of conditions and threatened to resign if it did not meet them. 

In a Dec. 3 email to staff, published by the news site Platformer, Google's head of AI, Jeff Dean, said there had been a lot of speculation and misunderstanding about the incident on social media.

Gebru, he asserted, has had dozens of research papers approved by Google. The paper in question, which Gebru co-authored with four other Google employees and some external collaborators, was submitted before Google could review it, Dean claimed. 

After reviewing it, "the authors were informed that it didn't meet our bar for publication and were given feedback about why," he said. Dean claimed that the paper ignored relevant research and didn't include recent progress made to mitigate bias in language models. 

Google declined to comment further.
