
At MLconf, speakers stress need for responsible AI framework

Amid rising concerns about responsible AI, including recent calls for a pause in development, experts at MLconf urged caution and accountability when deploying AI systems.

NEW YORK -- Experts at MLconf NYC yesterday warned against overly rapid adoption of large and complex AI models, citing concerns about bias and inadequate regulation and oversight.

The conference took place shortly after the nonprofit Future of Life Institute released an open letter earlier this week, calling for a six-month pause in developing AI systems more powerful than GPT-4. Currently signed by nearly 2,000 AI researchers and tech leaders, including Elon Musk and Steve Wozniak, the letter has drawn mixed reactions from the AI community and was a topic of conversation throughout the conference.

Several presenters at MLconf voiced apprehension about the current pace of AI development, particularly in light of responsible AI team layoffs at Microsoft and Twitter. But overall, speakers highlighted current AI risks -- such as algorithmic discrimination and autogenerated misinformation -- rather than the more speculative fears about artificial general intelligence that have characterized recent conversations around large language models (LLMs) such as ChatGPT.

Responding to the AI hype

In particular, speakers expressed concern that the current hype surrounding LLMs has led to irresponsible and rushed AI development.

"There's not a healthy respect for the fact that AI itself has inherent risk associated with it, and I think part of that I attribute to all the hype around these new technologies," said Scott Zoldi, chief analytics officer at FICO. "It's the boards and the investors that hear that hype, and there's this fear of missing out -- FOMO -- with AI."

Many companies have had generative AI models for some time, but have hesitated to deploy them, Zoldi said, in part due to concerns about risk. He mentioned Google as an example: "They've had one for a very long time, but they have not felt comfortable putting that in the wild," he said.

Business pressures might now outweigh that hesitance as public interest in LLMs has exploded in recent months. Google recently debuted a range of generative AI capabilities, including the Bard chatbot as well as several features in the company's Vertex AI platform that were on display at the conference.

In the presentation "What's New with Generative AI on Google Cloud," Google senior developer relations lead Anu Srivastava demonstrated using Google's Generative AI Studio to automatically create a blog post for a hypothetical marketing campaign. Srivastava described instantaneous content generation as a beneficial business use case for generative AI models: "What is the point of building a machine learning model if you can't actually use it?" she said.

However, experts -- including those who've signed and those who've criticized the open letter calling for a pause to AI development -- suggest that companies should take a more cautious approach and avoid pushing systems into public use too quickly.

Srivastava said she'd only learned of the letter's existence that morning and hadn't "had time to think about it." She also declined to comment on how Google is addressing copyright issues in Generative AI Studio.

The need to create standards for responsible AI development

Several presenters at MLconf emphasized the importance of putting responsibility and social impact concerns first when implementing AI models.

To address these issues, speakers called for greater awareness of risk in the model development process as well as the creation of standards for responsible AI. "Many of these organizations take more care in the development of the software than they do in the development of the algorithm," Zoldi said.

In her presentation "ML Models Drive Actions to Accelerate Customer Centric Transformation," Christy Appadurai, senior data scientist at Lumen Technologies, advocated for a "centralized framework for developing machine learning models." In such frameworks, she said, ethical AI validation should be standardized as a part of the MLOps lifecycle.


Zoldi suggested blockchain as a way to ensure auditable AI in his presentation "The Three Keystones of Responsible AI: Explainability, Ethics and Auditability." A blockchain that records each step in building an AI model lets users query previous versions and see what decisions were made during development. The blockchain's immutable record of those actions can help enforce compliance.

Although Zoldi acknowledged that the blockchain approach might not be right for every organization, he emphasized that any responsible AI framework should include similarly immutable records to promote auditability and interpretability.
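Zoldi did not share code, but the idea can be sketched as a minimal hash-chained audit log in which each recorded development decision references the hash of the previous entry, so tampering with history is detectable. The structure and field names below are illustrative assumptions, not FICO's implementation.

```python
# Minimal sketch of a hash-chained audit log for model development decisions,
# illustrating the immutable-record idea. Field names and structure are assumptions.
import hashlib
import json
import time

class ModelAuditLog:
    def __init__(self):
        self.entries = []  # each entry stores the hash of its predecessor

    def record(self, action: str, details: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),
            "action": action,
            "details": details,
            "prev_hash": prev_hash,
        }
        # Hash the canonical JSON of the entry, including the previous hash,
        # so altering any past entry breaks the chain of later hashes.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any tampering with past entries breaks verification."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Example: record two development decisions, then verify the chain.
log = ModelAuditLog()
log.record("feature_selection", {"dropped": ["zip_code"], "reason": "proxy variable"})
log.record("bias_test", {"metric": "demographic_parity", "result": 0.04})
assert log.verify()  # True until any past entry is modified
```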

"Our job is not to impress each other with complicated models," he said in his presentation. "Our job is to build tools that are useful, tools that are safe."

Using AI for social good

Matar Haller, vice president of data and AI at ActiveFence, emphasized AI's potential for positive social impact in her presentation "AI for Good: Detecting Harmful Content at Scale."

In her talk, Haller described how using AI models to detect hate speech and other abusive content is essential to managing the sheer volume of data in today's online environments. In addition, AI detection can reduce the amount of potentially traumatizing material that requires manual review, she said.

"These are human moderators," Haller said. "We don't want to flood them with more content than they need to see."

But AI still has limitations, Haller noted. Human oversight remains necessary throughout model development and retraining. "Even if we do everything right, and we work really hard at it, AI for content moderation is really, really difficult," she said.
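ActiveFence did not show its systems, but the pre-filtering idea can be sketched with a publicly available toxicity classifier from the Hugging Face Hub that scores posts and routes only the highest-risk items to human reviewers. The model choice and threshold below are illustrative assumptions, not ActiveFence's setup.

```python
# Illustrative sketch only -- not ActiveFence's system. Scores posts with a public
# toxicity classifier and routes only high-risk items to human moderators.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "unitary/toxic-bert"  # multi-label toxic-comment classifier on the Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

REVIEW_THRESHOLD = 0.5  # anything scoring above this on any label goes to a human

def toxicity_scores(text: str) -> dict[str, float]:
    """Per-label probabilities; the model is multi-label, so apply a sigmoid per logit."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    probs = torch.sigmoid(logits)
    return {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}

posts = [
    "Thanks for the helpful answer, much appreciated!",
    "You people are worthless and should disappear.",
]

for post in posts:
    scores = toxicity_scores(post)
    worst = max(scores.values())
    if worst > REVIEW_THRESHOLD:
        print(f"FLAG FOR HUMAN REVIEW ({worst:.2f}): {post}")
    else:
        print(f"auto-cleared ({worst:.2f}): {post}")
```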

Haller also pointed out in a separate conversation that, in addition to AI's limitations as a tool, generative AI models such as ChatGPT can be used for malicious purposes. Because LLMs can output high-quality misinformation at scale, they raise new risks in the realm of abusive content and hate speech.

Responsible AI isn't necessarily less performant

MLconf presenters emphasized that more responsible models don't need to come at the cost of performance and accuracy.

"Having a fair system leads to a better system," said Amey Dharwadker, machine learning technical lead at Meta, in his talk "Navigating the Landscape of Bias in Recommender Systems."

One concern that multiple speakers raised was the use of overly large and complex models, which are both computationally expensive and more difficult to interpret. In his talk "Hyperproductive Machine Learning with Transformers and Hugging Face," Julien Simon, chief evangelist at Hugging Face, suggested that smaller models might in fact have advantages over their larger, riskier counterparts.

"Smaller models are usually a better choice, because the business problem you're trying to solve is quite narrow," Simon said. In addition to being easier to explain and audit, smaller models fine-tuned to a specific task require less time and resources to train and often make faster, more accurate predictions, he said.

When deciding which models to implement, Simon cautioned against falling victim to the FOMO Zoldi described in the current AI market. "Don't believe the hype," he said. "Today's 'best' model will be superseded in weeks."
