
Microsoft unveils responsible AI guidelines and dashboard

The tech giant unveiled 10 guidelines for organizations to consider when building AI models, and a dashboard that data scientists can use to ensure their models are ethical.

Microsoft said it wants to make it easier for organizations to use and build AI technology responsibly.

During its "Put Responsible AI into Practice" digital event on Dec. 7, the tech giant, with Boston Consulting Group, released 10 guidelines that product leaders can use to implement AI responsibly, without bias and with visibility into how AI and machine learning algorithms reach their decisions.

Guidelines and a dashboard

Enterprises can use the guidelines before, during and after the process of building AI models.

Microsoft outlines the guidelines in a three-step framework that starts with using transparent processes to assess and prepare the model and weigh potential risks and benefits.

The next step is design, build and document. At this stage, developers build the AI model and examine the impacts of the product. The final step is to validate and support the model, and test it to ensure it works as intended, without ethical bias.

Microsoft also introduced a new Responsible AI dashboard for data scientists and developers.

The dashboard runs on the Azure Machine Learning service and includes responsible AI tools such as interpretability, error analysis, and counterfactual and causal inferencing.
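The article doesn't include code, but since the dashboard is backed by Microsoft's open source responsibleai and raiwidgets Python packages, a minimal sketch of how a data scientist might wire a trained model into it looks roughly like the following. The loan dataset, file name, column names and model choice here are illustrative assumptions, not part of Microsoft's announcement.

    # Hypothetical sketch: loading a trained model into the Responsible AI
    # dashboard via the open source responsibleai / raiwidgets packages.
    # The dataset and column names below are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    # Assume a tabular loan-approval dataset with a binary "approved" label.
    data = pd.read_csv("loans.csv")  # hypothetical file
    train, test = train_test_split(data, test_size=0.2, random_state=0)

    model = RandomForestClassifier().fit(
        train.drop(columns=["approved"]), train["approved"])

    # Collect insights for the four tool families the dashboard bundles:
    # interpretability, error analysis, counterfactuals and causal inference.
    insights = RAIInsights(model, train, test,
                           target_column="approved",
                           task_type="classification")
    insights.explainer.add()        # model interpretability
    insights.error_analysis.add()   # error analysis tree and heatmap
    insights.counterfactual.add(    # "what-if" counterfactual examples
        total_CFs=10, desired_class="opposite")
    insights.causal.add(            # causal effects of a chosen feature
        treatment_features=["income"])
    insights.compute()

    ResponsibleAIDashboard(insights)  # launch the dashboard UI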

[Screenshot of the keynote: during the virtual customer event, Microsoft and Boston Consulting Group released 10 guidelines product leaders can use to build ethical AI models.]

A move toward responsible AI

With the guidelines and dashboard, Microsoft appears to be going beyond just articulating responsible AI, said Svetlana Sicular, an analyst at Gartner. The vendor is also giving enterprises the tools to implement it, she said.

"This is the first set of guidance supported with the dashboard -- a package -- that says this is how you go beyond principles," Sicular said.

Meanwhile, many organizations, including Gartner, have their own set of principles for responsible and ethical AI, Sicular noted.


However, the challenge for most organizations that use AI is how to go beyond those principles and provide software tools that put responsible AI into practice.

"It's in everybody's interest to have an honest and clean name for AI," Sicular said. "AI has a big problem. This is a problem of trust. This is a problem of doing the right thing."

The problems that fuel mistrust of AI often are not intentional, Sicular continued.

Data scientists who assemble algorithms and data often can't anticipate every question about how the resulting models will be applied.

In many cases, it's not their specific responsibility. Rather, it's the job of product leaders to think about all applications of AI and machine learning models.

Microsoft, along with Google, has stood out in the early development of rules for how to use AI responsibly, Sicular said.

Google, in recent years, has faced criticism for bias in some of its AI technology, as has Facebook.

"Microsoft has a tradition of dealing with the enterprise," Sicular added. "I feel like this guideline is another step for them to support the enterprise."

Sicular said the next step for vendors such as Microsoft is to make responsible AI ubiquitous and applicable in different industries.

The vendor's AI guidelines are now available to download free on the Microsoft Azure website. The Responsible AI dashboard is also generally available in open source.
