Getty Images

Microsoft intros new responsible AI tools in Azure Studio

The new tools address one of the barriers of adoption for enterprises looking to use generative AI technologies. They show Microsoft's thought process on government regulations.

Microsoft introduced new responsible AI tools in Azure AI Studio aimed at reducing many of the hesitations enterprises have around generative AI systems.

The company on Thursday introduced Prompt Shields, groundedness detection, safety system messages, safety evaluations, and risk and safety monitoring.

The new tools come as the tech giant and its rival Google work to address challenges with their generative AI tools in recent months. For example, a Microsoft whistleblower wrote to the Federal Trade Commission detailing safety concerns about Microsoft Copilot Designer.

Meanwhile, Google turned off the image-generating feature of its Gemini large language model after it generated biased images of key historical figures.

Google also expanded its Search Generative Experience by allowing it to answer users' questions. However, there are reports that SGE's responses are spammy and prompt malware and scams.

Addressing enterprise concerns

The new responsible AI tools address concerns of enterprises that are hesitant to use generative AI.

One of the barriers to adoption of generative AI among enterprises today is trust -- a lack of trust in these systems.
Brandon PurcellAnalyst, Forrester Research

"One of the barriers to adoption of generative AI among enterprises today is trust -- a lack of trust in these systems," Forrester Research analyst Brandon Purcell said.

Many enterprises are concerned about hallucinations, where an LLM or AI tool generates incorrect information, and the fact that the tools are susceptible to intellectual property leakage.

"Microsoft is ... releasing products that are hopefully going to help generate trust in the market," Purcell said.

For example, the Prompt Shields feature detects and blocks prompt injection attacks. It is currently available in preview. Prompt injection is when a user with bad intentions tries to make the LLM do something it is not supposed to do, such as provide access to its training data or engage in hate speech or sexualized content.

Another tool, groundedness detection, helps detect hallucinations in model outputs. That tool is coming soon.

"Reducing hallucinations is probably one of the main seemingly unsolvable challenges in adopting generative AI for mission-critical business use cases," Gartner analyst Jason Wong said.

Since most language models tend to hallucinate, a tool that reduces hallucinations will be critical to enterprises.

"Groundedness detection should reduce the hallucination rate and give businesses confidence and trust that the system is working as it should," Purcell said.

Responding to regulations

Microsoft's new responsible AI tools also show how the vendor is responding to some of the new regulations coming out of both the European Union and the U.S., according to Northeastern University AI policy adviser Michael Bennett.

Earlier this month, the EU approved the EU AI Act. The act regulates AI systems that interact with humans in different industries, including education, employment and public systems.

Thus, having these responsible AI safeguards eases the minds of enterprises conducting business in the EU, Bennett said.

"These types of safeguards will probably put those larger companies at greater ease, [but] not erase the concern altogether," he said.

Enterprises will also feel comfortable using the systems in the U.S., where different state districts have introduced individual AI laws, Bennett added.

However, despite vendors' safeguards, enterprises must perform their due diligence, Purcell said.

"No matter how many great features Microsoft or other companies roll out, a company that is using generative AI needs to have a stringent monitoring system in place to be able to detect when the model is not performing and leading to poor business outcomes," he said.

Other responsible AI tools Microsoft introduced include safety system messages, safety evaluations, and risk and safety monitoring.

Safety system messages, coming soon, steer the model's behavior toward safe outputs. Safety evaluations assess applications' vulnerability to jailbreak attacks; the tool is available in preview. Risk and safety monitoring understands what model inputs, outputs and end users trigger content filters. It is also available in preview.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

Dig Deeper on AI technologies

Business Analytics
CIO
Data Management
ERP
Close