New framework aims to drive ethical AI use in mental health

Spring Health has introduced an ethical AI framework for mental health amid an FDA committee review of generative AI digital mental health devices.

Shortly before a Nov. 6 meeting of the FDA's Digital Health Advisory Committee to discuss how generative AI may be useful in psychiatric treatment, behavioral health provider Spring Health introduced a new ethical AI framework to guide the use of AI-enabled mental health tools.

With close to half of U.S. adults, or 48.7%, having used an AI chatbot for psychological support in the past year, according to a study in Practice Innovations, safeguards are needed to ensure both clinical and regulatory oversight.

 AI chatbots lack core elements for effective mental health therapy, according to Mill Brown, M.D., chief medical officer at Spring Health, which offers mental health solutions for employers and health plans.

"AI chatbots can provide basic information and support low-risk tasks such as customer service, data handling and administrative work. However, they lack the core elements that make therapy effective," Brown said.

He explained that most general-purpose LLMs are designed to maximize the time a user spends with the chatbot. They reinforce what the AI tool "thinks" the user wants to hear rather than addressing mental health needs directly.

In October, Spring Health released Validation of Ethical and Responsible AI in Mental Health (VERA-MH), a comprehensive open-source framework addressing the risks and opportunities that AI presents in healthcare, including AI chatbots for mental health. It aims to gauge whether chatbots and LLMs that provide psychological support conform to strict clinical safety standards.

"With more people turning to AI for mental health support, Spring Health knew this was going too far without any guardrails or a common standard for safety and performance," Brown said.

How VERA-MH was developed

Spring Health worked with the clinicians, suicide-prevention specialists, ethicists and AI developers who make up its AI in Mental Health Safety & Ethics Council to create the framework.

"The VERA-MH framework establishes clear evaluation criteria to determine whether an AI system can recognize and respond appropriately to signs of crisis or suicidal ideation, escalate to a human clinician when necessary, and ensure transparency and clinical oversight throughout the user interaction," Brown says.

VERA-MH uses AI agents to evaluate a chatbot's conversation. A user-agent employs clinically informed personas to simulate a human interacting with the mental health AI tool, and a judge-agent scores the resulting exchange between the AI and the simulated patient.

"This ensures the evaluation captures not just isolated responses, but the quality, safety and progression of the full conversational exchange," Brown said.

A team of clinicians and AI experts developed the structure and criteria for the framework and then shared an early draft with the full AI council. The feedback helped strengthen the framework and ensured that it reflected real-world risks and best practices, according to Brown.

"Best practices for AI-enabled psychological support are still evolving, which makes transparent criteria even more important," he said. "VERA-MH evaluates systems using an open rubric that examines whether responses are actively harmful, clinically neutral or aligned with recognized best practices."

Nina Vasan, M.D., founder and director of Brainstorm: the Stanford Lab for Mental Health Innovation and a member of the AI in Mental Health Safety & Ethics Council, said that this is the right time to set standards for AI in mental health.

 "AI is moving faster than regulation, so it's critical that we set clear standards now," Vasan said in a statement. "VERA-MH gives the entire industry a way to move forward responsibly and keep people safe."

FDA studies AI use for mental health

As Spring Health works on establishing guidelines around AI in mental healthcare, the FDA has also been evaluating the role of AI in this area.

During the Nov. 6 meeting, the FDA's Digital Health Advisory Committee found that generative AI could be useful for psychiatric patients but noted that humans are susceptible to influence from AI outputs and that the technology poses risks in areas such as monitoring and reporting suicidal ideation, according to Psychiatric Times.

Regarding AI use in mental health, the committee voiced concerns around ease of use, privacy and content regulation, and questioned the degree of involvement of healthcare providers.

AI-enabled devices could "confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in model accuracy," the FDA reported in its Executive Summary for the Digital Health Advisory Committee Meeting.

"The FDA DHAC discussion reflected a shared understanding of both the opportunity and the risk surrounding generative AI in mental health, which aligns closely with the work we are doing at Spring Health," Brown said. "There was clear emphasis on the need for risk-based oversight, transparency, and continuous lifecycle monitoring, principles that sit at the core of VERA-MH and the AI in Mental Health Safety and Ethics Council."

How VERA-MH could drive AI's continued use in mental healthcare

Brown sees VERA-MH as a "living framework" that can address new risks and opportunities as the use of AI in mental health support matures.

Spring Health will publish its updates and validation results in early 2026 following market feedback. The company also plans to expand VERA-MH to other high-risk areas such as "self-harm, harm to others, harm from others, and support from vulnerable groups," Brown said.

The common standards around safety, ethics and performance that VERA-MH offers will help foster innovation in AI, he added.

"Because VERA-MH is an open and public standard, any company can use this standard and report on their scores for others to view," Brown said. "Until policy on state and federal laws catches up, it's on companies and clinical leaders to set high standards for safety, accountability and explainability."

As AI continues to be used in mental healthcare, Spring Health will invite feedback on VERA-MH from the global community. It has established a 60-day request for comment period to collect input from clinicians, researchers and AI developers on how to make the evaluation better and more robust. The deadline for feedback is Dec. 20.

Brian T. Horowitz started covering health IT news in 2010 and the tech beat overall in 1996.
