Pennsylvania vs. Character.AI: What the lawsuit signals for the AI legal landscape
The state of Pennsylvania sued Character.AI, alleging that its chatbot violated the Medical Practice Act by posing as a licensed psychiatrist.
Character.AI, a conversational AI platform that lets users create custom personas for entertainment, is at the center of a lawsuit with potentially far-reaching implications for the AI legal landscape. The case also raises significant ethical and privacy concerns for patients trying to manage their health in the era of AI.
Pennsylvania Gov. Josh Shapiro's administration filed the lawsuit, alleging that Character.AI's chatbot impersonated a doctor and violated the Medical Practice Act, a state law that prohibits individuals or entities from presenting themselves as medical professionals without proper licensure.
At the center of the lawsuit is an interaction with one of Character.AI's personas called "Emilie," a psychiatrist bot that said it had attended medical school at Imperial College London and had been practicing for seven years. When asked whether it was licensed in Pennsylvania, it said it "did a stint in Philadelphia for a while" and provided a fake medical license number.
According to the lawsuit, there had been more than 45,500 user interactions with Emilie on Character.AI as of April 17, 2026. Research shows that turning to AI for health advice is increasingly common: a 2026 KFF poll found that approximately one-third of adults had used AI chatbots in the past year for physical or mental health information.
"We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional," Shapiro said in a press release.
"My Administration is taking action to protect Pennsylvanians, enforce the law, and make sure new technology is used safely. Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly."
The Pennsylvania Department of State described the suit as the first enforcement action of its kind announced by a U.S. governor and the first action stemming from the department's initiative to protect consumers from unlicensed practice by a chatbot.
Jordan Cohen, a partner at Akerman who specializes in healthcare, highlighted the unique circumstances of this lawsuit. Cohen is not directly involved in the Character.AI case but noted that the lawsuit could be a sign of what's to come in the AI legal landscape.
"If Pennsylvania is successful," Cohen said, "I don’t see why other state regulators or AGs wouldn't launch similar claims."
Character.AI lawsuit explained
The Character.AI platform touts its ability to empower users to "supercharge their imaginations," enabling them to create characters, narratives and settings, merging "storytelling, gaming, social connection, and creative expression," its website states.
According to the filing, a professional conduct investigator for the Pennsylvania Department of State's Bureau of Enforcement and Investigation created a free Character.AI account using his Commonwealth email address. The investigator searched "psychiatry," which yielded numerous characters, including Emilie.
The description for Emilie's character stated plainly: "Doctor of psychiatry. You are her patient."
The investigator described symptoms of depression, and Emilie asked if he would like to book an assessment. The character then stated that it was licensed by the General Medical Council in the UK and provided the investigator with an invalid Pennsylvania medical license number, the lawsuit alleged.
Pennsylvania's Medical Practice Act states that "it shall be unlawful for any person to practice, or attempt or offer to practice, medicine and surgery, or other areas of practice requiring a license, certificate or registration from the board, as such practice is defined in this act, without having at the time of so doing a valid, unexpired, unrevoked and unsuspended license, certificate or registration issued under this act."
The lawsuit asserts that Character.AI engaged in the unauthorized practice of medicine through its character, Emilie.
This is not the first time that Character.AI has faced legal troubles. In January 2026, the company settled multiple lawsuits over allegations that its chatbots contributed to youth suicides and mental health crises.
In Pennsylvania, the latest lawsuit is part of a statewide initiative to tackle predatory AI practices. So far in 2026, the state has launched an AI literacy toolkit and created an AI enforcement task force.
"The feds are taking a hands-off, AI maximalist approach, which is leading the states to fill that gap," Cohen noted. This sets the stage for a patchwork of AI regulations and enforcement actions at the state level.
Broader legal implications
Cohen pointed out that the lawsuit is unique because it was not filed under any AI-specific law. Rather, it leverages the state's longstanding Medical Practice Act.
"A lot of states are passing AI-specific laws, including laws on chatbots," he said. "But if this is successful, then states could also look to utilize existing laws that have been in the books for decades regulating the practice of medicine and other licensed professions. I would expect this to be a harbinger of what's to come."
Cohen predicted that Character.AI's defense would cite Section 230 of the 1996 Communications Decency Act, a statute that protects online platforms from liability for user-posted content, positioning these companies as neutral hosts rather than content publishers. Social media companies have used Section 230 in the past to block lawsuits, but AI chatbots are a murkier area.
Courts might be hesitant to extend broad protection under Section 230 to chatbot output, Cohen reasoned.
"I mean, you can do fine-tuning of these chatbots and you can have certain system prompts that can guide what the permissible output is," he said. "So, it's not just a case of someone posting an article on a server. They're more involved in customizing these LLMs and setting the guardrails here."
Cohen noted that while site disclaimers will likely play a role in the legal proceedings, it remains to be seen whether warning consumers that what they are reading is fictitious will be enough to shield a company from liability.
AI developers watching this space should "be careful about relying on disclaimers as a safe harbor of sorts for the output of LLMs," Cohen advised. "If you're letting a chatbot loose to the public, you may want to pressure test that specifically with some type of red teaming."
Healthcare provider organizations using internal chatbots should maintain a clear inventory of their data flows and ensure they have the proper business associate agreements in place under HIPAA.
"And also understand that as a healthcare provider, you may have a more difficult time arguing that your chatbot was not giving medical advice," Cohen said. You should take care to monitor the output and consider putting in guardrails there to keep it constrained."
As consumers increasingly seek out advice from AI, providers need to be aware of the changing ways in which patients are keeping tabs on their health, as well as the growing potential for AI misuse. These changes, alongside the rapidly evolving AI regulatory space, create a complicated legal landscape for AI chatbots.
"You've got the medical boards, which are their own kingdom, and then you have AGs, you've got those on the health policy side, departments of health," Cohen said. "So, there certainly could be future issues like intrastate issues where one regulatory body is taking one position and another one is saying, 'No, that violates our longstanding licensure laws.' It's an interesting thing to watch."
Jill Hughes has covered health tech news since 2021.