New laws governing AI in mental health care are producing mixed results. Though enacted only this year, they already lag behind the technology's rapid growth, and experts and mental health advocates say these state measures fall short of protecting users or holding developers accountable for harmful products.
Karin Andrea Stephan, CEO of the mental health app Earkick, pointed out that millions of people are already using these AI tools and that they aren't going away. States, meanwhile, are taking varied approaches. Illinois and Nevada have outright banned the use of AI to treat mental health. Utah has placed limits on therapy chatbots, requiring protections for user privacy and clear disclosures that the bots are not human. Other states, including Pennsylvania and California, are still drafting regulations.
Some apps have blocked access in states with restrictions, while others are waiting for clearer legal guidance. Notably, many of the laws do not cover general-purpose chatbots such as ChatGPT, which people often turn to for mental health support, and some users have reported alarming interactions that led to serious consequences.
Vaile Wright of the American Psychological Association believes well-designed mental health chatbots could meet a critical need. Given the shortage of mental health providers and the high cost of care, apps built with expert input and ongoing human oversight could be beneficial. "Such tools could help before crises arise," she explained, noting the lack of quality options currently on the market.
In September, the Federal Trade Commission opened inquiries into AI chatbot makers, including the parent companies of Instagram and Google, aiming to understand the technology's negative effects on children and teens. Wright suggested that federal oversight could help enforce standards, such as requiring companies to disclose that their bots are not medical professionals and to track concerning user behavior.
The landscape is complicated further by apps that defy easy legal categorization. Earkick, for example, wrestled with how to describe itself: the company initially resisted calling its chatbot a therapist, later adopted the language its users were already using, and then made changes to ensure it was not crossing legal lines.
Other apps reacted swiftly to the new rules. Ash urged its users to push back against what the company called "misguided legislation." Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, stressed that licensed therapists should be the ones providing mental health care, noting that therapy requires human empathy and ethical judgment, qualities AI cannot replicate.
In a recent study, a Dartmouth College team tested a generative AI chatbot called Therabot, designed to treat anxiety, depression, and eating disorders. Early results indicated that users rated Therabot comparably to human therapists and showed significant improvement after eight weeks. Human professionals monitored the app's responses to ensure they were safe and evidence-based. Clinical psychologist Nicholas Jacobson urged caution, calling for larger studies before widespread use.
Many of today's AI tools are also optimized for user engagement, which blurs the line between companionship and therapy. Unlike trained therapists, these apps may affirm users rather than challenge harmful patterns of thought, focusing instead on keeping people talking.
Regulation may evolve alongside the technology, but as Kyle Hillman of the National Association of Social Workers noted, chatbots cannot replace the nuanced support of trained professionals. They may serve as a stopgap for some, but they should not be mistaken for effective mental health care.

