Teenagers should be cautious when considering AI chatbots for mental health support. A recent report by Stanford University’s Brain Science Lab and Common Sense Media highlights some serious concerns. After testing popular chatbots like ChatGPT, Claude, and Gemini, researchers found that these tools often fail to handle mental health inquiries effectively.
The study involved thousands of interactions, revealing that chatbots often act as eager listeners but don’t provide safe or helpful guidance. Nina Vasan, the director of the Brain Science Lab, explains that these chatbots can’t identify serious mental health issues. They tend to jump between offering advice and acting like a supportive friend, without recognizing when someone truly needs help.
Interestingly, about 75% of teens use AI for companionship, which may include seeking mental health advice. This trend emphasizes the urgent need for educators and parents to teach teens the differences between AI interaction and talking to a real person. Robbie Torney from Common Sense Media stresses that it’s crucial for teens to understand how chatbots operate and recognize their limitations.
While some chatbots have been updated to respond better to prompts about self-harm, many still overlook other serious conditions like anxiety and PTSD. Approximately 20% of young people deal with these issues, making the accuracy of chatbot responses even more critical. The bots often fail to clarify their limitations, which can mislead users into thinking they are qualified to provide support.
In some instances, chatbots have even validated concerning statements. For example, when a user claimed to have invented a tool to predict the future—a potential sign of psychosis—a chatbot responded with excitement. This kind of interaction can be harmful because it reinforces unhealthy beliefs.
Policymakers are now starting to address these risks. Recent bipartisan legislation aims to limit access to these chatbots for minors and requires transparency about their AI nature. Additionally, the Federal Trade Commission is investigating potential dangers posed by companies that create chatbots designed to mimic human emotions.
As the conversation around AI and mental health continues to evolve, it’s vital for both teens and adults to approach these technologies with caution. Finding a balance between using technology for companionship and seeking real human support is essential for maintaining mental well-being.
For more information on the potential dangers of AI chatbots in mental health, check out the full report by Stanford and Common Sense Media here.

