According to a January 2026 report from OpenAI, more than 5% of all ChatGPT messages concern health care. That amounts to billions of messages each week: one in four of OpenAI's 800 million users asks a health-related question weekly, and roughly 40 million users turn to ChatGPT for health advice every day, many treating it as a regular part of managing their health.
Many users turn to ChatGPT when they cannot reach medical help: around 70% of health-related conversations take place outside regular clinic hours. For people unsure whether a symptom needs urgent care or can wait for an appointment, ChatGPT offers a quick way to assess the situation.
OpenAI notes that reliability improves when users supply context, such as insurance documents and prior medical guidance, but risks remain: ChatGPT can give inaccurate or even dangerous advice, particularly on mental health topics.
Notably, many users are not looking for a diagnosis at all. Between 1.6 million and 1.9 million messages each week deal with navigating health insurance, understanding claims, and managing costs. A recent survey found that three in five U.S. adults had used AI tools for health-related questions in the previous three months, often when they first feel unwell or are preparing for a doctor's visit.
On the provider side, adoption is accelerating as well. An American Medical Association survey found that 66% of U.S. physicians used AI for some tasks in 2024, up from 38% the year before. In rural areas, where resources are often limited, tools such as Oracle Clinical Assist help physicians cut time spent on administrative work, freeing them to focus on patient care.
However, as AI becomes more prevalent in health care, debate over its regulation has intensified. Legal experts are weighing how to keep tools like ChatGPT safe as they evolve. OpenAI faces multiple lawsuits from families who allege that its technology contributed to loved ones' mental health crises, and in response to such concerns, several states have begun enacting laws that bar AI chatbots from delivering mental health services or making treatment decisions.
OpenAI says it is working to improve health-related responses: the GPT-5 model is designed to ask more follow-up questions, use more cautious language, and more consistently encourage users to seek professional help when needed.
This use of AI in health care reflects a broader shift: each year, more of how we interact with the health system, from looking up information to navigating insurance, moves online. As we embrace these tools, the challenge is to balance innovation with protections for vulnerable users.
For more on AI in health care, see the American Medical Association's resources on the AMA website.

