On August 28, a college parking lot in Missouri descended into chaos when 17 cars were extensively vandalized in the space of 45 minutes. Windows were shattered, mirrors snapped off, and the damage ran into tens of thousands of dollars.
After a month of investigation, police had gathered shoe prints and security footage, but what led to charges against 19-year-old Ryan Schaefer came from an unexpected source: his conversations with ChatGPT. Schaefer reportedly confessed to the chatbot, asking, “how f**ked am I bro?... What if I smashed the shit outta multiple cars?” It is an unusual case of self-incrimination by way of an AI chatbot.
Weeks later, ChatGPT surfaced in another, far more serious investigation. Jonathan Rinderknecht, 29, was charged with allegedly igniting the Palisades Fire in California, which destroyed homes and claimed 12 lives. Among the details investigators flagged was his request to ChatGPT for an image of a burning city.
These incidents may be just the beginning. OpenAI CEO Sam Altman has warned that conversations with the chatbot carry no legal confidentiality, raising serious privacy concerns. He noted that many people, especially younger users, treat the AI like a confidant or therapist, sharing deeply personal issues without realizing the risks involved.
A recent OpenAI study highlighted a surge in users seeking medical advice, shopping tips, and creative storytelling from AI. As these tools become more versatile, they’re also being misused. Some AI apps are marketed as virtual therapists but lack the safeguards typical of professional services.
Moreover, the sheer volume of personal data being shared with AI platforms makes it attractive to law enforcement and cybercriminals alike. For instance, Perplexity's new AI-powered web browser shipped with vulnerabilities that hackers could exploit, threatening user privacy.
Tech companies, meanwhile, are eager to monetize this personal data. Starting in December, Meta will use interactions with its AI to serve targeted ads, a move many find intrusive. If users chat about hiking, for example, they may suddenly see ads for hiking gear and related content, whether they want them or not.
While such targeted marketing could seem harmless, there are troubling implications. Previous studies have shown how misleading ads can prey on vulnerable individuals. Users searching for financial assistance have faced predatory loan offers, while others have been targeted based on personal crises.
Despite the risks, AI’s capabilities continue to grow. Currently, over a billion people use standalone AI apps, often without recognizing the potential for exploitation. The idea that “if you’re not paying, you’re the product” has evolved in the AI landscape; users may now be more accurately viewed as prey.
Reflecting on this situation, Pieter Arntz from Malwarebytes argues that the tech industry must prioritize transparency and user control amidst growing privacy challenges. As conversations about AI’s role in our lives deepen, the ethical implications may soon push issues of privacy back into the spotlight.

