Meta recently fixed a serious security issue that let users of its AI chatbot access private prompts and responses belonging to other people. The problem was discovered by Sandeep Hodkasia, founder of security testing firm AppSecure, who shared details of the flaw with TechCrunch. He privately disclosed the bug to Meta on December 26, 2024, and received a $10,000 bounty for his findings.
Hodkasia noticed the issue while examining how Meta AI lets users edit their prompts to regenerate text and images. When a user edits a prompt, Meta’s servers assign a unique number to that prompt and its AI-generated response. By analyzing the network traffic in his browser, Hodkasia found he could change that number and the servers would return someone else’s prompt and response.
The servers were not checking whether the person making the request was actually authorized to view that prompt, so private user information could leak. Hodkasia pointed out that the unique numbers were “easily guessable,” meaning an attacker with automated tools could cycle through them rapidly and scrape other users’ prompts and responses.
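The underlying pattern is a missing ownership check on a server-side lookup that trusts a client-supplied ID. The sketch below is not Meta’s code; it is a minimal, generic illustration in Python, with a hypothetical in-memory store and handler names standing in for the real backend, showing how such an endpoint leaks data and how verifying the requester’s ownership of the record closes the hole.

```python
# Minimal illustration of the missing-authorization pattern described above.
# NOT Meta's code: the data store, IDs, and handler names are hypothetical.

PROMPTS = {
    # prompt_id -> record; small sequential IDs are "easily guessable"
    1001: {"owner": "alice", "prompt": "draw a red fox", "response": "<image>"},
    1002: {"owner": "bob",   "prompt": "summarize my notes", "response": "..."},
}


def get_prompt_vulnerable(prompt_id: int) -> dict:
    """Return whatever record matches the client-supplied ID.

    Because nothing checks who is asking, any logged-in user who changes
    the ID in the request can read another user's prompt and response.
    """
    return PROMPTS[prompt_id]


def get_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    """Return the record only if it belongs to the requesting user."""
    record = PROMPTS.get(prompt_id)
    # Authorization check: the record must exist AND be owned by the requester.
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not found or not authorized")
    return record


if __name__ == "__main__":
    # "bob" requests Alice's record by guessing a nearby ID.
    print(get_prompt_vulnerable(1001))            # leaks Alice's data
    try:
        get_prompt_fixed(1001, requesting_user="bob")
    except PermissionError as err:
        print("blocked:", err)                    # the fixed handler refuses
```

Note that merely switching to long, random identifiers would only make the IDs harder to guess; the server-side ownership check is what actually removes the flaw.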
Meta deployed a fix on January 24, 2025, and confirmed to TechCrunch that it found no evidence the flaw had been exploited. Company spokesperson Ryan Daniels said Meta took the issue seriously and rewarded Hodkasia for his responsible disclosure.
This incident highlights ongoing privacy challenges as big tech companies race to develop AI technologies. The launch of Meta AI’s stand-alone app faced its own hiccups earlier this year, when some users unintentionally shared what they thought were private interactions with the chatbot.
As AI tools become more common, security experts stress the need for robust privacy safeguards. A recent Pew Research Center survey found that 73% of Americans are concerned about how companies handle their data, a reminder of why transparency and security matter as these tools scale.
For further reading on data privacy issues, you can refer to the Pew Research Center’s findings.