Meta, the parent company of Facebook and Instagram, has resolved a security flaw that potentially exposed users’ private AI prompts and generated content to other users, the company confirmed on Wednesday.
The bug was discovered and privately disclosed by Indian security researcher Sandeep Hodkasia, founder of the cybersecurity firm AppSecure. Hodkasia told TechCrunch he had identified the flaw on December 26, 2024, and was later awarded $10,000 by Meta under its bug bounty programme.
Meta implemented a fix on January 24 this year and said it found no indication that the vulnerability had been exploited for malicious purposes.
The flaw, according to Hodkasia, stemmed from the way Meta AI — the company’s standalone chatbot application — handled user prompt editing. While inspecting browser traffic during the editing process, he found that each prompt and its AI-generated response were assigned a unique numerical identifier.
By modifying this identifier, Hodkasia was able to retrieve prompts and responses belonging to other users, an oversight suggesting Meta’s servers were not verifying whether the requester was authorised to access the content in question.
“The prompt numbers were easily guessable,” Hodkasia told TechCrunch, raising concerns that a bad actor could have written a simple script to scrape large volumes of sensitive user content in a short span of time.
Meta spokesperson Ryan Daniels confirmed the fix in a statement and reiterated that “no evidence of abuse” was found.
News of the bug comes as major tech firms continue to push aggressively into generative AI products, despite ongoing concerns around data privacy and platform security.
Meta AI, the company’s flagship chatbot launched earlier this year to rival OpenAI’s ChatGPT, had already faced scrutiny after several users mistakenly shared private conversations publicly. The latest revelation is likely to renew questions about the robustness of safeguards built into rapidly developing AI systems.
Experts have long warned that the pace of innovation in the AI sector has often outstripped the industry’s ability to deploy adequate security protocols. With Meta and others racing to dominate the emerging AI landscape, lapses such as this could have significant implications for user trust.