ChatGPT Leaks Expose Private Chats in Google Search Console
Unexpected user prompts surface in Google Search Console, raising fresh concerns about AI privacy and data security.
Private conversations with ChatGPT—covering everything from personal relationship dilemmas to sensitive business discussions—have unexpectedly appeared in Google Search Console (GSC), a tool typically used by website owners to monitor how their sites are discovered via Google Search. This unusual exposure has sparked new concerns over how AI companies handle user data and what privacy users can truly expect.
Discovery and Early Reports
The incident first gained public attention in late 2025, when analytics consultant Jason Packer noticed bizarrely long and specific user input strings showing up as queries in Google Search Console reports. Unlike the usual short keywords or phrases, these suspiciously detailed queries appeared to be full ChatGPT prompts. Working with web consultant Slobodan Manić, Packer documented the findings in a detailed post on his Quantable blog, which was later picked up by outlets such as Ars Technica.
Their investigation revealed that hundreds of ChatGPT user prompts—sometimes spanning several sentences—were appearing in GSC data for some sites. These queries often included a specific URL pattern ("https://openai.com/index/chatgpt/"), which suggested an unusual technical linkage between ChatGPT sessions and public search query reports.
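Site owners who want to check their own properties can run the same basic test Packer described. The sketch below is a minimal, hypothetical example: it assumes a GSC performance report exported as a CSV with a query column, and the column name, length threshold, and marker string are illustrative assumptions rather than anything Packer published.

```python
import csv

# Heuristic thresholds: assumptions for illustration, not values
# taken from Packer's report.
MIN_PROMPT_LENGTH = 100                    # real search keywords are far shorter
LEAK_MARKER = "openai.com/index/chatgpt"   # URL fragment seen in leaked queries

def find_suspect_queries(path):
    """Yield GSC queries that look like full ChatGPT prompts."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            query = row.get("query", "")   # assumed column name in the export
            if LEAK_MARKER in query or len(query) > MIN_PROMPT_LENGTH:
                yield query

if __name__ == "__main__":
    for q in find_suspect_queries("gsc_performance_export.csv"):
        print(q[:120])  # truncated preview of each suspect query
```

Anything such a filter surfaces still needs manual review, since sentence-length queries can also come from voice search or pasted text, but a match on the OpenAI URL fragment would be a strong signal.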
Unclear Causes and OpenAI’s Response
While the evidence pointed to ChatGPT prompts surfacing in GSC, the precise cause remains uncertain. Packer and Manić proposed that a technical bug, possibly related to a prompt input box or a now-deprecated public sharing feature, might have inadvertently sent user prompts to Google Search. They noted that the prompts appeared where sites ranked highly for keywords tokenized from the OpenAI URL, inviting further speculation on the underlying search integration.
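To see why a leaked prompt prefixed with that URL could land in the reports of sites ranking for those keywords, consider how such a string breaks apart into tokens. The snippet below is purely illustrative of the hypothesis, not a claim about how Google actually tokenizes queries, and the example prompt is invented:

```python
import re

# An invented example of a leaked query: the OpenAI URL followed by a prompt.
leaked_query = "https://openai.com/index/chatgpt/ should I tell my partner about my debt"

# Split on anything that is not a letter or digit, as a crude stand-in
# for search-engine tokenization.
tokens = [t.lower() for t in re.split(r"[^A-Za-z0-9]+", leaked_query) if t]
print(tokens)
# ['https', 'openai', 'com', 'index', 'chatgpt', 'should', 'i', 'tell', ...]
```

On this theory, any site that ranked well for tokens such as "openai" or "chatgpt" could see the full leaked prompt appear among its GSC queries.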
Notably, there is no definitive public confirmation that OpenAI deliberately sent raw user prompts to Google Search as queries. Rather, the evidence suggests the exposure may have resulted from an indexing or routing bug, or from an integration mishap connected to ChatGPT’s search features.
When approached for comment, OpenAI acknowledged that it was “aware” of the issue and stated it had “resolved a glitch that temporarily affected how a small number of search queries were routed.” Reports from Ars Technica and TechRadar indicate that OpenAI had already removed or modified its public chat-sharing feature after earlier incidents in which shared conversations were indexed by Google Search without users fully realizing they would become publicly searchable.
Despite these reassurances, OpenAI has not disclosed the full scope of the incident or provided exact figures on affected users. The company emphasized that only “a small number” of queries were impacted, but users and privacy advocates remain concerned about broader data handling practices.
The Broader AI Privacy Problem
The ChatGPT-GSC leak underscores a core vulnerability for any platform handling sensitive user data: even private prompts can unintentionally become public through technical flaws or unclear data-sharing boundaries. Unlike earlier cases where users had to opt in to sharing, the recent GSC appearances happened without user intent or awareness.
For the hundreds of millions of people globally who use ChatGPT and other generative AI platforms, this event highlights a basic risk—users may not realize the extent to which their prompts are stored, analyzed, or potentially exposed in unexpected ways. Without transparent, user-friendly data privacy policies and robust safeguards, trust in AI platforms can quickly erode.
Looking forward, the incident raises several important questions for AI companies and the broader tech industry:
Are user prompts permanently stored, and how are they secured against leaks?
What technical safeguards exist to prevent accidental data exposure?
How promptly and transparently do companies communicate and remediate privacy flaws?
Until these questions are thoroughly addressed, users should exercise caution in sharing personal, sensitive, or confidential information with AI chatbots.
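In the meantime, one practical habit is to strip obvious identifiers from text before pasting it into a chatbot. The sketch below is a deliberately crude, assumption-laden example using simple regular expressions; real PII detection is a much harder problem, and these patterns catch only the most obvious formats:

```python
import re

# Very rough patterns for common identifiers: illustrative only,
# nowhere near exhaustive.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or (555) 123-4567."))
# Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

A client-side filter like this does nothing about the underlying routing problem, but it limits what an exposure of this kind can reveal.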
Conclusion
The surfacing of ChatGPT prompts in Google Search Console signals the need for tighter oversight, stronger corporate safeguards, and clearer user education regarding AI privacy. While OpenAI acted to patch the reported glitch, the core issue—a lack of transparency around how AI input data travels and may be used—remains a critical concern. Both users and companies have a stake in demanding a higher standard for privacy and accountability as AI becomes ever more integrated into our daily lives.

