Search Engines are Indexing Shared ChatGPT Chats

ChatGPT shared conversations are being indexed by major search engines, effectively turning private exchanges into publicly discoverable content accessible to millions of users worldwide. The issue first came to light through investigative reporting by Fast Company, which revealed that nearly 4,500 ChatGPT conversations were appearing in Google search results.
The discovery utilized a straightforward yet effective Google dorking technique, specifically querying `site:chatgpt.com/share` alongside targeted keywords. This fundamental OSINT methodology uncovered a wealth of ostensibly private dialogues, encompassing everything from routine inquiries about home renovations to highly sensitive discussions concerning mental health, addiction struggles, and traumatic experiences.
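The dorking approach can be illustrated with a short sketch. This is a minimal, hedged example of how such queries might be assembled; the keyword terms are hypothetical illustrations, not queries taken from the Fast Company report:

```python
# Build Google dork queries that restrict results to ChatGPT share links.
# The site: operator limits matches to the shared-conversation path.
BASE_DORK = "site:chatgpt.com/share"

def build_dork(*keywords: str) -> str:
    """Combine the site: restriction with quoted keyword terms."""
    quoted = " ".join(f'"{kw}"' for kw in keywords)
    return f"{BASE_DORK} {quoted}".strip()

print(build_dork("password"))       # site:chatgpt.com/share "password"
print(build_dork("mental health"))  # site:chatgpt.com/share "mental health"
```

Because every shared conversation lives under the same path, a single `site:` restriction is enough to scope a search engine to this content; the keywords merely filter which conversations surface.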

What makes this revelation alarming is that users who clicked ChatGPT's “Share” button likely expected their shared chats to remain confined to a select group of friends, colleagues, or family members. Instead, the content of those chats was indexed by leading global search engines and became publicly searchable.

Introduced in May 2023, ChatGPT's sharing functionality enabled users to generate unique URLs for their conversational threads. Clicking the “Share” button permitted the creation of a public link, offering a critical option: a checkbox labeled “Make this chat discoverable,” which would facilitate its appearance in web searches. Although this process necessitated explicit user action, it became apparent that numerous users did not fully grasp the extensive ramifications of activating this setting.

The concept was uncomplicated: once designated as discoverable, search engine crawlers could index the content, treating it identically to any other public webpage. These shared links adhered to a consistent URL structure (`chatgpt.com/share/[unique-identifier]`), simplifying their discovery via specific search queries.
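That consistent URL structure is exactly what makes the links easy to recognize mechanically. As a sketch, a regular expression can pull share-link identifiers out of arbitrary text; note that the exact shape of the identifier (hex digits and dashes here) is an assumption for illustration, since the article only specifies a unique identifier:

```python
import re

# Matches the consistent share-link structure chatgpt.com/share/[unique-identifier].
# The identifier charset (hex digits and dashes) is assumed for illustration.
SHARE_LINK = re.compile(r"https?://chatgpt\.com/share/([0-9a-f-]+)", re.IGNORECASE)

def extract_share_ids(text: str) -> list[str]:
    """Pull share-link identifiers out of arbitrary text, e.g. a results page."""
    return SHARE_LINK.findall(text)

sample = "See https://chatgpt.com/share/6890ab12-dead-beef and https://example.com/x"
print(extract_share_ids(sample))  # ['6890ab12-dead-beef']
```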

Open Source Intelligence (OSINT) researchers discovered that indexed ChatGPT conversations could offer insights into “exactly what your audience struggles with” and “questions they’re too embarrassed to ask publicly.” These dialogues provided genuine, unvarnished perspectives on human behavior, corporate strategies, and sensitive data often inaccessible through conventional OSINT techniques.

Cybersecurity experts observed that the exposed content included source code, proprietary business information, Personally Identifiable Information (PII), and even passwords embedded within code snippets. Furthermore, research from Cyberhaven Labs indicated that 5.6% of knowledge workers had utilized ChatGPT in a professional capacity, with 4.9% having submitted company data to the platform.
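One practical takeaway is to screen text for obviously sensitive strings before sharing it. The sketch below shows the general idea with two illustrative patterns; real secret scanners use far more extensive rule sets, and these regexes are assumptions for demonstration, not a vetted detection suite:

```python
import re

# Illustrative patterns for sensitive strings; deliberately not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "password_assignment": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}

def flagged_categories(text: str) -> list[str]:
    """Return the names of sensitive-string categories found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

snippet = "contact me at jane.doe@example.com, password = hunter2"
print(flagged_categories(snippet))  # ['email', 'password_assignment']
```

A check like this would not have prevented the indexing itself, but it reflects the kind of pre-share hygiene the exposed passwords and PII argue for.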

Acknowledging the gravity of these privacy implications, OpenAI promptly addressed the situation. On August 1, 2025, Dane Stuckey, the company's Chief Information Security Officer, confirmed the discontinuation of the discoverable feature, stating, “We just removed a feature from ChatGPT that allowed users to make their conversations discoverable by search engines, such as Google.” OpenAI described the feature as “a short-lived experiment to help people discover useful conversations,” yet conceded that it “introduced too many opportunities for folks to accidentally share things they didn’t intend to.” The company also pledged collaboration with search engines to delist previously indexed content.

This incident underscores a core challenge within the AI era: the divergence between user expectations and technological realities. Many users presume their interactions with AI chatbots are confidential, but functionalities such as sharing, data logging, and model training can inadvertently create avenues for data exposure.

While OpenAI has mitigated this particular vulnerability, the incident illuminates wider systemic concerns regarding data management, user consent, and the unforeseen repercussions of AI proliferation.
