Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work
A design flaw in ChatGPT’s “Share” function made private conversations publicly accessible and searchable by search engines, exposing deeply personal and problematic user interactions. Digital Digging investigator Henk van Ess reported that the share feature created public pages rather than private links, allowing search engines to index conversations that users believed were confidential.

OpenAI has since removed the public sharing capability and is working to have indexed results taken down, but many conversations remain accessible through web archives. The exposed chats revealed troubling content, including an Italian user seeking advice on displacing Indigenous Amazonian communities for minimal compensation, a lawyer who accidentally prepared a defense for the wrong side of a lawsuit, domestic violence victims planning escapes, and an Arabic speaker criticizing the Egyptian government.

While OpenAI characterized the feature as a “short-lived experiment to help people discover useful conversations,” the incident highlights how users treat AI chatbots as confidants, sharing sensitive information they never expected to become public. That creates real safety risks for vulnerable users, particularly those living under authoritarian regimes or in dangerous personal situations.