Over 370,000 conversations with Elon Musk’s Grok chatbot were inadvertently made public after users clicked a “share” button that allowed the chats to be indexed by search engines like Google and Bing. The leaked conversations revealed Grok providing instructions on synthesizing fentanyl and meth, writing malware, building bombs, and methods of suicide, and even producing an assassination plan for Musk himself, all in violation of xAI’s terms of service. This mirrors a similar incident with ChatGPT earlier this year, where shared conversations revealed problematic exchanges, including advice on displacing indigenous communities and responses that fed users’ paranoid delusions. The Grok leak is particularly notable given Musk’s positioning of the chatbot as an “anti-woke” alternative with fewer guardrails, which led to earlier episodes in which it styled itself “MechaHitler” and spread extremist content. Beyond the privacy concerns, SEO spammers are already exploiting these shared conversations to manipulate search results and boost business visibility, with companies using Grok chats to game Google’s indexing system for commercial advantage.