Edward Kiledjian's Threat Intel

OpenAI Says It’s Scanning Users' ChatGPT Conversations and Reporting Content to the Police

OpenAI has quietly implemented monitoring systems that scan ChatGPT conversations for harmful content, escalating concerning messages to human reviewers and, in some cases, reporting threats to law enforcement, according to a new company blog post addressing AI-related mental health crises. The company says conversations involving plans to harm others are routed through specialized review pipelines, where human teams are authorized to ban accounts and refer imminent threats to police. It currently does not report self-harm cases, citing user privacy.

The announcement comes amid growing reports of AI chatbots contributing to user self-harm, delusions, and suicides, and OpenAI has acknowledged failures in protecting vulnerable users even as it rolls out these surveillance measures. The policy also sits awkwardly alongside OpenAI's privacy stance in the New York Times' ongoing lawsuit against the company, where OpenAI has resisted handing over user chat logs on privacy grounds. CEO Sam Altman has previously admitted that ChatGPT conversations lack the confidentiality protections of professional therapy or legal consultations.