A Chinese law enforcement official used ChatGPT the way most people use a private notebook — to draft, revise, and polish status reports about their work. The problem: the work was a covert campaign to silence overseas critics of the Chinese Communist Party. OpenAI's threat intelligence team read the reports, pieced together a transnational repression operation involving hundreds of operators, thousands of fake social media accounts, forged American court documents, and impersonation of United States immigration officials — then published the findings.
The revelation, disclosed in OpenAI's February 2026 threat report, is the most detailed window yet into how a state security apparatus industrializes online repression. It also marks a turning point in a two-year pattern: since February 2024, AI companies have disrupted more than 40 state-linked influence networks, effectively becoming accidental counterintelligence platforms. The operational security failure — treating a foreign company's chatbot as a secure diary — exposed not just one campaign but the bureaucratic machinery behind it, including internal performance metrics and staffing levels across multiple Chinese provinces.