OpenAI released a new security report on Wednesday detailing how criminal networks and state-sponsored actors are using artificial intelligence to deceive victims and manipulate public opinion. The findings reveal that malicious groups have integrated ChatGPT into complex fraud operations, ranging from romantic deception to the impersonation of legal professionals and government officials. While the technology is being used to generate content and improve the credibility of these schemes, the company found no evidence that its models were used to execute automated cyberattacks.
Romance Scams Targeting Wealthy Victims
One of the most significant threats identified in the report involves a sophisticated romance scam operation that likely originated in Cambodia. This network specifically targeted Indonesian men who expressed interest in luxury lifestyles. The operators used ChatGPT to create a convincing façade for a fictitious high-end dating service, generating everything from logos to images of nonexistent women.
The scammers used a multi-stage approach to defraud their targets. Initially, victims were asked to select a partner from a list of fabricated profiles. An AI chatbot, designed to act as a “flirty receptionist,” would then engage the victim to build trust before moving the conversation to the messaging platform Telegram. Once on Telegram, the operation switched to a hybrid model involving both human operators and AI tools to maintain romantic and sexually explicit dialogue.
The ultimate goal of this scheme was to lure victims into performing “missions” or “tasks” that required increasingly large financial transfers via digital wallets or bank accounts. OpenAI estimated that this single operation was likely defrauding hundreds of victims every month. In one notable instance of misuse, a user involved in the scheme explicitly identified their occupation as a “scammer” when asking the AI for tax advice.
Impersonating Lawyers and Law Enforcement
The report also highlighted a growing trend of criminals using AI to pose as legal authorities. OpenAI banned a cluster of accounts that were generating content to impersonate law firms, individual attorneys, and U.S. law enforcement agencies. These actors used the technology to create credible-looking documents and social media posts to support their fraudulent claims.
In one specific case, scammers asked ChatGPT to generate a counterfeit membership card for the New York State Bar Association. By creating such authentic-looking materials, these groups aim to intimidate victims or lend legitimacy to their demands for payment and personal information.
State-Backed Influence Operations
Beyond financial fraud, the report detailed extensive use of AI by state-aligned actors seeking to influence politics and silence dissent. OpenAI identified an operation attributed to an individual associated with Chinese law enforcement that targeted Japanese Prime Minister Sanae Takaichi. This campaign was launched after Takaichi publicly criticized human rights issues in Mongolia.
The operator attempted to use ChatGPT to plan a “covert IO” (influence operation) against the Prime Minister. Although the model refused the initial request to plan the operation, the user later returned to ask the AI to “polish a status report” on the same campaign, suggesting that the operation proceeded using other means. The report described this effort as large-scale and resource-intensive, involving hundreds of staff and thousands of fake accounts across multiple platforms.
Another campaign involved a cluster of accounts, likely originating from mainland China, that posed as a Hong Kong-based entity called “Nimbus Hub Consulting”. These accounts generated English-language emails inviting U.S. state officials, business analysts, and financial experts to join paid consultations. The operators sought publicly available information on U.S. federal building locations and government employee distribution. They also requested step-by-step instructions for installing “FaceFusion,” a real-time face-swapping software.
AI as an Amplifier, Not a Weapon
Despite the variety of malicious uses documented, OpenAI emphasized that threat actors are primarily using AI as a tool to support and amplify existing strategies rather than to invent new categories of attacks. Ben Nimmo, the principal investigator on OpenAI’s intelligence and investigations team, noted that these operations are “industrialized” and designed to hit critics with “everything, everywhere, all at once”.
However, the report clarified that there is no evidence of attackers using ChatGPT to conduct direct, automated offensive hacking operations. Instead, threat actors are using the tool for tasks such as coding assistance, translation, and content generation. The report also noted that these actors often rely on multiple AI models; for instance, the Chinese influence operation utilized other models like DeepSeek and Qwen alongside OpenAI’s tools to manage their workflows.
