State-backed hacking groups and cybercriminals have moved beyond simple experimentation with artificial intelligence and are now integrating it into every stage of their attack chains, according to a new report released Thursday, February 12, 2026, by the Google Threat Intelligence Group (GTIG).
The report reveals that adversaries from China, Iran, North Korea, and Russia are actively using Google’s Gemini AI models to accelerate operations, ranging from initial reconnaissance and phishing to malware development and technical troubleshooting. While AI has not yet fully replaced human operators, Google warns that attackers are now “wiring” AI directly into their malicious tools to automate complex tasks and evade detection.
State-Sponsored Groups Leading the Charge
Google’s researchers identified specific government-backed groups that are leveraging generative AI to boost their productivity and effectiveness.
- North Korea (UNC2970): This group is using Gemini to synthesize open-source intelligence (OSINT) on targets in the defense and cybersecurity sectors. They profile high-value individuals by mapping technical job roles and salary information to create highly convincing phishing personas, often masquerading as corporate recruiters.
- Iran (APT42): Known for aggressive social engineering, this group uses Gemini to conduct “rapport-building phishing.” They generate detailed biographies and personas to establish trust with victims before delivering malicious payloads. They also use the tool to translate content and debug their own malicious code.
- China (APT31 & UNC795): These groups have been observed adopting “expert cybersecurity personas” when prompting Gemini for vulnerability analysis. In one case, APT31 directed the model to analyze specific vulnerabilities, such as SQL injection and Remote Code Execution (RCE), against U.S.-based targets. UNC795 used the model to troubleshoot code and to research technical capabilities for intrusions.
- Russia: The report notes that Russian actors, along with those from other nations, are using AI to generate political satire and propaganda, though these efforts have not yet produced “breakthrough” capabilities in information operations.
New Malware “Calls” AI for Code
One of the most significant findings in the report is the emergence of malware that makes direct API calls to AI models during an attack.
Google identified a malware family dubbed HONESTCUE, a downloader and launcher that sends a hard-coded prompt to the Gemini API. The AI responds by generating C# source code, which the malware then compiles and executes directly in the computer’s memory. This “fileless” approach helps the attackers avoid leaving artifacts on the victim’s hard drive, making traditional detection more difficult.
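In concrete terms, the pattern is simple to sketch. What follows is a hypothetical Python analogue of the fileless flow GTIG describes, not HONESTCUE itself (which reportedly compiles AI-generated C#): source code arrives as text at runtime, is compiled in memory, and executes without a payload file ever being written to disk. The hard-coded string stands in for the code an attacker’s tool would fetch from a model API.

```python
# Hypothetical analogue of the "fileless" pattern attributed to HONESTCUE.
# In the real malware, this string would reportedly be C# source returned
# by the Gemini API in response to a hard-coded prompt; here it is a
# harmless stand-in so the sketch is self-contained.
fetched_source = '''
def payload():
    return "running entirely from memory"
'''

# compile() turns the received text into a code object that exists only in RAM.
code_obj = compile(fetched_source, filename="<in-memory>", mode="exec")

# exec() runs that code object directly; nothing is written to the filesystem,
# so file-based antivirus scanning has no artifact to inspect.
namespace = {}
exec(code_obj, namespace)
print(namespace["payload"]())  # -> "running entirely from memory"
```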
Another threat, COINBAIT, is a sophisticated phishing kit masquerading as a cryptocurrency exchange. Evidence suggests the kit was built using AI code-generation tools such as “Lovable AI.” Its verbose logging messages, which appear to be generated by large language models (LLMs), let attackers track their data theft in real time.
“ClickFix” and Model Theft
Cybercriminals are also abusing the public sharing features of AI platforms. In a tactic known as ClickFix, attackers generate realistic-looking instructions for fixing common computer issues, such as clearing disk space, and host them on trusted AI platforms using shareable links. When victims follow the instructions, they unknowingly copy and paste malicious commands into their system terminals, installing information-stealing malware on both macOS and Windows; on macOS, a common payload is AMOS (also known as ATOMIC).
Beyond operational attacks, Google observed a rise in “distillation attacks,” or model extraction, in which adversaries send massive volumes of queries to a proprietary model like Gemini; one campaign involved more than 100,000 prompts. By analyzing the model’s responses, attackers aim to “clone” its reasoning so they can train their own systems without incurring the high costs of development.
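Mechanically, an extraction attempt is little more than a harvesting loop. The sketch below is a generic illustration under assumed names (the query_model stub, the prompt list, and the output file are invented for the example, not taken from the report): prompts go out, responses come back, and the pairs are saved as supervised fine-tuning data for a “student” model.

```python
import json

def query_model(prompt: str) -> str:
    """Placeholder for a call to the target model's API. Stubbed here;
    a real extraction campaign would hit the live endpoint at scale."""
    return "<model response>"

# A large bank of prompts; GTIG cites one campaign exceeding 100,000.
prompts = [
    "Explain the reasoning behind X, step by step.",
    "Write a function that does Y, and justify each choice.",
]

# Harvest (prompt, response) pairs into JSONL, a common format for
# supervised fine-tuning. Training a student model on enough such pairs
# is how an attacker approximates the target's behavior without paying
# its development costs.
with open("distilled_pairs.jsonl", "w") as f:
    for prompt in prompts:
        record = {"prompt": prompt, "response": query_model(prompt)}
        f.write(json.dumps(record) + "\n")
```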
Industry Reaction and Defense
Steve Miller, AI threat lead at GTIG, stated that while attackers are experimenting with new ways to bypass safeguards, Google is continuously updating its defenses. The company has disrupted campaigns by disabling accounts and assets associated with these actors.
However, not all experts are convinced of the threat’s severity. Dr. Ilia Kolochenko, CEO of ImmuniWeb, criticized the report as “poorly orchestrated PR,” arguing that while AI can automate simple processes, it is not yet capable of executing a full “cyber kill chain” on its own. He also warned that Google’s awareness of this abuse could expose the company to liability for damages caused by these AI-enabled attacks.
