

A new era of state-sponsored espionage is emerging with the use of artificial intelligence (AI) in cyberwarfare.
Anthropic PBC, an artificial intelligence company, claims to have detected the first known “AI-orchestrated cyber espionage campaign” in mid-September 2025.
Alleged Chinese state-sponsored hackers used Anthropic’s Claude model to automate significant parts of a cyber espionage campaign that targeted numerous international organizations, including large tech firms, financial institutions, chemical manufacturers, and government agencies.
Claude is a family of generative AI models developed by Anthropic, an AI safety and research company. These models can perform a variety of tasks, such as writing emails, annotating photos, and solving math and coding problems.
According to Anthropic, the attackers carried out reconnaissance, exploit creation, and data exfiltration with minimal human assistance.
In fact, cybercriminals are increasingly employing generative AI tools to fuel their attacks; recent research has shown that AI is being used to create ransomware.
Cybercriminals use AI and machine learning to precisely target vulnerabilities in existing security defenses, automate different stages of the attack process, and adapt their methods in real time.
In cyber espionage, attackers exploit the increasingly advanced general intelligence, autonomous agency, and software-tool capabilities of AI models.



