A recent report from Anthropic reveals a groundbreaking shift in cyber espionage, highlighting how advanced threat actors are harnessing artificial intelligence (AI) to elevate their operations. According to the report, a Chinese state-sponsored group, GTG-1002, orchestrated a sophisticated campaign in which AI operated autonomously across nearly every stage of the attack life cycle. Using Anthropic's agentic coding tool, Claude Code, the operation executed the full set of functions needed for a successful intrusion: reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration. Anthropic estimates that roughly 80 to 90 percent of the operational work was executed by AI without human intervention.
Press reporting and PwC’s global threat intelligence over the past year depict China-based threat actors as early adopters and sophisticated users of AI to scale and refine their operations. This trend underscores the critical importance for governments and organizations globally to accelerate their own AI-driven cyber defense initiatives to keep pace with rapidly evolving attackers.
Key insights:
Cost asymmetry: The operation showed that attackers can add compute, data, and test time to develop exploits and get immediate, scaled impact, while defenders scale linearly, adding headcount and wrestling with fragmented tooling.
AI-driven operations, executed within guardrails: AI agents handled the bulk of operational activity, executing it independently within guardrails. The orchestrating model decomposed the campaign into discrete tasks for subagents, such as vulnerability scanning, credential validation, data extraction, and lateral movement, each of which appeared legitimate when evaluated in isolation.
Decision rights: Humans assumed primarily strategic, supervisory roles, initiating the campaign and then intervening only at critical decision points in the attack life cycle (such as progressing from access to active exploitation) rather than in step-by-step tactical execution.
AI-augmented open-source tools: Although the operation relied mostly on open-source penetration-testing tools, it accelerated their reach with AI, using the model to rapidly identify systems susceptible to those tools and thereby scale their impact.
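The decision-rights pattern described above, an orchestrator delegating discrete tasks to subagents while humans approve only critical transitions, can be sketched roughly as follows. All task names and logic here are hypothetical illustrations, not details taken from the report:

```python
# Sketch of an orchestrator that decomposes a campaign into discrete
# subagent tasks and pauses for human sign-off at critical decision
# points. Task names and logic are illustrative only.

def orchestrate(tasks, approve):
    """Run each (name, run_task, critical) tuple in order; tasks
    flagged critical require human approval before executing."""
    log = []
    for name, run_task, critical in tasks:
        if critical and not approve(name):
            log.append((name, "blocked"))
            continue
        # Each subagent sees only its own narrow task; only the
        # orchestrator holds the full campaign context.
        log.append((name, run_task()))
    return log

tasks = [
    ("vulnerability_scan", lambda: "executed", False),
    ("credential_validation", lambda: "executed", False),
    ("active_exploitation", lambda: "executed", True),   # human gate
    ("data_extraction", lambda: "executed", False),
]

# A supervisor that declines approval stops the critical step while
# the routine tasks, innocuous in isolation, proceed unimpeded.
log = orchestrate(tasks, approve=lambda name: False)
```

The point of the sketch is the asymmetry the report highlights: each individual task looks legitimate on its own, and human judgment enters only at the gated transition.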
This evolution demonstrates that attackers are moving past experimenting with AI on single, simple tasks such as vulnerability discovery and are instead assembling AI agents into the equivalent of an entire hacking team to execute multi-stage exploitation of targeted networks. Groups with limited expertise or resources can leverage AI to perform feats once reserved for expert hacking teams, potentially broadening the threat landscape quickly. For defenders, this demands a paradigm shift. Standing still is not an option; those who fail to evolve will become the easiest, most attractive targets.
The implications are significant. Bad actors can scale simply by adding compute and are no longer limited by finite personnel. Individuals can run large-scale campaigns that once took teams, and operations can proceed 24/7 without rest. Volume, speed, and impact will all increase with AI enablement. Anthropic observed that the AI framework operated at a “speed impossible to match” for human hackers, making “thousands of requests, often multiple per second.”
Additionally, this exploitation campaign was detected and disrupted only because it ran on a major foundation model where Anthropic could observe the activity. Attackers can quickly migrate to privately hosted models and fine-tune them for offensive expertise.
While this attack wasn't novel from a technical perspective, it reflects the reality of a new world order in cybersecurity. The capabilities that allow AI to be weaponized are the same ones that can revolutionize cyber defense; defenders simply have to move faster to keep up. Security operations centers should accelerate the integration of AI agents for automating threat detection, vulnerability assessment, incident response, and SOC orchestration, not as isolated capabilities but as a fully automated system working together at every stage of cyber defense. Defenders need AI not just as a tool but as a force multiplier, turning the tide against increasingly autonomous and sophisticated threats. Attackers will be continuously penetration-testing your defenses with AI, which means we should all evolve toward continuously red-teaming our own infrastructure with AI to find latent flaws before attackers do.
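In skeletal form, a continuous red-team loop of the kind described above might look like the following. The probes and asset model are hypothetical placeholders for illustration, not a real scanner:

```python
# Minimal sketch of one cycle of a continuous red-team loop: run a
# battery of automated probes against your own assets and surface
# findings before attackers do. Probe logic is illustrative only.

def probe_default_credentials(asset):
    # Placeholder; a real probe would attempt known default logins.
    if asset.get("default_creds"):
        return "finding: default credentials"
    return None

def probe_open_admin_port(asset):
    # Placeholder; a real probe would fingerprint the listening service.
    if 8080 in asset.get("open_ports", []):
        return "finding: exposed admin port"
    return None

PROBES = [probe_default_credentials, probe_open_admin_port]

def red_team_cycle(assets):
    """Run every probe against every asset; collect (asset, finding) pairs.
    In production this cycle would run on a schedule and feed a ticketing
    or SOC-orchestration pipeline rather than return a list."""
    findings = []
    for asset in assets:
        for probe in PROBES:
            result = probe(asset)
            if result:
                findings.append((asset["name"], result))
    return findings

assets = [
    {"name": "web-01", "open_ports": [443]},
    {"name": "iot-07", "open_ports": [8080], "default_creds": True},
]
findings = red_team_cycle(assets)
```

The design point is that the loop scales with compute rather than headcount, the same asymmetry the attackers exploited, turned to the defender's advantage.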
Beyond operational use, investment in safeguards that prevent exploitation of AI itself is paramount. Start by evaluating whether your existing controls can manage the applicable threats and risks, and assess where new capabilities and human oversight are needed to augment defenses. Take a multi-pronged approach, embedding security controls across both your agentic AI development life cycle and your agent stack.
The path forward also involves robust industry collaboration: sharing threat intelligence, advancing detection technologies, and enforcing stringent safety controls within AI platforms. Anthropic’s report demonstrates the importance of industry leaders sharing insights and lessons learned so that others can adapt and help ensure attackers cannot weaponize AI against our critical infrastructure.
AI’s rise in offensive cyber operations signals an urgent wake-up call. Embracing AI-driven cyber defense and securing your AI isn’t optional. It’s essential for safeguarding digital ecosystems in an era of unprecedented threats.