AI firm claims Chinese spies used its tech to automate cyber attacks


Artificial intelligence company Anthropic has claimed that hackers linked to the Chinese government used its AI chatbot Claude to carry out automated cyber attacks targeting around 30 global organisations.

According to the company, the attackers posed as cybersecurity researchers and tricked the AI into performing small, seemingly innocuous tasks that, when combined, amounted to a sophisticated cyber espionage campaign.

What Is Being Claimed?

  • Anthropic says a Chinese state-sponsored group used AI for cyber espionage
  • Hackers broke tasks into smaller steps to avoid detection
  • About 30 global organisations were targeted
  • Described by Anthropic as the first AI-orchestrated cyber espionage campaign

How AI Was Used

  • Humans selected the targets
  • Claude assisted in coding and automation
  • Helped in breaching systems and extracting data
  • Sorted information for intelligence purposes

Limitations of AI

Anthropic acknowledged that the chatbot made several errors during the campaign, including generating fake login credentials and misidentifying publicly available data as sensitive. These mistakes suggest that fully autonomous cyber attacks remain unreliable.

Why Experts Are Skeptical

  • No verifiable technical evidence has been shared
  • Cybersecurity experts say claims are too vague
  • China has denied involvement
  • AI is still not advanced enough for full automation

Industry Context

Other companies, including OpenAI and Microsoft, have also reported hackers using AI tools, but mostly for basic tasks such as research, translation, and debugging rather than full-scale cyber attacks.

Growing Concerns About AI Security

Experts warn that AI could lower the barrier to cybercrime, making attacks easier and more scalable. However, current evidence suggests AI is still a supporting tool rather than a fully independent hacking system.

Anthropic’s Response

The company says it has banned the attackers, informed affected organisations, and notified law enforcement. It also believes AI can be used to defend against AI-driven threats.

Conclusion

This case highlights growing concerns about AI in cyber warfare. While the risks are real, there is still limited proof of fully autonomous AI-led attacks. For now, AI remains a powerful assistant rather than an independent cyber weapon.
