Anthropic Introduces Project Glasswing For AI-driven Cyber Defense

The initiative brings together some of the world’s largest technology companies to test a powerful unreleased AI model capable of uncovering hidden software vulnerabilities, raising both hopes for stronger defenses and concerns about how such tools could be misused.

Anthropic has begun testing a powerful new cybersecurity system designed to uncover hidden flaws in widely used software, enlisting some of the world’s largest technology firms in a tightly controlled effort to probe its capabilities.

The initiative, known as Project Glasswing, brings together companies including Microsoft, Apple, Amazon Web Services, NVIDIA, Google and Cisco to evaluate an unreleased model called Claude Mythos Preview, which is designed to detect vulnerabilities that traditional tools often miss.

The system has not been made public. Anthropic said it was withholding the model because of the risk that it could be used to develop more sophisticated cyberattacks, limiting access to a group of what it called “launch partners.”

“As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit,” Anthropic said in its blog post.

The company is also expanding access to 40 additional organizations responsible for maintaining critical digital infrastructure, allowing them to test the system against both proprietary and open source software. Anthropic said it would commit up to $100 million in usage credits and $4 million in direct support to open source security groups.

At the center of the effort is the model’s ability to identify so-called zero-day vulnerabilities: flaws unknown to developers and therefore unpatched at the time they are discovered. In internal testing, Anthropic said the system uncovered weaknesses that had gone undetected for decades.

Among the most striking examples was a flaw in OpenBSD that had persisted for nearly 27 years. The vulnerability could allow a malicious actor to crash any machine running the operating system simply by connecting to it. The model also identified a 16-year-old flaw in FFmpeg, embedded in code that had been tested millions of times without revealing the issue.

It further detected clusters of vulnerabilities in the Linux kernel, the foundational software that underpins much of the world’s server infrastructure. According to Anthropic, these flaws could allow attackers to gain complete control of a system through ordinary user access.

The findings underscore both the promise and the risk of applying artificial intelligence to cybersecurity. Tools capable of identifying hidden weaknesses at scale could help secure critical systems, including those used in healthcare, transportation and energy. But the same capabilities could be repurposed to exploit those systems if they fall into the wrong hands.

“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the blog said.

Anthropic said it is also in discussions with officials in the United States government about the model’s potential uses, including both its defensive applications and its capacity to simulate offensive cyber operations.

The project reflects a broader shift in the cybersecurity landscape, where artificial intelligence is emerging as both a tool for defense and a source of new vulnerabilities. As companies race to deploy increasingly powerful systems, efforts like Glasswing suggest that some of the most advanced tools may remain behind closed doors, at least for now, shared only among a small circle of trusted institutions.
