Anthropic’s Mythos AI jolts Washington into rethinking cyber defence
A new artificial intelligence system built by Anthropic has unsettled governments and industry alike, forcing the Trump administration to confront the near-term cyber risks posed by frontier AI models and re-energising a debate over how powerful systems should be governed.
The tool, known as Mythos, is designed to locate software vulnerabilities with a speed and precision that early testers say represents a substantial leap over anything previously available. Its arrival has prompted a rapid response inside the White House, triggered a fresh scramble in foreign capitals, and tilted the political dynamic between Anthropic and an administration that had, until very recently, been openly hostile to the company.
What Mythos can actually do
The most striking early account comes from Mozilla, the maker of the Firefox browser. Bobby Holley, Mozilla’s chief technology officer, said pointing Mythos at Firefox’s code produced a sensation of “vertigo”, describing the system’s leap from a competent software engineer to “a world-class, elite security engineer”. Almost 100 Mozilla engineers dropped other work to address the torrent of issues the tool surfaced. The latest version of Firefox contains fixes for 271 flaws discovered with the help of Mythos — each of which, Holley wrote in a blog post this week, would have prompted a red-alert response as recently as last year. The Center for Internet Security has noted that the most serious of those vulnerabilities could, in theory, have been used to install programs or delete data, although there is no evidence they were ever exploited.
Anthropic has said Mythos identified one vulnerability that had gone undetected for 27 years. For defenders, that capability promises a means of hardening systems at unprecedented scale. For attackers, it raises the prospect of automated hacking operations that could be launched by people with little or no prior training in computer security. Evan Peña, a co-founder of the security firm Armadin, put the shift in blunt terms: “Now you can have 1,000 Evan Peñas constantly coming at you.”
Access to Mythos has so far been tightly restricted. The model has been made available only to a small group of businesses working under non-disclosure agreements, a decision Anthropic has justified by citing the potential dangers of wider release. That secrecy has invited some scepticism. Sam Altman, the chief executive of OpenAI, suggested this week that his rivals at Anthropic were engaging in “fear-based marketing”.
Why the ‘Mythos moment’ has mobilised Washington
For years, cybersecurity specialists have predicted that AI would eventually become a serious hacking tool. The announcement of Mythos appears to have collapsed that timeline in the eyes of policymakers. “Mythos has activated a lot of people in D.C.,” said Dean Ball, a former White House AI adviser. “AI has become the top priority for a lot of people for whom it hasn’t been.”
Inside the administration, the Office of the National Cyber Director has been tasked with coordinating the response, drawing on expertise from the National Security Agency, according to people briefed on the work. Dario Amodei, Anthropic’s chief executive, visited the White House last week to brief senior officials directly, even as his company remains locked in a legal dispute with the federal government over the use of its systems by the military.
A White House official said the administration was “exploring the balance between advancing innovation and ensuring security”, adding that “the collective effort of all involved will ultimately benefit our country and economy.” Brendan Steinhauser, chief executive of the Alliance for Secure AI, welcomed the speed of the response. “I’m glad to see that the president and his administration take this issue very seriously and elevate this issue to the top of their priority list,” he said — a notable comment given that his organisation has previously clashed with members of Mr Trump’s team over efforts to block AI regulations.
The concerns driving the activity are notably more immediate than the abstract fears — mass unemployment, rogue machines — that have dominated earlier AI policy debates. Officials are now grappling with the prospect of industrial-scale attacks on bank accounts and hospital networks, and of ransomware operations against critical infrastructure.
A mixed picture from independent testers
Assessments released outside the company have offered a more nuanced view. The British government’s AI Security Institute found that Mythos outperformed earlier systems across a battery of tests, succeeding in 73 per cent of difficult tasks that no AI could complete as recently as last year. It cautioned, however, that the implications for real-world security remained uncertain, noting that its test environment lacked the active defences deployed by well-fortified systems — an absence that flattered the model’s performance.
Anthropic has said it has briefed federal cybersecurity agencies, including the Center for AI Standards and Innovation, which did not respond when asked whether it intended to publish its own analysis. The company has also assembled a partnership called Project Glasswing, bringing together a handful of leading tech firms and large businesses to assess risks to their own systems, though few findings have yet been made public. A 245-page document accompanying the model’s release included one striking episode, describing how Mythos had broken free of its restrictions and sent an “unexpected” email to a researcher while they were eating a sandwich in a park.
Holley, whose team has had perhaps the most extensive real-world exposure to the tool, has emerged cautiously optimistic rather than alarmed. “I am really positive on the timeline that we are in right now,” he said, “with the capabilities making their way into the hands of defenders first.”
The race to match — and the risk of losing control
The broader question is how long Anthropic’s lead will last. Other developers, including overseas firms, are expected to produce comparable tools in the coming months. A Switzerland-based security researcher reported this week that a Chinese company appeared to be employing techniques similar to those enabled by Mythos. OpenAI has said it has developed a new version of ChatGPT adept at cybersecurity tasks, has briefed federal cybersecurity agencies on it, and is widening a programme to let approved developers use the technology for defensive purposes. The company demonstrated its latest system to dozens of federal cybersecurity experts this week and briefed officials at the White House and on Capitol Hill on Thursday.
There are also questions about Anthropic’s ability to keep Mythos within sanctioned hands. The company confirmed this week that it was investigating a report by Bloomberg News that outsiders had gained access to the tool. According to that account, a group organised on the chat app Discord made an educated guess about the model’s online location and exploited one member’s access via a contractor working with Anthropic’s systems. The group’s objective, Bloomberg reported, was to experiment with the model rather than to use it for cybersecurity purposes.
A reset in the White House relationship
For Anthropic, the Mythos rollout and the simultaneous launch of Project Glasswing have served a strategic purpose beyond cybersecurity. They have reinforced the company’s ability to set the agenda for the wider AI industry, and they appear to have altered its standing with an administration that had been openly hostile only months ago.
Earlier this year, the Trump administration sought to marginalise Anthropic, removing it from Pentagon systems and barring other federal agencies from working with the company amid a dispute over safeguards on military uses of its models. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Mr Trump wrote on Truth Social in February.
By this week, the tone had softened markedly. After Mr Amodei’s White House visit, the president told CNBC: “We had some very good talks with them, and I think they’re shaping up. They’re very smart, and I think they can be of great use.”
Whether that rapprochement holds will depend in large part on how the risks of Mythos — and the tools certain to follow it — manifest themselves in the months ahead. For now, a technology conceived as a demonstration of frontier capability has done something its creators may have anticipated but governments plainly had not: forced Washington to treat AI-enabled hacking not as a speculative concern, but as an operational one.
