U.S. Officials Warn Banks on Cybersecurity Risks from Anthropic AI
By John Nada · Apr 12, 2026 · 4 min read
U.S. officials have warned banks about cybersecurity risks from Anthropic's Mythos AI, which can exploit software vulnerabilities, urging stronger defenses.
U.S. officials have alerted major banks about cybersecurity threats linked to Anthropic’s Mythos AI model, which can identify and exploit operating system vulnerabilities. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell held a meeting with CEOs from top Wall Street banks, including Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs, to discuss these risks. The advanced capabilities of Mythos, which can reportedly discover zero-day flaws, have raised significant concerns among regulators.
The urgency of the meeting underscores the pressure on financial institutions to bolster their cybersecurity defenses as AI technology grows more capable. Officials emphasized that the same capabilities that make systems like Mythos useful for defense could, if misused, equally empower attackers. That dual-use dynamic poses a threat to financial infrastructure and has prompted calls for banks to take proactive steps against AI-assisted cyberattacks.
During the meeting, officials discussed the broader implications of AI systems that can identify and exploit vulnerabilities across operating systems and web browsers. Security researchers have warned that tools capable of automatically discovering vulnerabilities could accelerate both defensive security work and malicious hacking. The meeting amounted to a call to action, urging banks to strengthen their cybersecurity strategies against an evolving threat landscape.
Anthropic’s Mythos model first became a topic of conversation after draft materials about the system leaked online in March, revealing what the company described as its most capable AI model yet. In testing, the system reportedly found thousands of previously unknown software vulnerabilities, including zero-day flaws across major operating systems and web browsers. Such capabilities, while beneficial for patching security gaps, also pose a risk as they can be exploited by malicious actors.
Anthropic researchers indicated that Mythos Preview’s vulnerability-discovery capabilities were not intentionally trained but emerged from broader improvements in the model’s coding, reasoning, and autonomy. This dual-use nature of AI technology requires a careful approach to its deployment, as the same advancements that make the model effective at identifying weaknesses also enhance its potential for exploitation. As Anthropic noted, “The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them.”
Because of these capabilities, Anthropic has restricted access to a small group of cybersecurity organizations. “Given the strength of its capabilities, we’re being deliberate about how we release it,” Anthropic stated. “As is standard practice across the industry, we’re working with a small group of early access customers to test the model. We consider this model a step change and the most capable we’ve built to date.” This cautious approach highlights the necessity of collaboration and oversight in the development and deployment of powerful AI tools.
To address the inherent risks associated with such advanced technology, Anthropic has initiated Project Glasswing, a collaboration with major technology and cybersecurity firms. This initiative aims to use Mythos to identify and patch vulnerabilities in critical software before they can be exploited by cybercriminals. “Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone,” Anthropic said. This acknowledgment of the collective responsibility in tackling cybersecurity issues emphasizes the need for cooperation across sectors.
The implications extend beyond immediate cybersecurity threats. The meeting signals growing awareness among regulators and financial institutions that AI could disrupt the financial system, and as both grapple with these advances, robust governance frameworks and cross-sector collaboration will become increasingly critical. The proactive stance taken by U.S. officials reflects a shift toward prioritizing cybersecurity as AI becomes integral to many industries.
With AI capabilities evolving rapidly, ongoing dialogue between regulators and financial institutions will be essential to securing the financial system. The risks posed by models like Anthropic's Mythos may compel significant changes in how banks approach cybersecurity and technology adoption, and how well institutions adapt and innovate will be pivotal in maintaining trust and resilience in the face of emerging threats.
