Anthropic investigates report of rogue access to hack-enabling Mythos AI
Anthropic, a US-based artificial intelligence developer, is investigating reports that unauthorized users gained access to its Mythos AI model, which is designed to identify cybersecurity vulnerabilities. The company confirmed the breach occurred through a third-party vendor environment and involved a small group of individuals who accessed the model via a private online forum. Mythos has not been publicly released due to concerns about its potential misuse in enabling cyber-attacks. At the time of the incident, Anthropic was distributing the model to a limited number of corporate partners, including Apple and Goldman Sachs, for testing.

According to Bloomberg, the individuals who accessed Mythos did so through credentials linked to a third-party contractor working with Anthropic. While the group reportedly did not use the model to conduct cyber-attacks and appeared more interested in experimenting with the technology, the incident raises significant security concerns.

Mythos is notable for its advanced capabilities in automating the detection of IT system weaknesses and carrying out complex cyber-attack simulations, tasks that typically require days of human effort. This has alarmed cybersecurity experts and government officials, who warn that such technology could be exploited by malicious actors if it falls into the wrong hands.

The UK’s AI minister, Kanishka Narayan, emphasized the risks Mythos poses to businesses, highlighting the model’s ability to identify flaws that hackers could exploit. The UK’s AI Security Institute (AISI), which vetted Mythos, described it as a significant escalation in AI-driven cyber threats. Mythos was the first AI to successfully complete a 32-step simulated cyber-attack created by AISI, demonstrating its capacity to perform multi-stage attacks autonomously.

This incident underscores the challenges of controlling access to powerful AI tools and the urgent need for robust safeguards to prevent misuse in the cybersecurity domain.
The breach has drawn attention from regulators and industry leaders concerned about the balance between innovation and security in AI development. As AI models like Mythos become more capable, the potential for their exploitation in cybercrime grows, prompting calls for stricter oversight and collaboration between AI developers, governments, and cybersecurity experts to mitigate emerging threats.
Original story by The Guardian Tech UK