Anthropic has sparked fears after revealing that it has developed an AI bot deemed too dangerous to release to the public.
The AI giant released a chilling statement warning that its new model, dubbed Claude Mythos, could be capable of unleashing crippling cyber-attacks in the wrong hands.
In its analysis, the company admitted that its creation could easily hack into hospitals, electrical grids, power plants, and other critical infrastructure.
During testing, Anthropic says that Mythos ‘found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.’
Some of these security weaknesses had gone unnoticed for decades, evading both human security researchers and millions of automated reviews.
These included attacks that allowed Mythos to crash computers just by connecting to them, seize control of machines, and hide its presence from defenders.