UPDATED 22:49 EST / JANUARY 26 2026

AI

Anthropic CEO warns humanity may not be ready for advanced AI

Anthropic PBC Chief Executive Dario Amodei today released an essay on the many risks associated with developing powerful artificial intelligence systems and how we might counteract them.

“I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species,” Amodei wrote in the introduction to his 38-page essay. “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”

He’s no fan of AI “doomerism,” which he believes hit peak levels between 2023 and 2024 and was often steeped in exaggerated language “reminiscent of religion or science fiction.” But the more recent shift toward a narrative of AI opportunity, he says, ignores many of the threats AI will surely pose in the coming years.

“This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023,” he wrote. He believes we must be surgical in addressing the risks, a task that will fall to companies, third-party actors and governments. The latter, he said, must be “judicious,” ensuring regulations don’t hamstring economic opportunity.

He explains what he means by “powerful AI”: machines smarter than Nobel Prize winners that can solve complex mathematical problems, write the great American novel, access online information and act on it, give directions, offer advice, create videos and direct experiments. Such systems, he writes, will perform these tasks with “a skill exceeding that of the most capable humans in the world.”

He’s not sure when we will achieve this technological feat – maybe we’re one or two years away, maybe longer, he writes – but he calls it potentially “the single most serious national security threat we’ve faced in a century, possibly ever.”

Much of the essay is his risk assessment, starting with AI “autonomy” – AI that is reckless, potentially deceptive and unstable. To counteract this, he believes advanced AI must be developed with a “constitution.” He points to Anthropic’s models, which he says are built with a set of “values and principles that the model reads and keeps in mind when completing every training task.”

Problems, he says, must be diagnosed, models must be constantly monitored, companies must share their findings publicly and the risks must be addressed in legislation: “Anthropic’s view has been that the right place to start is with transparency legislation, which essentially tries to require that every frontier AI company engage in the transparency practices.”

Once we can be sure “AI geniuses” will not “go rogue and overpower humanity,” he wrote, we need to focus on the misuse of AI by humans. A world full of individuals with a “superintelligent genius in their pocket” could be problematic, he says. He’s particularly fearful of the ability to develop biological weapons.

“We believe that models are likely now approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon,” he wrote.

His solutions are much the same as they are for AI autonomy risks: a constitution, transparency, diagnosis, monitoring and legislation, but with a focus on international agreements.

Rogue states, he accepts, may be more difficult to manage. He imagines a “swarm of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI.” With this may come Orwellian AI surveillance and propaganda, and genius AI that develops geopolitical strategy for undemocratic countries.

As a defense, he believes not helping such authoritarian nations develop AI is a good starting place. “China is several years behind the U.S. in their ability to produce frontier chips in quantity, and the critical period for building the country of geniuses in a datacenter is very likely to be within those next several years,” he writes, adding that “there is no reason to give a giant boost to their AI industry during this critical period.”

Within democracies, he says, there is also potential for abuse of powerful AI. He supports “civil liberties-focused legislation” to counter such abuse, as well as a very cautious approach to developing autonomous weapons or surveillance technology. “The only constant is that we must seek accountability, norms, and guardrails for everyone, even as we empower ‘good’ actors to keep ‘bad’ actors in check,” he wrote.

Amodei calls economic growth a “double-edged sword”: economies will expand, but alongside “labor market displacement, and concentration of economic power.” He predicts that within one to five years, half of all entry-level white-collar jobs will be gone.

The solutions: AI companies should analyze how their models disrupt industries, while employers may have to “reassign employees… to stave off the need for layoffs.” Wealthy individuals, he believes, should do their part through meaningful private philanthropy, while “progressive taxation” of the winners may help counterbalance extreme levels of inequality.

“In the end, AI will be able to do everything, and we need to grapple with that,” he wrote at the close of that section.

Even once we’ve built these defenses against AI disruption, he believes there will be “unknown unknowns” as humanity progresses. We may greatly increase the human lifespan, he says, or large numbers of people may become afflicted with “AI psychosis.” AI might invent new religions, and with them, all the problems associated with religion – scenarios he compares to the dystopian TV show “Black Mirror.”

He also asks: Will humans even feel like they have a purpose in a world where AI does most of the work, a world where humans no longer flourish in the Aristotelian sense? For this and the other unknown unknowns, he offers no defense. These are rivers we will have to cross when the time comes.

But as companies push toward potentially trillions of dollars in profits, as governments seek to bolster geopolitical power and militaries find ever newer ways to exterminate their foes, Amodei describes a “trap”: “AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.”

He’s essentially optimistic we will overcome this, but he would say that: he’s one of the people who stands to reap the rewards as his company builds out its massive user base. The solutions he proffers are commendable, but leaders of AI companies may not be the best ones to defer to.

