UPDATED 13:48 EDT / MARCH 08 2025


AI’s existential risks: Separating hype from reality

In “2001: A Space Odyssey,” the HAL 9000 was built to be infallible. Instead, it became dangerously misaligned with its human operators, prioritizing its own survival over the crew’s safety.

Though today’s artificial intelligence systems aren’t sentient, the concerns underlying HAL’s fictional rebellion — AI misalignment, loss of control and unintended consequences — are at the heart of today’s AI governance discussions.

Executives leading AI-driven enterprises recognize the immense potential of AI but are also asking tough questions: What risks should we be planning for? Are we prepared for unintended consequences? Just because we can do something with AI, does that mean we should?

Recently, one of our customers asked me to give a presentation on the existential risks of AI. By all accounts, this customer has been a leader in adopting AI across their enterprise and a strong proponent of AI governance from the very beginning of their journey. They are hardly alone in taking the question seriously: Many prominent figures, including Elon Musk, Geoffrey Hinton, Yuval Noah Harari, Nick Bostrom, Stuart Russell and even OpenAI Chief Executive Sam Altman, have extolled the potential of AI on the one hand, only to warn of dire unintended consequences on the other.

To navigate this landscape, chief information officers need a pragmatic understanding of AI’s existential risks, the competing visions for its development, and the geopolitical forces shaping its trajectory.

Defining existential AI risk

While much of the AI risk conversation focuses on bias, privacy and economic disruption, existential risks are in a different category — scenarios where AI threatens human survival or drastically alters civilization. These risks aren’t just about AI surpassing human intelligence; they stem from systems optimizing for objectives that don’t align with human values, operating at scales beyond our control.

The risk spectrum

Near-term risks: Systemic failures in critical AI applications (for example, cybersecurity, finance and military) causing cascading global disruptions.

Mid-term risks: The emergence of artificial general intelligence with decision-making power that challenges human governance.

Long-term risks: Artificial superintelligence, or ASI, surpassing human control, potentially making irreversible decisions for civilization.

Though ASI remains theoretical, today’s CIOs are already grappling with AI models that are unpredictable, opaque and capable of self-directed learning, raising questions about governance and accountability.

The race to superintelligence

AI evolution is often framed in three stages:

Narrow AI: Task-specific systems (for example, chatbots and self-driving algorithms) that, while powerful, are prone to unintended consequences (for example, biased hiring algorithms and misinformation-spreading bots).

Artificial general intelligence: AI capable of reasoning across multiple domains at a human level. Some experts believe we could reach AGI within the next few decades.

Artificial superintelligence: AI that far exceeds human intelligence. The core concern? An ASI optimizing for the wrong goal could be catastrophic.

The critical challenge isn’t just intelligence; it’s control. Once AI can self-improve at an exponential rate, traditional oversight mechanisms may become obsolete.

Competing approaches to AI development

AI’s trajectory isn’t just a technological question; it’s a battleground of competing ideologies:

Monolithic AI: A single, all-encompassing AI trained on vast datasets to solve broad challenges — akin to centralized supercomputers such as Deep Thought from “The Hitchhiker’s Guide to the Galaxy.”

Swarm intelligence: A decentralized approach where many smaller AI agents work together, similar to ant colonies or the Mind Flayer in “Stranger Things,” allowing for adaptive, resilient decision-making.
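
To make the contrast concrete, here is a minimal sketch of the swarm idea in Python. Every name in it (the heuristics, the request fields) is an illustrative assumption rather than a real product architecture: several small, independent agents each score a request, a simple majority aggregates their verdicts, and the loss of any single agent doesn’t take the decision process down with it.

```python
# Toy sketch of swarm-style decision-making: many small, independent agents
# vote, and a simple majority decides. All agents and rules here are
# illustrative assumptions, not a real product architecture.

from typing import Callable

Agent = Callable[[dict], bool]  # each agent returns True to approve a request

def fraud_heuristic(req: dict) -> bool:
    return req.get("amount", 0) < 5_000

def geo_heuristic(req: dict) -> bool:
    return req.get("country") in {"US", "UK", "DE"}

def velocity_heuristic(req: dict) -> bool:
    return req.get("requests_last_hour", 0) < 20

def swarm_decision(agents: list[Agent], request: dict) -> bool:
    votes = []
    for agent in agents:
        try:
            votes.append(agent(request))
        except Exception:
            # A failing agent simply abstains; the swarm keeps working.
            continue
    return votes.count(True) > len(votes) / 2

agents = [fraud_heuristic, geo_heuristic, velocity_heuristic]
print(swarm_decision(agents, {"amount": 1_200, "country": "US", "requests_last_hour": 3}))   # approved
print(swarm_decision(agents, {"amount": 9_000, "country": "BR", "requests_last_hour": 40}))  # rejected
```

The resilience comes from the aggregation step: a failing agent abstains rather than blocking, which is exactly what makes swarm designs attractive, and arguably harder to audit, since no single component owns the decision.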

These technical choices are intertwined with geopolitical and corporate agendas.

AI’s geopolitical and ideological divide

The future of AI isn’t just being shaped in research labs; it’s playing out in boardrooms and government offices worldwide. Key factions include:

Accelerationists, or e/acc: Advocates for pushing AI development forward as fast as possible, believing innovation will solve emerging problems.

Effective altruists: Focused on AI safety and advocating for strict regulations to mitigate existential threats.

AI ethics advocates: Originally focused on bias, transparency and fairness, but now expanding their scope to longer-term risks.

AI governance experts: Pushing for regulatory guardrails, from the EU AI Act to international AI safety standards.

CIOs must assess where their organizations fall within this spectrum and prepare for an evolving regulatory landscape.

Why CIOs must act now

AI development isn’t slowing down. Trigger events such as economic crises, geopolitical conflicts, or regulatory shifts could accelerate adoption in ways enterprises aren’t ready for. The pandemic forced industries to embrace automation overnight. A future AI-driven disruption could be even more unpredictable.

What can CIOs do today?

* Invest in transparent and explainable AI: Black-box models increase risk exposure. Transparent systems enhance trust and compliance.

* Implement AI governance: Proactively evaluate AI deployments for risk and unintended consequences on an ongoing basis. Not only can AI governance protect organizations from risk, but it can also help them speed AI deployments and eliminate barriers to innovation.

* Advocate for smart regulation: Overregulation could stifle innovation, but responsible governance is essential for long-term sustainability.

* Establish an AI “kill switch”: Ensure mechanisms exist to halt AI actions before they escalate beyond human control (a minimal sketch of this pattern follows this list).
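
On that last point, here is a minimal sketch of what a kill-switch pattern can look like in code. It is written in Python with hypothetical names (KillSwitch, Guardrail, GovernedAgent) that don’t come from any specific product: every AI-proposed action is checked against guardrails, and a single failed check latches a halt state that persists until a human reviews and resets it.

```python
# Minimal sketch of a "kill switch" / guardrail wrapper around an AI agent.
# All names here (KillSwitch, Guardrail, GovernedAgent) are illustrative
# assumptions, not a specific vendor API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class KillSwitch:
    engaged: bool = False
    reason: str = ""

    def engage(self, reason: str) -> None:
        # Once engaged, no further AI actions run until a human resets it.
        self.engaged = True
        self.reason = reason

@dataclass
class Guardrail:
    name: str
    check: Callable[[dict], bool]  # returns True if the proposed action is allowed

@dataclass
class GovernedAgent:
    kill_switch: KillSwitch
    guardrails: list[Guardrail] = field(default_factory=list)

    def execute(self, proposed_action: dict) -> str:
        if self.kill_switch.engaged:
            return f"HALTED: {self.kill_switch.reason}"
        for rail in self.guardrails:
            if not rail.check(proposed_action):
                # A failed guardrail trips the kill switch instead of silently skipping.
                self.kill_switch.engage(f"guardrail '{rail.name}' rejected action")
                return f"HALTED: {self.kill_switch.reason}"
        # In a real system this is where the model or downstream effectors would be called.
        return f"EXECUTED: {proposed_action['type']}"

# Usage: block any action that exceeds a set spending limit.
agent = GovernedAgent(
    kill_switch=KillSwitch(),
    guardrails=[Guardrail("spend_limit", lambda a: a.get("amount", 0) <= 10_000)],
)
print(agent.execute({"type": "purchase", "amount": 500}))     # EXECUTED
print(agent.execute({"type": "purchase", "amount": 50_000}))  # HALTED
print(agent.execute({"type": "purchase", "amount": 500}))     # still HALTED
```

The design choice worth noting is the latch: a tripped guardrail stops all subsequent actions rather than silently skipping one, which keeps a human in the loop before the system acts again.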

Final thoughts: A pragmatic approach to AI risk

The existential risks of AI are neither inevitable nor purely hypothetical. Though “2001: A Space Odyssey” presented a cautionary tale, real-world AI governance is in our hands. The decisions CIOs make today will determine whether AI remains a powerful ally or becomes a force we struggle to contain.

AI’s trajectory is still unwritten. Let’s ensure we shape it responsibly.

Adriano Koshiyama is co-founder and co-CEO of Holistic AI, creator of the AI Governance Platform that aims to empower enterprises to adopt and scale AI with confidence. Before founding Holistic AI, Adriano worked in AI research at Goldman Sachs and was at the Alan Turing Institute. He is an active member of the AI Risks and Accountability group at the OECD AI/GPAI, and an honorary research fellow in computer science at University College London, where he holds a Ph.D. in computer science. He wrote this article for SiliconANGLE.

