UPDATED 16:59 EDT / MAY 03 2026

The quiet erosion of agency in the age of AI

Enterprises are moving fast to embed artificial intelligence into everything from customer interactions to decision-making. The benefits are undeniable: speed, efficiency and scale.

The danger isn’t sudden or dramatic. It’s quieter: gradual, invisible and easy to justify along the way. It’s the slow loss of agency inside the company.

A company loses agency with AI when humans stop setting direction, making judgments and owning outcomes, and instead become passive supervisors of systems that operate with increasing autonomy. No one announces this shift; it happens, one decision at a time.

As media theorist Marshall McLuhan is often credited with observing, “We shape our tools, and thereafter our tools shape us.” That insight feels newly urgent.

Why challenging the system matters

I came from an academic background where questioning assumptions was second nature — where consensus was something to probe, not celebrate. That mindset has shaped how I hire and how I lead. I look for people who will push back, challenge my reasoning, even when it’s uncomfortable. Strong organizations aren’t built on alignment alone. They’re built on constructive disagreement.

But AI introduces a new dynamic.

What happens when the most persuasive voice in the room is a system — one that speaks with confidence, fluency and apparent completeness? And what if no one feels comfortable challenging it?

AI is becoming the enterprise environment

AI is no longer just a tool; it’s becoming the environment in which work occurs. Enterprises are reorganizing knowledge, workflows and communication to fit how AI systems operate.

Many are training models on internal data such as brand guidelines, operating procedures and historical decisions. The result is a system that, in some ways, knows the organization more intimately than any one employee.

That’s powerful. It boosts productivity. It’s also intimidating.

When a system appears to hold full context, questioning it can feel like second-guessing the organization itself.

From cognitive offloading to subtle dependence

This doesn’t begin with bad intentions. AI accelerates work, improves accuracy and reduces costs. Naturally, we rely on it. Humans have always offloaded thinking when it becomes easier to do so — trusting GPS instead of maps, using spreadsheets instead of calculators. These shifts are normal, even rational.

But AI is different.

It doesn’t just compute; it generates reasoning, language and recommendations, and even makes decisions. No one consciously gives up judgment; it simply becomes easier not to exercise it. After all, judgment demands reflection and diligence; AI helps us bypass that cognitive load in seconds.

A friend once told me how she used an AI assistant to manage her new garden. It chose the plants, scheduled watering and even reminded her to prune. The results were consistent; the garden thrived. But when a rare pest appeared, she didn’t know what to do, not because she lacked time or intelligence, but because she had stopped noticing how things worked. She hadn’t sensed the rhythm of seasons, the feel of dry soil or why some leaves curled before rain.

The garden was healthy, but her sense of gardening had quietly vanished. She had become a caretaker of AI’s plan, not the garden itself.

Paradoxically, in trying to use AI as a tool, she had quietly become one, simply executing AI’s instructions. Like any muscle, judgment weakens when it isn’t used.

This isn’t new, but AI accelerates it

Organizations have always tended to drift toward conformity. Employees align around narratives, reinforce shared assumptions and gradually lose external perspective.

AI doesn’t invent this dynamic; it industrializes it.

How agency erodes

Agency fades operationally, in ordinary ways:

  • Decision-making becomes defaulting. Leaders still own decisions but increasingly rely on system recommendations without interrogation.
  • Disagreement fades. When outputs are coherent and confident, pushing back requires effort and carries risk.
  • The organization adapts. Workflows and thinking begin to mirror how the system operates.
  • Judgment muscles weaken. People grow accustomed to accepting polished answers without questioning them.

Humans are drawn to confidence, fluency and completeness. AI delivers all three, at scale.

Strengthening agency through design

The goal isn’t to resist AI; it’s to ensure efficiency doesn’t quietly replace judgment. That’s the leadership challenge of the AI era. And it requires acknowledging something uncomfortable: human nature is part of the risk surface.

People will default. They’ll trust what looks authoritative. They’ll avoid friction. So, how do you design systems that account for that?

Making human intent durable

This is where governance becomes essential, not as compliance, but as structural protection for human agency.

In my work, we’re exploring this through what our company calls Guardian Agents. They don’t just monitor AI systems. They encode human intent — policy, control and expectation — and enforce it continuously. They make organizational standards durable, even when humans aren’t directly involved.

Humans drift. Systems scale. Intent must therefore be defined, challenged, updated when appropriate, and enforced.

Agency doesn’t vanish. It’s actively designed, protected and maintained.

Designing for human nature

If AI amplifies human tendencies, leaders must build for those tendencies:

  • Design to empower disagreement and debate. Make challenge a feature of workflows, not a flaw.
  • Define decision rights. Clarify where human judgment overrides system logic.
  • Protect independent thinking. Reward questioning, debate and original ideas.
  • Train for awareness. Teach teams to spot when reliance becomes dependence.
  • Govern for resilience. Build oversight that reinforces human involvement prior to AI automation.

These aren’t safeguards against AI. They’re safeguards against human nature.

A leadership imperative

AI will shape how organizations think and operate, but whether it replaces judgment or strengthens it depends on leadership. The real competitive advantage will lie with companies that harness AI while preserving the ability to challenge it.

Because agency isn’t lost at one point in time; it’s surrendered gradually, every time we accept what seems easiest instead of asking why. In the end, we’ll all be measured by how much human judgment we preserve.

Emre Kazim is co-founder and co-chief executive officer of Holistic AI. He wrote this article for SiliconANGLE.

