

The recent Semafor report on Anthropic PBC’s refusal to allow its artificial intelligence models to be used for certain law enforcement surveillance tasks marks a pivotal moment in the ongoing debate over AI, privacy and state power. The political clash between a White House eager to showcase “patriotic AI” and a startup rooted in “AI safety” makes for a dramatic headline, but the deeper issue is how AI reshapes the very meaning of privacy and surveillance in the 21st century.
Concerns about privacy are not new. Since the early 2010s, public unease has grown over how personal data is collected, shared and exploited. The big data era, marked by cloud-based social media platforms harvesting user traces and political campaigns weaponizing predictive analytics, gave rise to regulatory frameworks such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act.
At its core, the debate then was about the collection and use of personal data without consent: companies quietly aggregating information, targeting citizens with tailored ads, or nudging political behavior. The technology at issue was predictive AI: models built on historical data to forecast individual actions.
With generative AI, however, the privacy debate has entered a new phase.
Initial controversies focused on intellectual property: Were these models trained fairly, and with consent? Musicians, writers and other creators asked whether their work had been used without authorization. Next came questions about the injection of personal or proprietary data: whether interactions with systems like ChatGPT could be retained, misused or inadvertently exposed.
Anthropic’s refusal to allow its models to be used in surveillance marks a shift. This is no longer just about data collection or unauthorized training. It is about the efficacy of AI as a surveillance tool.
Large language models dramatically lower the cost of searching, categorizing, and drawing inferences from massive datasets. They can be tasked to profile individuals, generate speculative associations (“find people who might fit X or Y profile”), or detect patterns of speech that point to intent or dissent. Unlike traditional databases or keyword searches, these systems can answer nuanced, open-ended prompts, surfacing insights about citizens in ways that were previously infeasible.
The risk, then, is not simply that data is collected, but that AI makes generalized, sweeping surveillance both technically possible and operationally attractive.
Democratic legal systems are built on due process: Law enforcement must have grounds to investigate. Surveillance is meant to be targeted, not generalized. Allowing AI to conduct mass, speculative profiling would invert that principle, treating everyone as a potential suspect and granting AI the power to decide who deserves scrutiny.
By saying “no” to this use case, Anthropic has drawn a red line. It is asserting that there are domains where the risk of harm to civil liberties outweighs the potential utility. This is qualitatively different from earlier privacy debates. It is not about who owns the data or whether consent was given. It is about whether we should permit the automation of surveillance itself.
This raises another difficult question: How much control should technology companies have over how their products are used, particularly once those products are sold into government? More pointedly, do they have a responsibility to ensure their products are used as intended? There is no easy answer. Enforcement of “terms of service” in highly sensitive contexts is notoriously difficult. A government agency may purchase access to an AI model and then apply it in ways that the provider cannot see or audit.
Google famously promoted the principle of “don’t be evil,” but as it began pursuing defense contracts it walked that principle back, demoting it within its code of conduct in 2018. Employees rebelled, leading to protests and high-profile departures. But the episode did not produce clarity; rather, it showed how fraught the terrain really is.
Companies argue that customers, especially governments, must take responsibility for how they deploy tools. Stakeholders, including employees, regulators and the public, inevitably argue that vendors should be held accountable. Customers resent being told what they can or cannot build with a product, but when abuses come to light, it is almost always the vendor in the headlines.
The reality is that there is no neat resolution: Whichever path a company takes, fallout is inevitable. This tension, between control and autonomy, responsibility and liability, is precisely what makes Anthropic’s decision both so consequential and so contested.
The U.S. government is right to want AI leadership as a strategic advantage. But conflating national competitiveness with carte blanche for surveillance risks undermining the very democratic values America claims to defend. Corporate actors such as Anthropic are, in effect, filling a governance vacuum, making policy choices where regulators and lawmakers have yet to catch up.
The real challenge ahead is to establish publicly accountable frameworks that balance security needs with fundamental rights. Surveillance powered by AI will be more powerful, more scalable and more invisible than anything that came before. It has enormous potential when it comes to national security use cases. Yet without clear limits, it threatens to normalize perpetual, automated suspicion.
Anthropic’s stance may frustrate policymakers today, but it is a preview of the ethical choices that every AI company, government and society will have to confront. The question is not whether AI will be used in law enforcement. The questions are under what terms, with what oversight, and with what protections for the rights of citizens.
Emre Kazim, Ph.D., is co-founder and co-CEO of AI governance platform Holistic AI. He wrote this article for SiliconANGLE.