

Artificial intelligence security startup Cyata Security Ltd. is looking to rein in out-of-control AI agents after closing on an $8.5 million seed funding round, it announced today.
The round was led by TLV Partners and saw backing from Ron Serber and Yossi Carmil – two former chief executives of the digital forensics company Cellebrite DI Ltd., which once famously hacked Apple Inc.’s iOS operating system.
The startup says it’s trying to address what has rapidly become a key security gap in enterprise computing environments today, with AI agents gaining virtually unrestricted access to critical systems and applications, without any oversight, governance or identity management.
AI agents have emerged as the breakout AI technology of the year, thanks to their promised ability to automate numerous business processes and tasks with minimal human supervision, dramatically increasing productivity. They can be thought of as “digital employees,” and they can perform many tasks much faster, and more affordably, than humans, which explains why enterprises are scrambling to implement them.
But there’s a big danger to unleashing AI agents in any enterprise environment. Though they do accelerate automation in many aspects of business, they can only do so by accessing sensitive databases, writing code and triggering automated actions. They do this with little to no oversight, operating outside traditional identity frameworks, because the speed at which they work means it’s impossible for humans to keep watch over them.
According to Cyata co-founder and CEO Shahar Tal (pictured, center), this is incredibly risky because AI agents have the ability to do some very damaging things, such as rewriting essential application code, sharing secrets, leaking confidential data and even moving money between financial accounts. As a rule, they operate with standing privileges, no secret rotation and no audit trails.
The risk is exacerbated because, unlike human employees who have human resources records and attend security training workshops, or service accounts that follow predictable patterns, AI agents are dynamic. What this means is that they can spawn instantly, fan out across multiple workflows and carry out autonomous actions without anyone watching. But they’re also susceptible to “hallucinations,” which can lead them to make erroneous decisions, and there’s a risk that they could be hacked and manipulated by malicious actors.
“AI agents represent the biggest leap in enterprise technology since the cloud — a self-scaling, sleepless workforce that codes, analyzes and executes in seconds,” Tal said.
To protect against these risks, Cyata has developed an “agentic control plane” that enables comprehensive visibility into the agentic systems operating within any cloud environment, including chatbots, coding bots and task-driven agents.
Cyata’s core offering is an automated AI agent discovery tool that scans the customer’s cloud and software-as-a-service environments and their identity management systems, looking for behavioral patterns in tool usage, application programming interface calls and other activity that suggest an AI agent is behind them. Once it spots an unauthorized AI agent, it locks it down and enforces least privilege to prevent it from causing any damage.
In addition, Cyata offers forensic observability tools for authorized AI agents, allowing a detailed audit trail to be created of their activity. It can even capture the intent of AI agents by forcing them to justify their reasoning, in real time. Once it spots AI agents and understands what they’re doing, it can then implement appropriate granular access controls and permissions, restricting them only to the systems and databases they need to access.
“We focus on the actors, not the LLMs,” Tal explained, referring to the large language models that power AI agents. “Agents, not models, are the ones making the decisions and triggering risk. We give security teams identity-grade controls specifically for AI agents, so they can unlock their power without losing control.”
Tal says this is necessary because existing identity access management and privileged access management tools simply don’t work with AI agents. They’re designed for human users and long-lived service accounts, whereas AI agents tend to spin up in seconds, share credentials with other agents, and then disappear before they’re even noticed.
Robert Burns, chief security officer at Thales Cybersecurity SA, said AI agents introduce a layer of complexity that traditional identity tools lack the scope to deal with. “[AI agents’] ability to act autonomously, scale rapidly, and interact across systems challenges existing models in new ways,” Burns explained. “Cyata’s focused work in this space highlights risks that many organizations haven’t yet fully surfaced.”
TLV Partners’ Brian Sack said he’s expecting there will be massive demand for a platform such as Cyata’s in the coming years, because he believes that agentic AI adoption is likely to increase tenfold within the next year or two. “Our generalist approach often leads us to invest in cybersecurity companies based on trends we identify outside the traditional security vertical, and the rise of AI agents is exactly that,” he said. “Cyata’s team… is uniquely positioned to define and lead this critical new category before organizations face potentially catastrophic breaches.”