

Cybersecurity company Snyk Ltd. today announced the launch of the Snyk AI Trust Platform, an artificial intelligence-native agentic platform built to secure and govern software development in the AI era.
The company says the new AI Trust Platform has been designed to empower organizations to accelerate AI-driven innovation, mitigate business risk and secure agentic and generative AI.
At a time when more enterprise software engineers are embracing AI code assistants, companies must contend with the fact that those assistants can be inaccurate: according to a Georgetown University study, almost half of all AI-generated code is insecure. Compounding the problem, threat actors are now using AI to mount cyberattacks, including prompt injection and data poisoning, that deliberately corrupt code to infect product lines.
The AI Trust Platform has been designed to address these challenges. Snyk defines AI Trust as the ability to develop fast and stay secure within a fully AI-enabled, agentic reality by reducing human effort while improving the efficiency of security policy and governance. That understanding underpins the new Snyk AI Trust Platform.
The platform introduces several innovations, including Snyk Assist, an AI-powered chat interface offering contextual guidance, next-step recommendations and security intelligence. Another feature, Snyk Agent, extends these capabilities by automating fixes and security actions throughout the development lifecycle, leveraging Snyk's testing engines.
Other parts of the offering include Snyk Guard, which provides real-time governance and adaptive policy enforcement, crucial for managing evolving AI risks. Complementing these capabilities is the Snyk AI Readiness Framework, which helps organizations assess and mature their secure AI development strategies over time.
“I’m confident that the Snyk AI Trust Platform will be a game-changer for global organizations looking to further invest in AI-driven development,” said Chief Technology Officer Danny Allan. “Autopilot didn’t replace the need for actual pilots, and in that same vein, we envision a world where AI augments developers, but never fully replaces them.”
Also launching from Snyk today are two new platform-supporting curated AI Trust environments. Snyk Labs is an innovation hub for researching, experimenting with and incubating the future of AI security, while Snyk Studio allows technology partners to collaborate with Snyk experts to build secure AI-native applications for mutual customers.
Snyk Labs is positioned as a go-to resource for cutting-edge technical demos, thought leadership and early insights into emerging threats and standards rapidly shaping the generative AI security landscape.
Initial research is focused on AI Security Posture Management. It includes an AI Bill of Materials analysis that provides visibility into where and how models are embedded in software. Snyk is also building the industry's first generative AI model risk registry, which measures novel risks such as model jailbreaking.
Snyk Studio is focused in its initial phase on partnering with technology companies that already offer AI solutions to help mutual customers deploy AI securely. According to Snyk, Snyk Studio lets developers and technology providers collaborate with its security experts to embed critical security context and controls into their AI-generated code and AI-powered workflows.
Core to Snyk Studio is its newly developed Model Context Protocol server. MCP provides a standardized, efficient way for AI models within technology partners' solutions to understand and incorporate rich security context from Snyk, allowing for more streamlined implementations.
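MCP servers follow a request-response pattern in which an AI client invokes named tools and receives structured context back. As a rough illustration of that pattern only (the tool name, request fields and canned findings below are hypothetical placeholders, not Snyk's actual API), a minimal server sketch might look like this:

```python
import json

# Registry of named tools an AI client can invoke. The tool and the
# security findings it returns are hypothetical, for illustration only.
TOOLS = {}

def tool(name):
    """Register a handler function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_security_context")
def get_security_context(params):
    # A real server would query a security analysis engine; this stub
    # returns canned findings for the requested source file.
    return {
        "file": params["file"],
        "findings": [{"rule": "hardcoded-secret", "severity": "high"}],
    }

def handle(request_json):
    """Dispatch a JSON-RPC-style request to the matching registered tool."""
    req = json.loads(request_json)
    result = TOOLS[req["method"]](req.get("params", {}))
    return json.dumps({"id": req["id"], "result": result})

# An AI assistant asking for security context on a file it is editing:
response = handle(json.dumps({
    "id": 1,
    "method": "get_security_context",
    "params": {"file": "app.py"},
}))
```

The value of standardizing this exchange is that any compliant AI tool can request the same security context without a bespoke integration per partner.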