

Artificial intelligence safety startup Virtue AI Inc. today announced that it has raised $30 million in funding to enhance its technology.
The company raised the capital over two rounds led by Lightspeed Venture Partners and Walden Catalyst Ventures. The venture capital firms were joined by more than a half-dozen other backers, including Lip-Bu Tan, Intel Corp.’s newly appointed chief executive.
A program without AI features usually generates the same response to a user request regardless of how many times the request is repeated. AI models, in contrast, might generate a different response each time. That unpredictability makes it difficult to forecast when an AI application may generate harmful output.
Virtue AI offers a trio of software products that can help enterprises ensure their AI applications are safe. The company counts Uber Technologies Inc. and Glean Technologies Inc., a well-funded enterprise AI startup, among its early customers.
Virtue AI’s first product is called VirtueRed. It can automatically perform red-teaming, or the task of testing an AI application for safety issues. The software includes more than 100 red-teaming algorithms that Virtue AI says cover more than 300 risk categories.
Some of VirtueRed’s algorithms are designed to measure AI applications’ susceptibility to prompt injection attacks. Those are malicious prompts that attempt to trick an AI into generating harmful output. According to Virtue AI, VirtueRed also detects cases when an application’s guardrails are overly strict and block routine requests from users.
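Virtue AI hasn't disclosed how its detection algorithms work, but the simplest form of prompt-injection screening is pattern matching against phrases that commonly appear in attack prompts. The sketch below is a deliberately naive heuristic for illustration only; the patterns are assumptions, and production red-teaming tools rely on far more sophisticated techniques:

```python
import re

# Hypothetical phrase patterns often seen in injection attempts -- illustrative
# only, and trivially evaded by a determined attacker.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the|your) (system|prior) prompt",
    r"you are now in developer mode",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag a prompt if it matches any known injection phrase."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A red-teaming suite would run batteries of such probes against an application and record which ones slip past its defenses, while also confirming that benign prompts (like a weather question) are not blocked.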
In addition to scanning for risks related to user input, the software checks for harmful AI output. It can detect cybersecurity flaws in AI-generated code and prompt responses that leak proprietary data. VirtueRed likewise identifies situations where a model’s output breaches regulations such as the European Union’s AI Act.
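One concrete instance of output scanning is checking AI-generated code for embedded secrets. The sketch below uses two illustrative regex rules (the pattern set is my assumption; real scanners combine many rules with entropy analysis and context checks):

```python
import re

# Two illustrative detection rules; a production scanner would have many more.
SECRET_PATTERNS = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the names of secret patterns found in a generated code snippet."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(code)]
```

Running such checks over every generated snippet lets a tool surface leaks before the code is ever committed or executed.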
The software compiles its findings into an automatically generated report. There’s a summary that highlights the number and type of issues found by VirtueRed, as well as a set of recommendations on how to fix them.
Virtue AI’s second product is called VirtueGuard. Whereas VirtueRed is designed to facilitate safety evaluations during the AI development phase, VirtueGuard is geared toward protecting AI models in production. It’s a kind of firewall that can automatically block harmful AI output.
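The "firewall" pattern the article describes can be reduced to a wrapper that inspects a model's output before it reaches the user and substitutes a safe fallback when a policy is violated. The sketch below is a minimal illustration under assumed blocking rules, not a description of how VirtueGuard itself works:

```python
from typing import Callable

# Assumed policy markers for illustration; a real guardrail would use
# trained classifiers, not substring checks.
BLOCKED_MARKERS = ["password:", "ssn:", "api_key="]

def guarded(model: Callable[[str], str],
            fallback: str = "[blocked by guardrail]") -> Callable[[str], str]:
    """Wrap a model so that policy-violating output never reaches the user."""
    def wrapper(prompt: str) -> str:
        output = model(prompt)
        if any(marker in output.lower() for marker in BLOCKED_MARKERS):
            return fallback  # intercept the harmful response
        return output  # pass benign responses through unchanged
    return wrapper

# Usage with a stub model that leaks a credential:
leaky_model = lambda prompt: "Here is the admin password: hunter2"
safe_model = guarded(leaky_model)
```

Because the wrapper sits in the request path, its latency matters, which is why the speed comparison in the next paragraph is a selling point.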
VirtueGuard works with not only text models but also image and video generators. According to Virtue AI, the component of the product that focuses on protecting text models is more than 30 times faster than Llama Guard 3, a popular open-source alternative. That means users of an AI model integrated with VirtueGuard receive responses to their prompts faster.
Rounding out Virtue AI’s product portfolio is VirtueAgent. According to the company, it provides safety-optimized AI agents that can perform tasks such as identifying the databases to which an employee account has access.
“We saw companies struggling with the same challenges repeatedly — subpar evaluation methods, inefficient guardrails and manual processes that created bottlenecks in AI deployment pipelines,” said co-founder and CEO Bo Li.
Axios reported that Virtue AI will use its newly raised funding to add 30 employees by year’s end. The new hires will join the company’s business development and engineering teams. Over the next 12 months, Virtue AI plans to roll out new features that will allow its tools to “protect most AI product layers.”