UPDATED 09:30 EDT / JULY 24 2024

AI

AI trust startup Vijil raises $6M to prevent AI agents saying the wrong things

Artificial intelligence safety startup Vijil AI Inc. has closed on $6 million in seed funding as it launches its first cloud-based tools that promise to help companies build and deploy more reliable generative AI agents.

Today’s round was co-led by Mayfield LLC’s AIStart seed fund and Google LLC’s AI-focused seed fund, Gradient Ventures.

Vijil is focused on helping companies to deploy trustworthy AI agents such as chatbots and virtual assistants, with a focus on ensuring they’re in compliance with governance regulations. The startup explains that while AI agents have become one of the most successful use cases for generative AI, there are still a lot of issues with the technology that can cause serious headaches for the companies that deploy it.

For instance, there have been numerous incidents where AI agents have gone and recommended a competitor’s product or service, confabulated airline ticket refund policies and concocted legal cases that have no basis in reality.

The problems stem from the unreliability of the large language models that provide the foundation for AI agents. When put under atypical conditions, many LLMs start “hallucinating” and generating unwanted responses that can be seriously damaging for some companies. Some of the well known risks include generative AI models making egregious mistakes, asserting falsehoods, divulging confidential or personal information, toxic responses, creating malware and generating responses that are unethical, biased, unfair and even dangerous.

Vijil says enterprises cannot build and deploy trustworthy AI agents because they have no way to measure “trust” with any degree of accuracy. To do this, companies typically rely on external red-team consultants, use AI benchmarks or simply surrender to “vibe checks,” but such methods are clearly inadequate, especially when trying to deploy an AI agent at large scale.

“We cannot trust autonomous agents today, no matter how intelligent they may seem, the way we trust the people we employ,” said Vijil co-founder and Chief Executive Vin Sharma. “As humans, we have had 4 million years of genetic evolution and 400,000 years of cultural evolution to understand interpersonal trust. And we have metrics and mechanisms to measure and maintain that trust. But AI agents must earn our trust starting with a deficit.”

Vijil says its cloud-based platform, available in private preview via the Google Cloud Marketplace from today, provides an alternative to measuring AI agent’s trustworthiness. It does this by evaluating the behavior of AI agents through a series of automated tests that can be tailored to specific business contexts.

One advantage of Vijil’s platform is that it requires only a few small samples of data from each customer about its model’s usage, and from there it can create a comprehensive test suite to measure how it performs in almost any kind of scenario within that context. It then scores the model for its performance, reliability, privacy, security and safety.

Once the tests are complete, Vijli then helps customers to mitigate any of the risks found in its evaluations using a “defense-in-depth” strategy that puts several layers of safeguards in place. The first is a perimeter defense mechanism that’s able to detect malicious prompts and unsafe responses, and it can adaptively learn so as to improve the AI model’s compliance and safety.

Vijil says its platform is applicable to various generative AI systems, including open-source large language models, closed AI application programming interfaces, retrieval-augmented generation applications and AI agents.

Google Cloud Director of Product Management Manvinder Singh said his company is pleased to collaborate with Vijil to help its customers adapt its AI models for trust. “By adapting the Google Responsible Generative AI Toolkit to the needs of enterprises in various industries, Vijil provides critical capabilities for AI developers to preserve the privacy, security and safety of custom models downstream with the same rigor that went into their original release,” he said.

Image: SiliconANGLE/Microsoft Designer

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU