WhyLabs enhances real-time generative AI monitoring to prevent inaccurate and toxic outputs
Artificial intelligence observability startup WhyLabs Inc. said today it’s debuting a new kind of AI operations platform that provides companies with real-time control over their AI-powered applications.
It’s called the AI Control Center, and it’s designed to reassure users about the performance of their AI applications amid rising concerns over the security and reliability of the underlying large language models that power generative AI workloads.
The company explained that businesses today face a host of challenges when it comes to using LLMs. Among other things, they have to make sure their LLMs aren’t susceptible to prompt-injection attacks and do not generate toxic or inaccurate responses that can erode trust.
The WhyLabs AI Control Center can help address these challenges, assessing the performance of LLMs in real time. It covers the entire gamut of factors that can affect LLMs, including user prompts, the retrieval augmented generation context that informs their responses, and the responses themselves.
Backed by $14 million in funding from investors including Madrona Venture Group, Bezos Expeditions and AI Fund, Seattle-based WhyLabs offers tools for monitoring AI models and their training datasets for technical issues. The launch of WhyLabs AI Control Center follows last year's release of LangKit, an open-source toolkit that helps companies monitor LLMs for safety issues and risks such as toxic outputs and hallucinations, which occur when AI models make up the information they include in a response.
According to WhyLabs co-founder and Chief Executive Alessya Visnjic, enterprises desperately need a more reliable, real-time observability tool for generative AI. “Passive observability tools alone are not sufficient for this leap, because you cannot afford a five-minute delay in learning that an application has been jailbroken,” she said. “Our new security capabilities equip AI teams with safeguards that prevent unsafe interactions in under 300 milliseconds, with 96% accuracy.”
Last month, Visnjic appeared on theCUBE, SiliconANGLE Media's livestreaming studio, to discuss the differences between AI observability and traditional application performance monitoring. She explained that the focus with AI is much more about identifying which risk factors to measure, and how to measure them consistently.
“LLMs and gen AI applications open up a whole new set of security challenges that we haven’t solved before,” she said. “Those include how you identify prompt injections, jailbreaks or any kind of adversarial engagement from the user side with your LLM application. I would say the OWASP Top 10 for large language models has been kind of leading the way with the recommendations of what can be tracked.”
WhyLabs says its AI Control Center can give companies unprecedented, real-time control over their AI applications, helping engineering, security and business teams to eliminate a number of risks. For instance, it can detect and prevent unsafe user experiences caused by false, misleading, inappropriate or toxic outputs, and it can identify bad actors and misuse of externally facing chatbot and question-and-answer applications.
Users can also create their own curated rule sets to run advanced threat detectors and continuously fine-tune them based on new examples. The platform also facilitates investigations into any issues that appear in generative AI apps, helping teams develop improvement strategies. Finally, it supports continuous improvement by helping teams build higher-quality datasets from their AI application interactions.
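To make the rule-set idea concrete, here is a minimal, hypothetical sketch of how a curated rule set for screening prompts and responses might be structured. The rule names, patterns and actions below are illustrative assumptions for explanation only, not WhyLabs' actual API or detection logic, which relies on far more sophisticated detectors than simple pattern matching.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    """One entry in a curated rule set (hypothetical structure)."""
    name: str            # e.g. "prompt_injection", "toxicity"
    pattern: re.Pattern  # stand-in for a real detector model
    action: str          # "block" or "flag"

# Illustrative rules only -- production detectors are ML-based, not regexes.
RULES = [
    Rule("prompt_injection",
         re.compile(r"ignore (all )?previous instructions", re.I),
         "block"),
    Rule("toxicity",
         re.compile(r"\b(idiot|moron)\b", re.I),
         "flag"),
]

def evaluate(text: str, rules=RULES):
    """Return (rule name, action) pairs triggered by the given text."""
    return [(r.name, r.action) for r in rules if r.pattern.search(text)]
```

In this sketch, teams would fine-tune the rule set by adding or adjusting entries as new adversarial examples surface, mirroring the continuous-tuning workflow the platform describes.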
All of these capabilities are available now in the WhyLabs AI Control Platform, alongside the company’s existing tools for predictive AI model health.
One of the first companies to use WhyLabs AI Control Platform is Yoodli Inc., creator of a generative AI-powered communications coaching application that integrates with video conferencing platforms such as Zoom, Teams, Slack, Webex and Google Meet.
“WhyLabs AI Control Platform provides us with an accessible and easily adaptable solution that we can trust,” said Yoodli CEO Varun Puri. “We are really excited about the new capabilities that enable us to execute AI control across five critical dimensions: protection against bad actors, misuse, bad customer experience, hallucinations and costs.”
Image: Microsoft Designer