Root Signals raises $2.8M for its AI reliability monitoring platform
Root Signals Inc., a startup that helps companies monitor the reliability of their artificial intelligence models, has closed a $2.8 million investment to advance its growth plans.
The funding round was announced today. Root Signals, which maintains offices in Palo Alto and Helsinki, said that Angular Ventures led the round with participation from Business Finland.
Large language models sometimes generate inaccurate responses to user prompts. Such mistakes, which are known as hallucinations, can be challenging to avoid. As a result, companies are finding it difficult to apply LLMs to high-stakes business tasks that have little room for error.
Root Signals is working to address the challenge. The company offers a cloud platform that can help developers find LLMs with high output quality, as well as ensure those LLMs maintain their reliability over time.
One of the reasons that hallucinations are difficult to mitigate is that LLMs sometimes answer the same prompt in different ways. This limits the effectiveness of traditional error detection methods, such as scripts configured to detect the presence of particular keywords in an LLM’s responses. While some of a model’s erroneous prompt responses may contain a certain keyword, others might not, which means certain incorrect answers won’t be spotted by the detection scripts.
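The limitation can be illustrated with a short sketch. The keyword list and function below are hypothetical, not Root Signals' code; they simply show how a static keyword filter catches one phrasing of a faulty claim but misses a paraphrase of the same claim.

```python
# Illustrative sketch: a naive keyword-based error detector.
# The keywords and example responses are made up for demonstration.
FORBIDDEN_KEYWORDS = {"guaranteed returns", "risk-free"}

def keyword_check(response: str) -> bool:
    """Return True if the response trips a forbidden keyword."""
    text = response.lower()
    return any(kw in text for kw in FORBIDDEN_KEYWORDS)

# The same faulty claim, phrased two ways: only one is caught.
caught = keyword_check("This fund offers guaranteed returns.")       # True
missed = keyword_check("You are certain to profit from this fund.")  # False
```

Both responses make the same misleading promise, but only the first contains a listed keyword, so the second slips past the filter.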
Root Signals is using an approach called LLM-as-a-judge to overcome that limitation. The idea is to monitor the reliability of a language model's output using another language model. Unlike a static script, an LLM can catch inaccuracies in an AI application's output even when phrasing or other parameters vary across prompt responses.
“You have to be pedantic in instructing it, and then check its work in seven different ways – and then check again tomorrow,” said Root Signals co-founder and Chief Executive Officer Ari Heljakka. “We make this scalable with metrics that are understandable and easy to maintain in production.”
Root Signals’ platform includes a dashboard that developers can use to compare the accuracy of the LLMs they’re considering for a software project. To test the LLMs, the user enters sample prompts that function as a kind of benchmark test. Root Signals automatically finds the LLM that answers the prompts with the highest accuracy and also displays related metrics such as inference costs.
After picking an LLM for a software project, developers can use more than 50 so-called evaluators to make sure the model retains its reliability in production. Evaluators are hallucination-detection workflows that Root Signals ships with its platform. They monitor metrics such as the accuracy of an LLM’s answers and their relevance to the user’s query.
Users whose requirements aren’t met by the built-in evaluators can create their own. A bank developing a customer support chatbot, for instance, could create a workflow that ensures its AI doesn’t accidentally generate investment advice. A startup with a programming assistant could use Root Signals to prevent its AI from outputting proprietary, copyrighted code.
Root Signals says that its platform is used by AI startups, “incumbent industry players” and other customers. The company will invest its newly raised funding in sales and marketing initiatives to further grow its installed base. Root Signals also plans to enhance its platform with new features.
Image: Unsplash