Context.ai raises $3.5M to help companies build better products powered by LLMs
Context.ai, a product analytics company that provides a platform for understanding applications powered by artificial intelligence large language models, announced Wednesday that it has raised $3.5 million in funding.
The round was co-led by Google Ventures, the venture capital investment arm of Alphabet Inc., and Tomasz Tunguz at Theory Ventures, with participation from 20SALES.
Large language models are a type of artificial intelligence algorithm trained on large amounts of data, enabling them to recognize human speech, translate, predict and generate content. They can also reply with humanlike conversational speech, as OpenAI LP’s highly popular chatbot ChatGPT does. Many businesses have incorporated LLMs into their applications to allow users to “talk” to their products and data.
Context.ai produces an analytics service that gives customers visibility into how LLMs perform when discussing topics, helps identify how their product is performing and helps debug its operations through a rich understanding of user interactions.
“The current ecosystem of analytics products are built to count clicks,” said Context.ai co-founder and Chief Technology Officer Alex Gamble. “But as businesses add features powered by LLMs, text now becomes a primary interaction method for their users. Making sense of this mountain of unstructured words poses an entirely new technical challenge for businesses keen to understand user behavior.”
To help developers and businesses better understand their LLM product performance, Context.ai’s platform takes user transcripts of conversations with the AI and clusters them by topic and keyword to see what is talked about most. This analysis can reveal what users want from the system, helping teams tune it and provide better support.
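Context.ai has not published how its clustering works, but conceptually the step resembles the rough sketch below, which groups user messages with TF-IDF features and k-means and pulls the top keywords per cluster. All of the data and parameters here are illustrative, not Context.ai’s actual pipeline or API.

```python
# Illustrative sketch only -- not Context.ai's actual pipeline.
# Groups raw user messages into topics and lists top keywords per topic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcripts = [
    "How do I reset my password?",
    "I forgot my password, help",
    "What does the pro plan cost per month?",
    "Is there a discount on the annual pricing?",
    "The export to CSV keeps failing",
    "CSV download gives me an error",
]

# Turn each message into a TF-IDF vector, then cluster into topics.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)

# The highest-weighted terms in each cluster center hint at what the topic is about.
terms = vectorizer.get_feature_names_out()
for cluster_id in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[cluster_id].argsort()[::-1][:3]
    print(cluster_id, [terms[i] for i in top])
```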
The platform also provides sentiment analysis, identifying user satisfaction with answers for each topic. This gives customers an idea of how users are interacting with the product, what their goals are, how well it is meeting their needs and where it is falling short.
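The announcement does not detail how Context.ai scores sentiment. As a rough illustration only, per-topic satisfaction could be approximated by averaging an off-the-shelf sentiment score over the messages assigned to each topic, as in this sketch; VADER from NLTK is an arbitrary stand-in, and the topic labels are assumed to come from a prior clustering step.

```python
# Rough illustration, not Context.ai's method: average a sentiment score
# over the messages assigned to each topic to estimate satisfaction.
from collections import defaultdict
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# (topic label, user message) pairs -- labels would come from the clustering step.
labeled_messages = [
    ("billing", "Thanks, that cleared up the invoice question perfectly."),
    ("billing", "This pricing answer makes no sense and wasted my time."),
    ("export", "Great, the CSV export works now."),
]

scores = defaultdict(list)
for topic, message in labeled_messages:
    # The compound score ranges from -1 (negative) to +1 (positive).
    scores[topic].append(sia.polarity_scores(message)["compound"])

for topic, values in scores.items():
    print(topic, round(sum(values) / len(values), 2))
```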
Product developers can also stay ahead of potential pitfalls in LLM behavior, such as mishandling sensitive topics, going off the rails, giving false answers or arguing with customers. The platform can also track user retention and surface changes in user reactions after modifications to the LLM’s behavior.
“It’s hard to build a great product without understanding users and their needs,” said Henry Scott-Green, Context.ai’s co-founder and chief executive. “Context.ai helps companies understand user behavior and measure product performance, bringing crucial user understanding to developers of LLM-powered products.”
The platform is model-agnostic and supports many major foundation models, allowing users to integrate its software development kit with whatever LLM they want to analyze.
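Context.ai’s SDK is not shown in the announcement. Purely as an illustration of what model-agnostic integration can look like, the hypothetical sketch below wraps any LLM call in a thin logger that records each prompt and response for later analysis; every class and function name in it is invented for the example.

```python
# Hypothetical sketch of a model-agnostic logging wrapper -- not Context.ai's SDK.
# The idea: whatever LLM a product calls, capture the prompt/response pair
# so an analytics backend can cluster and score it later.
from typing import Callable, Dict, List


class ConversationLogger:
    def __init__(self) -> None:
        self.records: List[Dict[str, str]] = []

    def wrap(self, llm_call: Callable[[str], str]) -> Callable[[str], str]:
        """Return a drop-in replacement for the LLM call that logs each exchange."""
        def logged_call(prompt: str) -> str:
            response = llm_call(prompt)
            self.records.append({"prompt": prompt, "response": response})
            return response
        return logged_call


# Usage with any model: a stub stands in for a real LLM client here.
def fake_llm(prompt: str) -> str:
    return "stubbed answer to: " + prompt


logger = ConversationLogger()
chat = logger.wrap(fake_llm)
chat("How do I export my data?")
print(logger.records)
```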
The company said that it will use the investment to build out its engineering team in order to increase the number of features and tools it can provide for enterprise customers.
Context.ai counts a number of companies among its customers, including the AI-agent service Cognosys, the weekly advice column Lenny’s Newsletter, the AI-powered people discovery platform Juicebox and the AI-powered charting tool ChartGPT.