UPDATED 10:55 EDT / MARCH 12 2026

AI

Qdrant raises $50M to bring flexible vector search to production AI systems

Open-source vector search startup Qdrant Solutions GmbH today announced it has raised $50 million in early-stage funding to pave the way for smarter and more reactive artificial intelligence apps.

AVP led the Series B round, with participation from Bosch Ventures, Unusual Ventures, Spark Capital, and 42CAP. To date, the company has raised $87.8 million in capital, including a $28 million Series A round led by Unusual Ventures in early 2024.

Built from the ground up in Rust, Qdrant provides foundational memory for otherwise stateless large language models, which process each request “one-and-done” and retain nothing between calls.

Vector databases emerged to solve a narrow problem: retrieving the nearest neighbors in embedding space, giving LLMs a way to pull context out of static data. But today’s AI systems and datasets look nothing like that. They are oceans of dynamic, constantly changing multimodal data: embedded text documents, images, audio and video.

The advent of agentic AI has also changed how intelligent applications retrieve and use data in real time. Retrieval is no longer a simple iterative loop of searches; it’s a free-flowing give-and-take in which agents execute thousands of queries across different contexts and pipelines, mixing semantic search with reasoning under production-scale pressure.

According to Qdrant, the paradigm of vector search needs to adapt to the changing needs of the AI landscape and that means rising to meet the ocean in the storm.

“We used to call our product in the very beginning, neural search engine, basically. It’s all about search and information retrieval — database is the wrong term here,” André Zayarni, co-founder and chief executive of Qdrant, told SiliconANGLE in an interview.

The result is that the search engine needs to adapt to the problem instead of forcing the problem to fit the tool. The system needs to provide composability, meaning modules and flexibility to choose the best toolset for the job at hand. Whether the engineering team wants to optimize for maximum accuracy, lowest latency or cost efficiency at scale, the dials need to allow that.

Qdrant built its system as composable and modular, meaning AI teams can tune it to fit their use case in production.

“All the use cases out there are different,” Zayarni said. “They come with different requirements… some are looking for really highest quality of results, while there are companies operating on terabytes of data, and for them to get this data in time is even more important than having 99% precision in the end.”

Today, retrieval-augmented generation, or RAG, pipelines have become the basic AI framework behind both chatbots and AI agents. RAG uses vector databases to improve the output of LLMs by retrieving specific, authoritative knowledge from vast stores of information outside their original training data. This approach is fundamental to reducing AI hallucinations, instances when models generate plausible but incorrect output. Some RAG pipelines are themselves managed by AI agents that feed retrieved context to other agents.
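The retrieval step described above can be sketched in a few lines of plain Python. This is a minimal illustration, not Qdrant’s API: the hard-coded three-dimensional vectors stand in for real embeddings (which an embedding model would produce, and which a vector database like Qdrant would store and search at scale), and the document texts are invented for the example.

```python
from math import sqrt

# Toy "embeddings": stand-ins for vectors an embedding model would produce.
# In production, a vector database such as Qdrant stores and searches these.
DOCS = {
    "returns":  ([0.9, 0.1, 0.0], "Items may be returned within 30 days."),
    "shipping": ([0.1, 0.9, 0.1], "Standard shipping takes 3-5 business days."),
    "warranty": ([0.0, 0.2, 0.9], "Hardware carries a one-year warranty."),
}

def cosine(a, b):
    """Cosine similarity: a common nearest-neighbor metric in vector search."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the top-k document texts closest to the query embedding."""
    ranked = sorted(DOCS.values(),
                    key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# A query embedding near the "shipping" vector retrieves the shipping text,
# which a RAG pipeline would splice into the LLM prompt as grounding context.
context = retrieve([0.2, 0.8, 0.1], k=1)
prompt = f"Answer using this context: {context[0]}\nQuestion: How long is shipping?"
print(context[0])
```

The brute-force scan here is where a real vector database earns its keep: at millions of vectors, systems like Qdrant replace the linear search with approximate nearest-neighbor indexes to keep latency low.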

Furthermore, the database is fully open-source, with deployment options to suit different stages of adoption. Developers can run it on a local machine in a Docker container or as a standalone Rust binary for quick testing, self-host it on Kubernetes, use the fully managed Qdrant Cloud, or deploy it in a hybrid or private cloud.

Image: geralt/Pixabay
