UPDATED 11:35 EDT / MAY 07 2026


MongoDB announces platform enhancements for enterprise-ready AI production

Popular NoSQL database company MongoDB Inc. today announced a new set of capabilities at the company's .Local conference in London, bringing together into one platform everything software and artificial intelligence engineers need to run agents in production.

The company announced the availability of MongoDB 8.3, building on previous generations of the database software with superior performance aimed at the agentic AI era. To support this, MongoDB added enhanced embeddings and vector search capabilities, empowering the next generation of agent data retrieval. 

“The hardest part of running agents in production isn’t the model. It’s the data layer underneath it,” President and Chief Executive CJ Desai said. 

AI agents, especially at scale, depend heavily not just on how quickly they retrieve data, but on the amount of information context that the AI models can hold in “memory” at one time and read accurately. 

MongoDB 8.3 brings performance to enterprise AI

With the release of 8.3 today, MongoDB said the platform now delivers up to 45% more reads, 35% more writes, 15% more high-integrity transactions and 30% more complex operations than version 8.0.

To gain these benefits, developers and engineers do not need to change a single line of code or any part of their infrastructure. The company said MongoDB is prepared for enterprise production AI systems that require sub-100-millisecond retrieval, subsecond context updates and zero downtime. 

“We’ve also moved common data transformations into the database itself, so teams no longer have to maintain external pipelines just to feed their agents,” Chief Product Officer for Core Products Ben Cefalo said. 

Enhancing retrieval accuracy

To complement these speed gains, the company released Automated Voyage Embeddings in MongoDB Vector Search, now in public preview.

This update automatically produces vector embeddings, the mathematical representations of words, sentences, paragraphs and other chunks of data, so that AI agents can retrieve them swiftly. Embedding models convert information into vectors: arrays of numbers whose proximity in vector space reflects how closely two pieces of text relate to one another.
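To make the idea concrete, here is a minimal TypeScript sketch of similarity-based retrieval over embedding vectors. The three-dimensional vectors and document texts are invented for illustration; real embedding models, including Voyage AI's, produce vectors with hundreds or thousands of dimensions, and production systems use a vector index rather than a sorted array.

```typescript
// Toy illustration of the principle behind vector search: rank documents
// by how close their embedding vectors are to the query's vector.
type Embedded = { text: string; vector: number[] };

// Cosine similarity: 1 means the vectors point the same way, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A tiny in-memory "index" of pre-embedded documents (vectors are made up).
const index: Embedded[] = [
  { text: "database performance tuning", vector: [0.9, 0.1, 0.2] },
  { text: "chocolate cake recipe",       vector: [0.1, 0.9, 0.3] },
];

// Retrieval: pretend the query vector came from the same embedding model,
// then rank documents by similarity to it.
const queryVector = [0.85, 0.15, 0.25];
const ranked = [...index].sort(
  (a, b) =>
    cosineSimilarity(queryVector, b.vector) -
    cosineSimilarity(queryVector, a.vector)
);
console.log(ranked[0].text); // prints "database performance tuning"
```

Because the query vector points in nearly the same direction as the first document's vector, that document ranks highest, which is exactly how an agent retrieves semantically relevant context rather than exact keyword matches.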

This capability grows out of MongoDB's acquisition of Voyage AI Inc. last year, an investment that is now paying dividends for the company. Normally, vector embeddings must be generated on demand whenever text, images or other documents are imported by AI engineers or the business team. Automating this process takes a burden off the engineering team and frees it to focus on other work.

“When AI tools and agents produce a wrong answer, the instinct is to blame the model,” said Pablo Stern, chief product officer for AI and emerging products at MongoDB. “But the data platform is what enables the agent with the right context and memory to act correctly.” 

In addition to automating this process, MongoDB moved the LangGraph.js Long-Term Memory Store, a memory system for JavaScript and TypeScript developers, into general availability. Using it, coders writing in these languages can access agentic memory capabilities that were previously available mostly to Python developers, powered by the company's Atlas backend, with no need for an additional database.
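The pattern such long-term memory stores expose can be sketched in a few lines. This is not the LangGraph.js or Atlas API; it is a self-contained illustration, with a plain `Map` standing in for the database backend, of the namespaced key-value model that agent memory stores typically follow, where a namespace scopes memories to a particular user or agent so they persist across conversations.

```typescript
// Conceptual sketch of a long-term memory store for agents (hypothetical
// class, not a real library API). Memories live in namespaces such as
// ["user-42", "profile"] so each agent/user pair keeps its own state.
type MemoryItem = { key: string; value: Record<string, unknown> };

class InMemoryLongTermStore {
  private data = new Map<string, Map<string, Record<string, unknown>>>();

  // Persist a memory under a namespace and key.
  put(namespace: string[], key: string, value: Record<string, unknown>): void {
    const ns = namespace.join("/");
    if (!this.data.has(ns)) this.data.set(ns, new Map());
    this.data.get(ns)!.set(key, value);
  }

  // Recall a single memory.
  get(namespace: string[], key: string): Record<string, unknown> | undefined {
    return this.data.get(namespace.join("/"))?.get(key);
  }

  // List every memory in a namespace, e.g. to assemble an agent's context.
  list(namespace: string[]): MemoryItem[] {
    const ns = this.data.get(namespace.join("/")) ?? new Map();
    return [...ns.entries()].map(([key, value]) => ({ key, value }));
  }
}

// Usage: an agent records a fact in one session and recalls it in the next.
const store = new InMemoryLongTermStore();
store.put(["user-42", "profile"], "language", { preferred: "TypeScript" });
console.log(store.get(["user-42", "profile"], "language"));
```

In a production setup the `Map` would be replaced by a durable database, which is the gap MongoDB says its Atlas-backed store fills for JavaScript and TypeScript developers.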

Secure cross-region connectivity

Highly regulated organizations such as banks, healthcare providers and government agencies require specialized data residency and security deployments for data both at rest and in transit.

With support for Amazon Web Services PrivateLink now generally available, database traffic between MongoDB clusters in different AWS regions stays within the AWS private network. Even cross-region traffic is never exposed to the public internet.

The company said this will allow database operations and engineering teams to build cross-region clusters that security teams can approve faster, with fewer exceptions. The vision of this service is to provide a platform with fewer tradeoffs between compliance, technology stack and global reach. 

Image: SiliconANGLE/Microsoft Designer
