Database company Weaviate speeds up AI development with flexible vector embeddings service
Dutch artificial intelligence database startup Weaviate B.V. is looking to streamline the data vectorization process with a new feature that automatically transforms unstructured information into vector embeddings.
Announced today and available now in preview, Weaviate Embeddings is a pay-as-you-go service built on open-source models that promises to accelerate the process of preparing unstructured data for AI applications.
The Dutch startup is best known for its open-source vector database, which is geared toward AI development. It’s designed to cater to AI’s enormous appetite for unstructured data, which is the fuel that powers generative AI chatbots such as ChatGPT.
Weaviate stores unstructured information as vector embeddings, which are mathematical structures that represent everything from documents and purchase logs to images and audio files. Storing the data as vectors makes it much easier for AI models to understand and process.
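For readers less familiar with the mechanics, here is a minimal illustrative sketch in Python of what that transformation looks like. It uses the open-source sentence-transformers library and a small general-purpose model chosen purely for illustration, not anything Weaviate-specific:

```python
# Minimal sketch: turning unstructured text into vector embeddings.
# The library and model here are illustrative choices, not the ones
# Weaviate itself uses.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

documents = [
    "Invoice #1042: 3x office chairs, delivered 2024-05-01",
    "Support ticket: customer cannot reset password",
]

# Each document becomes a fixed-length vector of floats; semantically
# similar texts end up close together in this vector space.
embeddings = model.encode(documents)
print(embeddings.shape)  # e.g. (2, 384) -- two documents, 384 dimensions each
```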
That’s all very well, but users face a mountain to climb when it comes to preparing their datasets to be transformed into vector embeddings. In addition, the prompts users enter to query that data must be transformed into embeddings as well.
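Continuing the illustrative sketch above, the query side works the same way: the prompt is embedded with the same model as the documents and compared against the stored vectors, here with a simple cosine-similarity calculation:

```python
# Continuation of the sketch above (reuses model, documents, embeddings).
# The user's query must be embedded with the same model so that it lives
# in the same vector space as the documents.
import numpy as np

query_vec = model.encode(["which customer had a login problem?"])[0]

# Cosine similarity between the query vector and each document vector.
scores = embeddings @ query_vec / (
    np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query_vec)
)
best = int(np.argmax(scores))
print(documents[best])  # expect the support-ticket document
```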
Traditionally, developers use embedding services to perform this essential task of data vectorization, but these often become a bottleneck. The problem is that they impose restrictive rate limits on users, slowing down their applications. They also rely on remote application programming interface calls, further hurting performance, and they use proprietary models to lock developers into their ecosystems.
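In code, that traditional pattern typically looks something like the hedged sketch below, which uses OpenAI's embeddings endpoint purely as an example of a third-party provider; the model name and backoff policy are illustrative choices, not details from the article:

```python
# Hedged sketch of the traditional pattern: every batch of documents goes
# out over a remote API call to a third-party embedding provider, and the
# client has to back off whenever it hits the provider's rate limits.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed_batch(texts, retries=5):
    for attempt in range(retries):
        try:
            resp = client.embeddings.create(
                model="text-embedding-3-small",  # illustrative model choice
                input=texts,
            )
            return [item.embedding for item in resp.data]
        except RateLimitError:
            # The rate limit becomes the bottleneck: the application simply waits.
            time.sleep(2 ** attempt)
    raise RuntimeError("embedding provider kept rate-limiting the request")
```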
Weaviate Embeddings provides developers with an alternative that’s based on open-source models hosted in the Weaviate Cloud. It eliminates the need to connect to a third-party embedding provider, while ensuring developers maintain full control of all of their embeddings. Moreover, they will be able to switch between different embedding models without having to manually reindex their data, the startup said.
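As an illustration of that alternative, the sketch below creates a collection in Weaviate Cloud that relies on the hosted embedding service for both inserts and queries. The specific client calls (connect_to_weaviate_cloud, Configure.Vectorizer.text2vec_weaviate) reflect the v4 Python client as best understood here and should be verified against Weaviate's documentation; the cluster URL and API key are placeholders:

```python
# Hedged sketch: a Weaviate Cloud collection whose vectors are produced by
# Weaviate's own hosted embedding service, so no third-party embedding API
# is involved. Client calls are assumptions based on the v4 Python client.
import weaviate
from weaviate.classes.init import Auth
from weaviate.classes.config import Configure

client = weaviate.connect_to_weaviate_cloud(
    cluster_url="https://your-cluster.weaviate.cloud",        # placeholder
    auth_credentials=Auth.api_key("YOUR_WEAVIATE_API_KEY"),   # placeholder
)

# The vectorizer config tells Weaviate Cloud to embed inserted objects and
# incoming queries itself, using the hosted model (currently Snowflake's
# Arctic-Embed, per the article).
articles = client.collections.create(
    "Articles",
    vectorizer_config=Configure.Vectorizer.text2vec_weaviate(),
)

articles.data.insert({"title": "Quarterly report", "body": "Revenue grew 12%..."})

# Queries are vectorized server-side as well, keeping everything in one place.
results = articles.query.near_text(query="how did revenue change?", limit=2)
for obj in results.objects:
    print(obj.properties["title"])

client.close()
```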
The new service runs on graphics processing units and brings AI models closer to where the vector data is stored, ensuring low latency. And unlike rival services, Weaviate says it doesn’t impose rate limits or caps on users, while its pay-as-you-go pricing keeps things simple.
Weaviate Embeddings is available in preview on Weaviate Cloud, though users are currently limited to a single embedding model, Snowflake Inc.’s Arctic-Embed. The company said it will add support for more models starting early next year.
Weaviate Chief Executive Bob van Luijt said the goal is to help developers bring their AI models closer to the data they rely on.
“Weaviate Embeddings makes it simple to build and manage AI-native applications,” he said. “For those who prefer a custom approach, our open-source database supports any way they want to work. It’s all about giving developers the freedom to choose what’s best for them.”
The launch of Weaviate Embeddings is the latest in a string of innovations from the Dutch company. Earlier this year, it debuted an AI Workbench for developers, consisting of a prebuilt recommender agent and various tools for queries, collections and data exploration. It also provides a selection of hot, warm and cold data storage tiers, so developers can better balance the costs of their AI applications with performance.
Image: SiliconANGLE/Perchance.org