Hazelcast debuts new in-memory data processing platform
Hazelcast Inc. today introduced a new in-memory data processing platform that it says will enable companies to analyze historical and real-time information at the same time.
San Mateo, California-based Hazelcast is a data management startup backed by more than $60 million in venture funding. Its software is used by major enterprises such as JPMorgan Chase & Co., Walmart Inc. and Capital One Financial Corp., which is also an investor in the startup through its venture capital arm.
There is a growing number of applications in the enterprise that require the ability to process both real-time and historical information. For example, a manufacturer might have a software tool that monitors the status of its factory equipment using real-time sensor data and generates an alert when there’s a potential hardware failure. The same tool might have a feature that analyzes historical maintenance logs from the past 12 months to identify the most frequently recurring malfunctions.
To build such an application, the manufacturer would usually need two separate data management systems: one to process the real-time sensor data and another to analyze the historical maintenance logs. Hazelcast says its new Hazelcast Platform unifies the two workflows, enabling companies to build their data-crunching applications on a single platform.
The company says that the benefit is simpler application development. Instead of having to work with two separate data management systems, a company’s software engineers can write their applications using just one, which speeds up development. It potentially also simplifies application maintenance because there are fewer moving pieces to manage.
“While data continues to be an enterprise’s most valuable resource, it’s only useful if they can derive actionable insights in a timely manner,” said Hazelcast Chief Executive Officer Kelly Herrell. “The Hazelcast Platform represents a monumental step forward for the creation of real-time, intelligent applications that help enterprises capture value they otherwise would miss.”
Under the hood, the platform has two major components. The first is an in-memory database that stores the information a company is looking to process and carries out users’ analytics queries. The second is a stream processing engine responsible for ingesting real-time information.
The in-memory database can run queries quickly because it skips a time-consuming task normally involved in analyzing information. A traditional database stores records on a server’s storage drives and, when it’s time to process them, moves the records into the server’s memory so they may be analyzed. The Hazelcast Platform, in contrast, keeps records in memory from the outset, which avoids the delay of having to retrieve them from storage and thereby speeds up analytics.
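The difference can be illustrated with a toy Python sketch of the general idea (this is an illustration of in-memory versus disk-backed access, not Hazelcast's actual implementation): a traditional store reads records from storage on every query, while an in-memory store answers directly from RAM.

```python
import json
import os
import tempfile

# Toy "traditional" store: records live on disk and are loaded per query.
class DiskStore:
    def __init__(self, records):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        with open(self.path, "w") as f:
            json.dump(records, f)

    def get(self, key):
        # A storage round trip happens on every query.
        with open(self.path) as f:
            return json.load(f)[key]

# Toy in-memory store: records are kept in RAM from the outset.
class MemoryStore:
    def __init__(self, records):
        self.records = dict(records)

    def get(self, key):
        # No storage access at query time.
        return self.records[key]

# Hypothetical sensor readings, for illustration only.
data = {"sensor-1": 72, "sensor-2": 68}
disk, mem = DiskStore(data), MemoryStore(data)
print(disk.get("sensor-1"), mem.get("sensor-1"))
```

Both stores return the same answer; the in-memory one simply skips the storage round trip, which is where the speedup comes from.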
That’s the same capability offered by the company’s previous flagship data management platform, IMDG. The newly announced Hazelcast Platform replaces IMDG and adds a number of new features.
One of the additions is the ability to keep data on storage for backup purposes. Storing data in memory speeds up processing, but it comes with a risk: random-access memory is volatile, meaning that if the server malfunctions or there’s a power outage, the information it holds is lost. Keeping a copy of the information on storage makes it easier to recover from outages.
Hazelcast has also expanded the platform’s SQL support, enabling users to carry out more types of computations on their data. Thanks to the update, it’s now possible to perform aggregations and sorts. An aggregation is a computation through which a database turns multiple pieces of information into a single piece of information, for example by calculating the average of five different products’ prices. A sort is an operation that rearranges the order in which records are returned, for example listing products from cheapest to most expensive, to facilitate easier processing.
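The two operations can be illustrated with a minimal, library-free Python sketch (the product names and prices are invented for illustration; in Hazelcast itself these computations would be expressed in SQL):

```python
# Hypothetical product records. In a SQL database the equivalents would be
# roughly: SELECT AVG(price) FROM products   (aggregation)
#          SELECT * FROM products ORDER BY price   (sort)
products = [
    {"name": "A", "price": 10.0},
    {"name": "B", "price": 20.0},
    {"name": "C", "price": 30.0},
    {"name": "D", "price": 40.0},
    {"name": "E", "price": 50.0},
]

# Aggregation: many records collapse into a single value.
average_price = sum(p["price"] for p in products) / len(products)

# Sort: the same records, rearranged by price.
by_price = sorted(products, key=lambda p: p["price"])

print(average_price)        # 30.0
print(by_price[0]["name"])  # A
```

The aggregation discards the individual records and keeps only the summary value, while the sort keeps every record and changes only their order.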
The other major component of the Hazelcast Platform besides its in-memory database is a built-in stream processing engine responsible for ingesting real-time information. According to the startup, the engine can process more than a billion data points per second with latency of less than 26 milliseconds. It can also perform operations on the incoming data, for example filtering out duplicate items, to ease analysis later on.
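Duplicate filtering of the kind described can be sketched in a few lines of Python (a simplified, single-process illustration, not Hazelcast's actual engine, which distributes this work across a cluster):

```python
def deduplicate(stream):
    """Yield each data point the first time it appears, dropping repeats.

    A real stream processor would bound this 'seen' state (for example
    with a time window) so memory use stays constant; here it grows
    without limit, which is fine only for a short demonstration.
    """
    seen = set()
    for item in stream:
        if item not in seen:
            seen.add(item)
            yield item

# Simulated sensor readings arriving with duplicates (invented values).
incoming = [101, 102, 101, 103, 102, 104]
cleaned = list(deduplicate(incoming))
print(cleaned)  # [101, 102, 103, 104]
```

Removing repeats as data arrives means downstream analytics never see the duplicates, which is cheaper than cleaning the data after it has been stored.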
Image: Hazelcast