

Syncsort Inc. hopes to bring the entry barrier to real-time analytics down a notch with a new iteration of its flagship data integration software that can hook into an organization’s stream processing pipelines to simplify the handling of the information flowing through them. Facilitating that is newly added integration with Apache Kafka.
The open-source message broker has risen in popularity in recent years thanks to a combination of reliability and low-latency delivery that is ideal for real-time information that needs to be processed while it’s still fresh. Administrators can now carry out that work from the graphical interface of Syncsort’s software, which offers a number of conveniences over manual coding.
The updated version of DMX-h makes it possible to filter incoming information from Kafka, make modifications as necessary and push out the analysis-ready data to the appropriate application for processing. That application is often Apache Spark, which Syncsort has also plugged into its platform to let customers handle every aspect of their integration work from the same place.
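For a sense of what that graphical workflow abstracts away, the hand-written equivalent of a simple filter-and-transform step might look roughly like the sketch below. It uses the kafka-python client against a local broker; the topic names, record fields and filter condition are hypothetical and not taken from Syncsort’s product.

```python
# Minimal sketch of a hand-coded Kafka filter/transform step of the kind a
# tool like DMX-h hides behind its GUI. Broker address, topic names and
# record fields are assumptions for illustration only.
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer(
    "transactions",                      # hypothetical source topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

for message in consumer:
    record = message.value
    # Filter: keep only records worth analyzing (condition is illustrative).
    if record.get("amount", 0) < 100:
        continue
    # Modify: normalize a field before handing the data to the analytics layer.
    record["currency"] = record.get("currency", "USD").upper()
    # Push the analysis-ready record to the topic the processing engine reads.
    producer.send("transactions-clean", record)
```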
Analysts are thus able to visually create data processing workflows in DMX-h’s interface and have the algorithms under the hood translate the high-level specifications into a low-level format that Spark can execute. That functionality also works with other platforms, making the code interoperable among the different components of an organization’s stream processing environment.
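Syncsort has not published the low-level code its translation layer emits, but a comparable job written directly against Spark’s Structured Streaming API could look like the following sketch, which reads from Kafka, filters, transforms and writes the results out. The topic, schema, thresholds and output paths are all assumptions made for illustration.

```python
# Hypothetical Spark Structured Streaming job equivalent to a simple
# read-filter-transform-write workflow. Requires the spark-sql-kafka
# package on the classpath; names and paths are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, upper
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-to-spark-sketch").getOrCreate()

schema = StructType([
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "transactions")          # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("rec"))
    .select("rec.*")
    .where(col("amount") >= 100)                  # illustrative filter
    .withColumn("currency", upper(col("currency")))
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "/tmp/transactions_clean")    # hypothetical output path
    .option("checkpointLocation", "/tmp/checkpoints/transactions_clean")
    .start()
)
query.awaitTermination()
```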
The enhancement builds on an open-source connector that Syncsort launched for Spark last month to simplify the analysis of data from IBM Corp.’s System z mainframes. With a sizable portion of the world’s financial transactions still going through big iron, that information is potentially invaluable for financial services providers looking to adopt the speedy analytics engine.