Setting up a data processing pipeline is a juggling act. What applications work with the backend? Can those applications work together? What about fitting it into existing infrastructure?
The best answer to those questions is a system that can reach across many applications and infrastructures, according to Kenneth Knowles (pictured), software engineer at Google Inc.
One such system is Apache Beam, an open-source, unified model for data processing workflows. “The truth is it’s extremely general,” said Knowles, who spoke to George Gilbert (@ggilbert41), co-host of theCUBE, SiliconANGLE’s mobile live streaming studio, at the Flink Forward 2017 event last week in San Francisco, California.
Knowles and Gilbert discussed Beam, Apache Flink and data processing solutions. (*Disclosure below.)
A unification of backends and languages
The genesis of Beam was a code donation to Apache built around Google Cloud Dataflow, along with related work on Apache Spark and Flink, Knowles explained. These three efforts toward the same end highlighted the need for a unified model.
Beam’s usage profile puts it in direct competition with MapReduce, a component of the Apache Hadoop processing framework. Knowles confirmed that Beam is intended as a replacement for MapReduce. Anyone writing a MapReduce pipeline should benchmark it against a Beam pipeline, he suggested.
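To make the comparison concrete, here is the classic MapReduce word-count pattern sketched in plain Python: map each line to (word, 1) pairs, shuffle by key, then reduce each group by summing. This is an illustrative sketch, not Beam's API; a Beam pipeline expresses the same logic as portable transforms that can run on any supported backend.

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in every line.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group all values under their key.
def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the grouped counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["the"])  # → 2
```

In Beam, these three hand-written phases collapse into a couple of transforms applied to a PCollection, and the same pipeline code can target Dataflow, Spark or Flink.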
As for working with Flink, Beam still has a ways to go. All the backends are missing some pieces of the final model, Knowles confirmed. Beam is not trying to take the mere intersection of these backends, however; the community is working to add features for every system, he said.
“For myself, the goal is that nobody is going to be locked into a particular backend,” Knowles concluded.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of Flink Forward 2017. (*Disclosure: TheCUBE is a paid media partner at Flink Forward. The conference sponsor, data Artisans, does not have editorial oversight of content on theCUBE or SiliconANGLE.)