UPDATED 17:04 EDT / JUNE 07 2016


Inundated with data? The key to integration | #SparkSummit

There’s nothing more overwhelming than being bombarded with multiple streams of data. In only a few years, we have gone from gigabytes to terabytes, and the information just keeps coming. As technology weaves itself into the fabric of everyday business, how can organizations most effectively process all the data being generated?

Amit Satoor, senior director of product and solution marketing at SAP SE, spoke with John Walls and George Gilbert (@ggilbert41), cohosts of theCUBE, from the SiliconANGLE Media team, during Spark Summit 2016 about data processing and how to make sense of it all.

Where does it all go?

With so much information being generated, many companies are wondering how to integrate it all. Apache Spark, an open-source data processing engine, acts as a framework that can help process Big Data prior to integration for enterprise use, according to Satoor. It’s all about two challenges: “The size of the data and what sort of process to use.”

Spark acts as a bridge between raw data and the enterprises that need it: It can process information before it’s required, making integration a much smoother process. “You want to make sure the experience is seamless,” said Satoor.
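
Satoor’s description of Spark as a pre-integration processing layer maps onto its DataFrame API. The sketch below is a hedged illustration rather than SAP’s actual pipeline: the paths, column names and the Parquet target are assumptions, standing in for wherever an enterprise system such as SAP HANA would ingest the result.

```python
# Minimal PySpark sketch of processing raw data before enterprise
# integration. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pre-integration-cleanup").getOrCreate()

# Read raw, semi-structured events from a landing zone (placeholder path).
raw = spark.read.json("hdfs:///landing/events/*.json")

# Process the data before it is needed downstream: drop malformed rows,
# normalize timestamps, and aggregate to the grain the warehouse expects.
clean = (
    raw.dropna(subset=["device_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
       .groupBy("device_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Write an integration-ready extract; a JDBC write to a system such as
# SAP HANA would slot in here instead of Parquet.
clean.write.mode("overwrite").parquet("hdfs:///curated/daily_device_counts")
```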

For the nostalgic consumer or business that wants a feeling of familiarity, many of the integration processes remain the same even as the experience changes. As computing evolves and moves toward pattern-based learning, users will have a more interactive and useful experience. All the moving parts, such as Spark, SAP HANA and others, must be gradually integrated in order to create more efficient and effective data processing.

From old to new

As Spark’s capabilities grow, more data from the past can be integrated into current algorithms. Spark can “shine a light” on old research and development projects where data previously went untouched. With usage patterns from the Internet of Things, inputs that already exist can be put to new uses. For example, sensors in airplanes across the globe can gather information that improves maintenance scheduling and safety regulations across all airlines, according to Satoor.
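
As a concrete, hedged sketch of that airplane example (the schema, threshold and paths are invented for illustration), Spark can scan fleet-wide sensor readings that already exist and surface the engines trending out of tolerance:

```python
# Hypothetical sketch of the airplane-sensor example: flag engines whose
# average in-flight vibration exceeds a threshold so maintenance can be
# scheduled early. Schema, threshold, and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fleet-maintenance-signals").getOrCreate()

readings = spark.read.parquet("hdfs:///iot/engine_sensor_readings")

# Average each engine's vibration per flight, then keep the outliers.
flagged = (
    readings.groupBy("airline", "tail_number", "engine_id", "flight_id")
            .agg(F.avg("vibration_mm_s").alias("avg_vibration"))
            .where(F.col("avg_vibration") > 0.8)  # assumed alert threshold
)

# A maintenance scheduler would consume this fleet-wide list of engines
# drifting out of tolerance, rather than waiting for a fixed service interval.
flagged.orderBy(F.desc("avg_vibration")).show(20)
```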

Technology is always changing and adapting, and so too must the data scientists and platforms that companies use. Spark’s framework and variety of modules help improve the user experience and produce new results from old problems, Satoor said.
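
Those modules presumably include Spark SQL and MLlib, Spark’s machine learning library. As a hedged illustration of producing new results from old data (the archive path, feature columns and label are invented), MLlib can fit a simple model over historical records to surface patterns that previously went unnoticed:

```python
# Hedged MLlib sketch: learn which readings preceded recorded failures
# in archived data. Columns and paths are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("patterns-from-old-data").getOrCreate()

history = spark.read.parquet("hdfs:///archive/rd_project_records")

# Bundle numeric columns into the single feature vector MLlib expects.
features = VectorAssembler(
    inputCols=["temperature", "pressure", "run_hours"],
    outputCol="features",
).transform(history)

# Fit a classifier over the archived records ("failed" is a 0/1 label).
model = LogisticRegression(labelCol="failed").fit(features)
print(model.coefficients)
```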

“Spark is always evolving,” Satoor added, and it is that evolution that keeps everyone a step ahead.

Watch the full interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of Spark Summit 2016.

Photo by SiliconANGLE
