

The way programs handle incoming information has changed drastically since the field's beginnings 50 years ago. The Internet of Things (IoT) and the pace at which businesses operate and interface today have made it critical that these applications keep up.
Giving an overview of how Spark is combining structured streaming with traditional data input, as well as where the platform is headed, were George Gilbert (@ggilbert41) and John Walls, cohosts of theCUBE, from the SiliconANGLE Media team, live from HPE Discover 2016.
One of the biggest challenges facing developers today is working with high-volume streaming data and more traditional, static data at the same time. Spark is designed with this in mind and can scale to match the volume of data it receives.
“One of the first things Spark did that was appealing to developers was they made it very easy to work with streams and traditional tables. Now, they’re making it so they’re exactly the same,” explained Gilbert.
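The unification Gilbert describes is Spark's Structured Streaming, in which a stream is treated as an unbounded table so the same DataFrame code runs over both static and streaming input. The sketch below, in Scala, illustrates the idea under assumptions not taken from the interview: the input path and the "userId" field are invented for the example.

```scala
import org.apache.spark.sql.SparkSession

object StreamsAsTables {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streams-as-tables")
      .getOrCreate()

    // Traditional input: a static table read once from storage.
    val staticEvents = spark.read.json("/data/events/")

    // Streaming input: the same directory treated as an unbounded table;
    // files that arrive later are picked up incrementally.
    // (File-based streaming sources require an explicit schema.)
    val streamingEvents = spark.readStream
      .schema(staticEvents.schema)
      .json("/data/events/")

    // Identical DataFrame logic works on both; "userId" is an assumed field.
    val staticCounts = staticEvents.groupBy("userId").count()
    val streamingCounts = streamingEvents.groupBy("userId").count()

    // The batch query runs once over the data already present.
    staticCounts.show()

    // The streaming query keeps the same aggregation continuously updated.
    val query = streamingCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

The point of the sketch is that the aggregation itself is written only once; whether it runs as a one-shot batch job or as a continuously updated result depends solely on how the input DataFrame was created.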
In addition to helping programmers work seamlessly with data arriving in a wide range of formats and speeds, Spark is also designed with machine learning in mind. The platform can train models to handle certain functions and scenarios as they arise, which takes a great deal of pressure off developers who would otherwise have to hand-code and continuously update that logic themselves.
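As a rough illustration of that built-in machine learning support, the sketch below trains a model with Spark MLlib's DataFrame-based Pipeline API. The dataset path and the column names ("label", "f1", "f2") are assumptions made for the example, not details from the interview.

```scala
import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object TrainModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("train-model-sketch")
      .getOrCreate()

    // Labeled training data as an ordinary DataFrame; path and columns are assumed.
    val training = spark.read.json("/data/labeled-events/")

    // Combine the assumed numeric columns "f1" and "f2" into one feature vector.
    val assembler = new VectorAssembler()
      .setInputCols(Array("f1", "f2"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("label")
      .setFeaturesCol("features")

    // Fitting the pipeline learns the model from the data itself,
    // rather than relying on hand-written rules.
    val model = new Pipeline()
      .setStages(Array[PipelineStage](assembler, lr))
      .fit(training)

    // The fitted model scores records with the same DataFrame API.
    model.transform(training).select("label", "prediction").show(5)

    spark.stop()
  }
}
```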
“We are going to train people to be better with data science, but also be building better tools,” said Gilbert.
Watch the full interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of Spark Summit 2016.