

As IT environments have shifted from relatively simple to diverse, vast and complex, IBM Analytics works to unravel that complexity for its customers.
Paul Gillin, cohost of theCUBE, from the SiliconANGLE Media team, spoke with Sean Poulley, VP of product and offering management for IBM Analytics, during IBM Insight 2015 and asked if he saw Hadoop’s complexity as a barrier to customer experience.
“Without a doubt,” Poulley said. “When you look at Hadoop, it’s a series of 20 or more packages, sometimes integrated, sometimes not. That’s a degree of complexity in and of itself. Our idea was to create a standard set of Hadoop packages where a client could build analytical applications on Hadoop knowing that they could move it from one distribution to another. We also know that open source can be just as locked in and proprietary as any other company. That’s another form of complexity that’s holding our customers back.”
Regarding Apache Spark, Gillin wondered how far into the future Spark would take us. Poulley would like to believe Spark will function for years to come. “There are lower latency means of streaming analytics, but they come with a higher degree of skill requirement,” said Poulley. “The value of Spark, to me, is its abstraction layer across the data sources. It allows you to build one common analytics layer or set of analytics that can interrogate the data wherever it happens to be.”
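The abstraction-layer idea Poulley describes can be sketched in plain Python. This is not Spark code; Spark implements the pattern with DataFrames over connectors (Parquet, JDBC, Kafka and so on), while the toy `DataSource` classes and helper names below are hypothetical stand-ins used only to show one set of analytics interrogating data wherever it lives.

```python
# Toy illustration of "one analytics layer across data sources."
# All class and function names here are invented for the sketch;
# real Spark exposes this idea through its DataFrame API instead.

class DataSource:
    """Uniform interface: every source yields rows as dicts."""
    def rows(self):
        raise NotImplementedError

class CsvSource(DataSource):
    """Rows parsed from CSV text (standing in for a file on HDFS)."""
    def __init__(self, text):
        header, *lines = text.strip().splitlines()
        self.cols = header.split(",")
        self.lines = lines
    def rows(self):
        for line in self.lines:
            yield dict(zip(self.cols, line.split(",")))

class InMemorySource(DataSource):
    """Rows already in memory (standing in for a NoSQL store)."""
    def __init__(self, records):
        self.records = records
    def rows(self):
        yield from self.records

def count_where(sources, predicate):
    # One analytic, written once, run against every source uniformly.
    return sum(1 for src in sources for row in src.rows() if predicate(row))

csv = CsvSource("region,sales\nEMEA,10\nAPAC,7")
mem = InMemorySource([{"region": "EMEA", "sales": "3"}])
print(count_where([csv, mem], lambda r: r["region"] == "EMEA"))  # prints 2
```

The analytic never cares which backend a row came from; adding a new source means writing one adapter, not rewriting the analytics, which is the portability point Poulley is making.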
Watch the full video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of IBM Insight 2015. And join in on the conversation by CrowdChatting with theCUBE hosts.