UPDATED 15:25 EDT / FEBRUARY 15 2017

BIG DATA

Mainframe revival: IBM refreshes legacy business with machine learning, Linux | #IBMML

Civilization rests firmly on the mainframe. These massive computers run banking systems, weave the financial webs that hold nations together and control infrastructure at every level. Yet, these beasts must also be modernized.

IBM Corp. believes that Linux and Apache Spark are key to linking mainframes with modern big data technology. “We recognized a few years back that Spark was going to be key to our platform,” said Barry Baker (pictured), vice president of offering management for z Systems and LinuxONE at IBM.

To aid companies in adopting its approach to big data, IBM has opened access to Watson’s machine learning capabilities.

To learn more about IBM’s work with Spark and Linux, Dave Vellante (@dvellante) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile live-streaming studio, visited the IBM Machine Learning Launch Event in New York. There, they spoke with Baker. (*Disclosure below.)

The connection between big data and mainframes

The conversation opened with Baker describing a use case for big data on a mainframe. For a workload to make sense on the platform, the bulk of the data needs to live there, he stated. Much of the data companies want to perform machine learning on is already resident on the mainframe, but there is other data out there as well. It’s about taking a filtered subset of that data and running the analytics where it makes sense, he continued.
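To make that concrete, here is a minimal PySpark sketch of the “filtered subset” idea: push a narrow query down to the source system so only the rows the analytics actually need cross the wire, then aggregate them in place. This assumes the source is reachable over JDBC and the appropriate driver is on Spark’s classpath; the connection URL, table and column names are hypothetical placeholders, not an actual IBM configuration.

```python
# Hypothetical sketch: pull only a filtered subset over JDBC and analyze it with Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("filtered-subset-analytics").getOrCreate()

# The subquery runs on the source database, so only recent rows are transferred.
subset_query = """
    (SELECT account_id, txn_amount, txn_date
       FROM CORE.TRANSACTIONS
      WHERE txn_date >= CURRENT DATE - 90 DAYS) AS recent_txns
"""

recent_txns = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://mainframe.example.com:446/PRODDB")  # placeholder URL
    .option("dbtable", subset_query)
    .option("user", "sparkuser")
    .option("password", "********")
    .load()
)

# Run the analytics next to the data: a simple per-account aggregate as a stand-in
# for a heavier feature-engineering or machine learning step.
features = (
    recent_txns.groupBy("account_id")
    .agg(F.count("*").alias("txn_count"),
         F.avg("txn_amount").alias("avg_amount"))
)

features.show(10)
```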

Spark and Linux play a strong role in making that happen. Linux is one of the fastest-growing workloads on the platform, Baker mentioned. “In just a few months we’ve been able to take the cloud-based IBM Watson offering and make it run because of our investment in Spark,” he added.

Modernizing mainframes is also a big part of what IBM is doing. “The very first step our clients take is moving toward standard APIs that allow assets to be exposed externally,” Baker explained. From there, clients build mobile and web applications that access those assets, an approach IBM calls “progressive modernization”: it’s not about replacing everything at once, he stated.
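As a rough illustration of that first step, the sketch below wraps a legacy lookup behind a standard REST endpoint so mobile and web clients can consume it over plain HTTPS and JSON. Flask is used purely for illustration; the route, the lookup_account() helper and the data it returns are hypothetical stand-ins, not an IBM interface.

```python
# Hypothetical sketch: expose an existing back-end asset through a standard REST API.
from typing import Optional

from flask import Flask, jsonify, abort

app = Flask(__name__)


def lookup_account(account_id: str) -> Optional[dict]:
    """Stand-in for a call into the existing system of record."""
    demo_accounts = {
        "1001": {"account_id": "1001", "balance": 2500.75, "currency": "USD"},
    }
    return demo_accounts.get(account_id)


@app.route("/api/v1/accounts/<account_id>", methods=["GET"])
def get_account(account_id: str):
    # Mobile and web clients call this endpoint with HTTPS and JSON,
    # never touching the back-end interfaces directly.
    account = lookup_account(account_id)
    if account is None:
        abort(404)
    return jsonify(account)


if __name__ == "__main__":
    app.run(port=8080)
```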

“We have a very strong point of view that says if this is data you can get value from, moving it off the platform is going to create problems for you,” Baker said.

For many use cases it makes sense to leave the data where it is and bring the analytics to the data. Many of the industries that rely on mainframes, such as banking and financial services, are heavily regulated, Baker mentioned. As soon as those organizations move data off their platforms, the regulatory problems get much bigger, he explained.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of the IBM Machine Learning Launch Event 2017 NYC. (*Disclosure: TheCUBE is a media partner at the conference. Neither IBM nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo by SiliconANGLE
