The last few years have seen IBM Corp. apply its Watson artificial intelligence to a wide variety of areas ranging from speech recognition to drug research. But the company can’t address every single use case alone, a limitation that it’s now looking to remove.
IBM today unveiled a standalone implementation of the machine learning technology powering Watson that will enable organizations to adapt its capabilities for their specific requirements. The IBM Machine Learning platform aims to reduce the amount of effort it takes to develop, train and deploy a custom analytics model.
“We are telling clients that you can get the power of machine learning across any type of data, whether it’s data in a warehouse, a database, unstructured content, email, you name it. We are bringing machine learning everywhere,” Rob Thomas, general manager of platform development at IBM Analytics, said in an interview today with theCUBE, SiliconANGLE Media Inc.’s mobile video studio. (* Disclosure below.)
One of the main selling points is a built-in recommendation engine designed to help data scientists more easily select algorithms for their projects. The mechanism works by evaluating what records a company wishes to process, how fast results are needed and various other operational parameters.
It then compares these details against a library of analytics algorithms to find the one that is most suitable for the task at hand. If it’s not a perfect fit, which can often be the case given the complexity of artificial intelligence projects, IBM Machine Learning enables data scientists to tweak the formula as needed.
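IBM hasn’t published the internals of the recommendation engine, but the selection step described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration: the `Algorithm` fields, the catalog entries and the scoring rules are invented, not IBM’s actual API.

```python
# Hypothetical sketch of an algorithm-recommendation step: filter a catalog
# of analytics algorithms by the workload's constraints, then rank the
# survivors. All names and rules are illustrative, not IBM's implementation.

from dataclasses import dataclass

@dataclass
class Algorithm:
    name: str
    max_rows: int        # largest dataset the algorithm handles well
    relative_speed: int  # 1 (slow) to 5 (fast)
    handles_text: bool   # can it process unstructured content?

CATALOG = [
    Algorithm("logistic_regression", max_rows=10_000_000, relative_speed=5, handles_text=False),
    Algorithm("gradient_boosted_trees", max_rows=1_000_000, relative_speed=3, handles_text=False),
    Algorithm("naive_bayes_text", max_rows=5_000_000, relative_speed=4, handles_text=True),
]

def recommend(rows: int, needs_text: bool, min_speed: int) -> list[str]:
    """Return catalog entries that satisfy the workload, fastest first."""
    candidates = [
        a for a in CATALOG
        if a.max_rows >= rows
        and a.relative_speed >= min_speed
        and (a.handles_text or not needs_text)
    ]
    # Among viable candidates, prefer the faster algorithms.
    candidates.sort(key=lambda a: a.relative_speed, reverse=True)
    return [a.name for a in candidates]
```

In this toy version, a 500,000-row text workload would surface only `naive_bayes_text`, while a structured workload of the same size would rank `logistic_regression` first. The data scientist could then tweak the top suggestion, as the article describes.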
The company says that models produced using its platform can be applied in a wide range of applications. IBM Machine Learning works with “any” programming language and supports many of the industry’s most popular AI frameworks, including SparkML, a project that the technology giant is actively supporting. Among other contributions, the company donated a tool called SystemML that can help optimize algorithm performance.
Longtime industry analyst Dave Vellante, co-chief executive of SiliconANGLE Media, said that the new capability extends the vision IBM laid out in 2015 with the launch of its z13 mainframe: bringing transaction and analytic workloads together to keep the mainframe relevant. “The next big challenge is taking this vision to a true hybrid cloud model across other platforms and IBM’s cloud,” he said.
IBM Machine Learning will first become available for the technology giant’s z Systems mainframes, which are used mainly by companies in regulated industries such as finance and healthcare. Big Blue plans to add support for its POWER server family further down the line to make the platform more widely accessible.
The company recently bolstered the lineup by adding a system called the S822LC for High Performance Computing that’s specifically geared towards artificial intelligence workloads. A few months later, Big Blue teamed up with Nvidia Corp. to introduce an algorithm development toolkit optimized to make the most out of the server. Adding IBM Machine Learning into the mix could help make the company’s POWER series even more appealing for organizations that are working on AI projects.
“You need to divide and conquer the machine learning problem where the data scientist can play, the business analyst can play the app developers can play, the data engineers can play, and that’s what we’re enabling,” Thomas said.
(* Disclosure: TheCUBE is a media partner at the conference. Neither IBM nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)