

Culling meaningful, usable data with predictive power out of the chaff heap is challenging enough; feeding enough of it to AI and machine-learning algorithms to produce real intelligence is proving even harder. Are there ways to make data tools smarter about sorting data so we don’t have to?
Dr. Sarah Cooper, chief operating officer at M2Mi Corp., spoke about the difficulty of figuring out which still-unknown processes data tools need to automate. “You’ve got a ton of data coming in — most of it is very low value,” she told John Furrier (@furrier), cohost of theCUBE, from the SiliconANGLE Media team, during the IBM Open Cloud Architecture Summit.
“There’s an incredible amount of intelligence that goes into filtering and determining what is actually valuable data and getting that up into the decision and interpretation tools like the Big Data, the machine learning, Spark, some of the streaming analytics as well,” Cooper explained.
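The shape of that filtering layer is easy to sketch. The Python fragment below is purely illustrative (the reading schema, the threshold rule and the send_to_analytics stub are assumptions, not anything M2Mi or IBM ships): it screens a stream of sensor readings and forwards only the records judged valuable to a downstream decision tool such as Spark.

    # Minimal sketch of an edge-side filter: most readings are low value,
    # so only the interesting ones are forwarded to analytics tools.
    # The reading schema, threshold rule, and sink below are illustrative
    # assumptions, not M2Mi's or IBM's actual pipeline.
    from typing import Iterable, Iterator

    def filter_valuable(readings: Iterable[dict],
                        threshold: float = 0.9) -> Iterator[dict]:
        """Yield only the readings worth sending upstream."""
        for reading in readings:
            # Stand-in "value" rule: flag temperatures far from a 20 C baseline.
            # A real deployment would encode domain logic or a trained model here.
            if abs(reading.get("temp_c", 20.0) - 20.0) / 20.0 > threshold:
                yield reading

    def send_to_analytics(reading: dict) -> None:
        # Hypothetical sink; in practice this might publish to Kafka,
        # Spark Streaming, or another decision and interpretation tool.
        print("forwarding:", reading)

    if __name__ == "__main__":
        stream = [
            {"sensor": "s1", "temp_c": 20.4},  # low value: near baseline, dropped
            {"sensor": "s2", "temp_c": 87.0},  # high value: anomaly, forwarded
            {"sensor": "s3", "temp_c": 19.8},  # low value, dropped
        ]
        for r in filter_valuable(stream):
            send_to_analytics(r)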
Timing specifically, figuring out what questions need to be asked today, next week, or in five years, “is dramatically complex in the Internet of Things,” she said.
“There’s a company out there called Resin.io that’s putting containers on embedded Linux so that you can do hot swap of applications on your end point. And that type of functionality pushes the DevOps model that we’ve all been talking about in the data center down towards the edge,” she said.
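The hot-swap pattern Cooper describes can be sketched with the Docker SDK for Python. The snippet below is a hedged illustration rather than Resin.io’s actual tooling; the image and container names are placeholders.

    # Sketch of the container hot-swap pattern Cooper describes: pull a new
    # application image, stop the old container, start the new one.
    # Uses the Docker SDK for Python (pip install docker); the image and
    # container names are placeholders, not Resin.io's actual workflow.
    import docker

    APP_NAME = "edge-app"                            # hypothetical container name
    NEW_IMAGE = "registry.example.com/edge-app:2.0"  # hypothetical image

    def hot_swap(client: docker.DockerClient) -> None:
        # Fetch the replacement image first, so downtime is limited to the
        # stop/start window rather than the whole download.
        client.images.pull(NEW_IMAGE)

        # Stop and remove the currently running application, if any.
        try:
            old = client.containers.get(APP_NAME)
            old.stop()
            old.remove()
        except docker.errors.NotFound:
            pass  # first deployment: nothing to replace

        # Start the new version under the same name.
        client.containers.run(NEW_IMAGE, name=APP_NAME, detach=True,
                              restart_policy={"Name": "always"})

    if __name__ == "__main__":
        hot_swap(docker.from_env())

Pulling the new image before stopping the old container keeps the swap window short, which matters on constrained edge devices, and it is that container-level workflow that lets data-center DevOps habits carry over to the endpoint.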
Cooper is optimistic that this development will make IoT easier to improve and execute. “And now you’ve got all of your IT guys who can move into IoT without the huge learning curve,” she concluded.
Watch the full interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of the IBM Open Cloud Architecture Summit.