When it comes to Big Data, it’s “one thing to be able to query it, but it’s another thing to be able to actually ask that data meaningful questions,” according to Revolution Analytics head of marketing and community David Smith. The executive dropped by theCUBE at SiliconANGLE’s recent Big Data NYC 2013 summit to discuss how his firm is helping customers make the most of their data.
Founded in 2007, Revolution Analytics provides software and services based on the open source “R” statistical computing language. The latest version of the company’s flagship platform “moves the computation to the data” in order to accelerate Hadoop processing and predictive modeling.
“We think about Hadoop as being a data storage mechanism, but also recognize that these Hadoop clusters have dozens, sometimes hundreds, of CPUs – computational processors – in them. If you can apply them to these predictive models, you’ve got this computational powerhouse – a massively parallel scheme – that you can use to actually build these models,” Smith explains.
With cloud services picking up steam in the enterprise, CIOs and practitioners are struggling to find the right solution for their use cases. To help companies future-proof IT investments, Revolution R Enterprise 7 provides “write once, deploy anywhere” functionality in the form of repurposed parallel external-memory algorithms.
Smith details that unlike the algorithms included in vanilla R, which are single-threaded, Revolution’s are optimized for processing multiple data sources in distributed parallel environments. The software enables data scientists to take full advantage of the computational capacity of Hadoop without having to worry about the plumbing.
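In practice, the “write once, deploy anywhere” idea means the same model-fitting call can run locally or be pushed down to a Hadoop cluster by switching the compute context. The sketch below illustrates that pattern with Revolution’s proprietary RevoScaleR package; the host names, file paths, and column names are illustrative placeholders, not taken from the interview.

```r
# Sketch only: assumes Revolution R Enterprise with the RevoScaleR
# package and a configured Hadoop cluster. All names (hosts, paths,
# variables) are hypothetical placeholders.
library(RevoScaleR)

# Develop locally: fit a linear model using one of RevoScaleR's
# parallel external-memory algorithms (unlike vanilla R's lm(),
# the data need not fit in memory).
localModel <- rxLinMod(ArrDelay ~ DayOfWeek, data = "airline.xdf")

# "Write once, deploy anywhere": point the compute context at Hadoop
# and the same call is executed as MapReduce jobs across the cluster,
# moving the computation to the data.
rxSetComputeContext(RxHadoopMR(sshUsername = "analyst",
                               sshHostname = "hadoop-edge-node"))
hdfsData <- RxTextData("/data/airline.csv",
                       fileSystem = RxHdfsFileSystem())
clusterModel <- rxLinMod(ArrDelay ~ DayOfWeek, data = hdfsData)
```

The key design point is that the modeling code itself is unchanged between the two runs; only the compute context differs, which is what frees data scientists from worrying about the plumbing.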
Watch the full interview for more on R, data science, and Revolution Analytics’ position in the Big Data food chain.