As cloud services mature, partnerships are becoming increasingly important for providing customers with exactly the services they want, while also remaining competitive on pricing. SnapLogic and Snowflake Computing are now partnering to simplify and accelerate data integration and analytics in the cloud. The partnership includes technology integration and joint go-to-market activities to help organizations harness all data for new insights, better decisions and better business outcomes.
At this year’s AWS re:Invent conference in Las Vegas, NV, Ravi Dharnikota, head of enterprise architecture and big data practice at SnapLogic Inc., and Matthew Glickman, VP of product at Snowflake Computing, sat down with John Furrier (@furrier), co-host of theCUBE*, from the SiliconANGLE Media team, to talk about the details of their companies’ alliance and what they see as some of the big challenges of integration. (*Disclosure below)
“SnapLogic and Snowflake Computing’s stories are so aligned, from being cloud-native, from bringing self-service to data integration, and getting to quick insights,” Dharnikota said, pointing to those similarities as the foundation of their partnership and the reason for them to align their product functionalities.
Glickman pointed to cloud-native design as the key strength of the alliance: "architectures that are built that way just can perform better than architectures that were ported from on-prem and moved into the cloud," he said.
Glickman also explained some of the technical design behind Snowflake's architecture, which separates compute, storage and metadata from one another. The advantage, he said, is that customers use only the compute they actually need, while every user can work against the same data simultaneously: importing data from partners into a single shared store and analyzing the entire collection in one pass.
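The separation Glickman describes can be illustrated with a toy sketch. This is a hypothetical model, not Snowflake's actual implementation or API: one shared storage layer holds a single copy of the data, and any number of independent compute clusters can query it concurrently without copying it.

```python
# Toy sketch (illustrative only, not Snowflake's implementation) of
# separating storage from compute: one shared dataset, many clusters.
from dataclasses import dataclass, field


@dataclass
class SharedStorage:
    """One copy of the data, visible to every compute cluster."""
    tables: dict = field(default_factory=dict)

    def load(self, name, rows):
        # e.g. data imported from a partner lands in the shared store
        self.tables[name] = rows


@dataclass
class ComputeCluster:
    """Independently sized compute; reads shared storage, holds no data."""
    name: str
    storage: SharedStorage

    def query(self, table, predicate):
        # Filter over the single shared copy of the table
        return [r for r in self.storage.tables[table] if predicate(r)]


storage = SharedStorage()
storage.load("sales", [{"region": "EU", "amt": 10}, {"region": "US", "amt": 25}])

# Two teams spin up separate clusters against the same data; neither
# copies it, and each can be sized or shut down independently.
etl = ComputeCluster("etl", storage)
analytics = ComputeCluster("analytics", storage)

print(analytics.query("sales", lambda r: r["amt"] > 15))
```

Because the clusters share storage rather than each holding a copy, analysis always runs over the entire, consistent collection, which is the property Glickman emphasizes next.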
“Making decisions on consistent data is the thing that, at some point, people gave up on, because they didn’t think it was possible to get the scalability and the concurrency you needed,” Glickman said. With the capabilities of this partnership, they’re trying to turn that way of thinking around, he said.
At the same time, there are challenges in the cloud environment, both from technical limitations and user misperceptions, Glickman acknowledged. “People think about the cloud enabling all this compute that you can have, and all these problems you can solve, but they underestimate what has to happen, and being able to handle that concurrency,” he stated.
But as customers become more familiar with cloud environments, and with what can be easily accomplished in them, some of those drawbacks are becoming less severe. “What we’re finding is our customers, now that they’ve aggregated this data in one place, they want to run their actual business on that same data,” Glickman said, noting that running the business on the data confirms it is valid and worth retaining.
Taking a broader look at how things are developing, Dharnikota noted that enterprise integration has to accommodate the data customers already have. “So we provide the flexibility to do either a batch-type look back, and then do analytics on that, or real-time, by connecting to some of the real-time streaming engines, or do predictive analysis.” As “the plumbers,” he added, their main concern is handling data in whatever form their customers want to use.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of AWS re:Invent. (*Disclosure: AWS and other companies sponsor some AWS re:Invent segments on SiliconANGLE Media’s theCUBE. Neither AWS nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)