Can computing bottlenecks and storage costs be slashed in a single stroke by DataOps built on read/write-capable virtual data?
The database — and physical data in general — often chokes infrastructure agility with its manual procedures and costly space requirements, according to Kellyn Pot’Vin-Gorman (pictured), technical intelligence manager for the office of the chief technology officer at Delphix Corp.
“DataOps is the idea that you automate all of this — and if you virtualize that data, we found, with Delphix, that removed that last hurdle,” Pot’Vin-Gorman said during the Data Platforms event in Litchfield Park, Arizona, where she took the stage to speak on DataOps for big data.
“When you talk about relational data or any kind of legacy data store, people are duplicating that through archaic processes,” she told Jeff Frick (@JeffFrick) and George Gilbert (@ggilbert41), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. (* Disclosure below.)
The result is siloed data that developers and others are constantly butting up against and working around, she explained. Delphix breaks this bottleneck by packaging virtualized data into containers (a virtual method for running distributed applications) that can be deployed rapidly to multiple on-premises or cloud environments.
The virtual data is fully read- and write-capable and is updated through snapshots that can be thought of as a “perpetual recovery state inside our Delphix engine,” she explained.
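To make that mechanism concrete, here is a minimal, hypothetical sketch of copy-on-write snapshotting, the general technique that lets a virtual copy be both writable and nearly free to create. The class and method names below are illustrative only and are not the Delphix engine’s actual API.

```python
# Minimal copy-on-write sketch: virtual copies share unchanged blocks
# with an immutable snapshot and privately store only the blocks they
# modify. All names here are illustrative, not any vendor's real API.

class Snapshot:
    """An immutable point-in-time image of a physical data source."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block_id -> bytes

class VirtualCopy:
    """A read/write view layered on top of a shared snapshot."""
    def __init__(self, snapshot):
        self.base = snapshot
        self.delta = {}  # only the blocks this copy has changed

    def read(self, block_id):
        # Reads fall through to the shared base unless overwritten here.
        return self.delta.get(block_id, self.base.blocks.get(block_id))

    def write(self, block_id, data):
        # Writes land in this copy's private delta; the base never changes.
        self.delta[block_id] = data

# One physical snapshot can back many writable environments.
prod = Snapshot({0: b"customers", 1: b"orders"})
dev = VirtualCopy(prod)   # near-zero extra storage at creation time
test = VirtualCopy(prod)

dev.write(1, b"orders-masked")
assert test.read(1) == b"orders"        # unaffected by dev's change
assert dev.read(1) == b"orders-masked"  # sees its own write
```

The design point is that provisioning another environment costs almost nothing up front; storage is consumed only as each copy diverges from the shared snapshot.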
The implications for big data in the cloud, where storage still costs more than it does on-premises, should be clear; even so, Delphix is only now venturing into that territory, Pot’Vin-Gorman stated.
“We haven’t really talked to a lot of big data companies. We have been very relational over a period of time,” she said. Now customers are telling Delphix that their data stores have grown to bona fide big data proportions, and they need fitting solutions.
Many open-source big data projects are good candidates for DataOps because of their many moving pieces, Pot’Vin-Gorman stated. Containerizing them and deploying them just once, with virtualized data that appears to be deployed in every environment, could save enormous effort, she said.
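As a rough illustration of that “deploy once, appear everywhere” idea, the sketch below provisions a virtual copy of one snapshot for each target environment. The environment names, the HDFS path and the helper functions are hypothetical stubs for a DataOps automation step, not any real product’s API.

```python
# Hypothetical DataOps provisioning loop: snapshot the physical source
# once, then hand every environment its own lightweight virtual copy.
# All names and paths below are illustrative stubs.

ENVIRONMENTS = ["dev", "qa", "staging", "analytics"]

def take_snapshot(source_path: str) -> dict:
    # Stub: capture a point-in-time image of the physical data set.
    return {"source": source_path, "version": "v1"}

def provision_virtual_copy(snapshot: dict, env: str) -> dict:
    # Stub: attach a read/write virtual copy to one environment.
    # No blocks are duplicated until the environment writes to them.
    return {"env": env, **snapshot}

snap = take_snapshot("hdfs://warehouse/events")

# One deployment step yields what looks like a full copy everywhere.
for env in ENVIRONMENTS:
    copy = provision_virtual_copy(snap, env)
    print(f"{copy['env']}: virtual copy of {copy['source']} ready")
```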
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of Data Platforms 2017. (* Disclosure: TheCUBE is a paid media partner for Data Platforms 2017. Neither Qubole Inc. nor other sponsors have editorial influence on theCUBE or SiliconANGLE.)