

Having data is one thing; having it where you want it is another. Moving data at scale is a problem that causes headaches for even the most experienced, data-first companies. It’s not like there’s a “man with a van” to call for fast, secure transport from the data center to the cloud … or multiple clouds. Or maybe there is.
“Data really is the lifeblood of organizations today, and if that stops moving, or stops circulating, well, there’s a problem,” said Anthony Brooks-Williams (pictured), chief executive officer of HVR Software Inc. “So what we do is we move critical business data around these organizations.”
Brooks-Williams spoke with Dave Vellante, host of theCUBE, SiliconANGLE Media’s livestreaming studio, for a digital CUBE Conversation about HVR’s capabilities for high-volume data replication powered by log-based change data capture. (* Disclosure below.)
[Editor’s note: The following content has been condensed for clarity.]
So we should think about you as a high-speed kind of data mover; efficiency at scale. Is that right?
Brooks-Williams: At our core, we are CDC — change data capture — moving incremental workloads of data, moving the updates across the network, combined with the distributed architecture that’s highly flexible and extensible. So it’s moving as much data as possible but in a very efficient way.
So really the problem that you’re solving is getting all that data to a place where it actually can be acted on and turned into insights.
Brooks-Williams: Absolutely. Data is created in a number of different source systems, and our ability to support each of those in this very efficient way, using techniques such as CDC, is to go and capture the data at source and then move it together into some consolidated platform where [a company] can do the type of analysis they need to do on that. And, obviously, the cloud is the predominant target system of choice.
So we support a number of different technologies in there. But yes, it’s about getting all that data together so they can make decisions on all areas of the business.
It’s hard to move data at scale. So what’s the secret sauce that allows you to be so effective at this?
Brooks-Williams: It starts with how you are going to acquire data. You want to do that in the least obtrusive way to the database. So we’ll actually go in and we read the transaction logs of each of these databases. Then, if you want to move data across a wide area network, the technique that a few companies use, such as ourselves, is change data capture. And you’re moving incremental updates, incremental workloads, the change data across a network.
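The idea of log-based change data capture can be sketched in a few lines. This is a hypothetical illustration, not HVR's implementation: real CDC tools read the database's own transaction log (Oracle redo logs, PostgreSQL's write-ahead log and so on), while here a simple in-memory list of log records stands in for that, and the reader ships only changes past its last captured position.

```python
# Minimal sketch of log-based change data capture (CDC).
# Hypothetical: a real tool reads the database transaction log;
# a plain list of records stands in for it here.
from dataclasses import dataclass
from typing import Iterator

@dataclass
class LogRecord:
    lsn: int    # log sequence number: position in the transaction log
    op: str     # "INSERT", "UPDATE", or "DELETE"
    table: str
    row: dict

@dataclass
class CdcReader:
    log: list          # stand-in for the database's transaction log
    position: int = 0  # last LSN already shipped downstream

    def poll(self) -> Iterator[LogRecord]:
        """Yield only changes newer than the last shipped position."""
        for rec in self.log:
            if rec.lsn > self.position:
                self.position = rec.lsn  # advance the capture position
                yield rec

log = [
    LogRecord(1, "INSERT", "orders", {"id": 1, "qty": 5}),
    LogRecord(2, "UPDATE", "orders", {"id": 1, "qty": 7}),
]
reader = CdcReader(log)
batch = list(reader.poll())   # first poll ships both existing changes
log.append(LogRecord(3, "DELETE", "orders", {"id": 1}))
delta = list(reader.poll())   # second poll ships only the new delete
```

Because only the incremental records cross the wire, the load on the source database and the network stays small, which is the "least obtrusive" property Brooks-Williams describes.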
But then combine that with the compression techniques that we use and a very distributed architecture, and you see how that really fits in today’s world of real time and cloud. Those are table-stakes things.
Now, of course, you’ve got to initially seed the target. You use data-reduction techniques so that you’re minimizing the amount of data. And then what?
Brooks-Williams: There’s an initial step where you take a copy of all of that data that sits in that source system and replicate it over to the target system. Then you turn on that CDC mechanism, which is then moving that change data. At the same time, you’re compressing it, you’re encrypting it, you’re making sure it’s highly secure, and loading that in the most efficient way into their target systems.
The key thing is that we also have this compare-and-repair ability that’s built into the product. So we will move data across, and we make sure that data that gets moved from A to B is absolutely accurate. People want to know that their data can move fast, they want it to be efficient, but they also want it to be secure. They want to know that they have peace of mind to make decisions on accurate data. The whole aim is to make sure that the customer feels safe, that the data that is moving is highly secure.
Are you running in the cloud, on-premises, or both? Or across multiple clouds? How does that work?
Brooks-Williams: All of the above. What we see today is that the majority of the data is still generated on-prem, and the majority of the targets we see are in the cloud. So, absolutely, we can support cloud to cloud; we can support on-prem to cloud. The source and target systems can sit on-prem or in the cloud.
We often say data is plentiful, insights aren’t. You’re betting on data; that’s kind of the premise here, that the data is going to continue to grow. How’s it looking?
Brooks-Williams: We had our best quarter ever in Q2 — 193% year-over-year growth. We’ve been building this engine for a few years now, and it’s really clicked into gear due to COVID. Projects that would have taken nine or 12 months to happen are taking a month or two now. It’s been getting driven down from the top.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations. (* Disclosure: HVR Software Inc. sponsored this CUBE Conversation. Neither HVR nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)