

Multicloud is already a reality for a growing number of enterprises, and now they’re starting to demand more consistent experiences across the various public cloud platforms they use.
One of the best ways to deliver that consistency is to leave data in place and access it where it's stored rather than moving it around, bringing a cloudlike experience to the data itself. That's what startup Vcinity Inc. is trying to do with its new "data access on demand" service.
“Latency matters — in fact, it’s critical — and the ability to access data at high speeds, wherever that data lives, will be a fundamental tenet of multicloud,” Wikibon Chief Analyst Dave Vellante said during a recent interview with Vcinity on SiliconANGLE’s mobile livestreaming studio theCUBE.
Vellante was joined in the studio by Craig Hibbert, vice president of enterprise markets at Vcinity, which has set out to reinvent wide area networks that span multiple data centers across the world. It's doing so by transforming those data centers into what is essentially a distributed global local area network that serves up data far more quickly than was previously possible.
Hibbert explained to Vellante how Vcinity employs a number of tricks to speed up data access across networks. The premise, he said, is to keep the benefits of the traditional Transmission Control Protocol/Internet Protocol, which helps to ensure data arrives where it's meant to be in one piece, while stripping away its negative aspects, such as dropped packets.
“Typically, WANs are lossy networks,” Hibbert said. “Most people are accustomed to Fibre Channel networks, which of course are lossless. So what we’ve done is taken the beauty of TCP/IP but removed the hindrances to it, and that’s how we get to function at the same speeds as LAN over a WAN.”
Vcinity holds more than 30 patents on the proprietary technology it has developed to speed up data access, Hibbert said. He only touched on the technical aspects of how these all come together, but the end result is that Vcinity’s technology can deliver information across the web at InfiniBand-like speeds.
“For instance, between our Maryland office and our San Jose office, it’s a 60-millisecond round-trip delay time,” Hibbert said. “And we can’t get beyond that. We can’t cheat physics, but what we can do is sometimes deliver a 20-times payload inside that same RTT, so in essence you could argue that we’re beating the speed of light by delivering a higher payload.”
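To make that arithmetic concrete, here is a rough sketch of the bandwidth-delay-product reasoning behind the claim. The link speed and window sizes below are illustrative assumptions, not figures from Vcinity; only the 60-millisecond round trip comes from the quote.

```python
# Toy arithmetic only: shows why "more payload per round trip" translates into
# higher effective throughput on a long path. Link speed and window sizes are
# assumed example values, not Vcinity figures.

RTT_S = 0.060            # 60 ms coast-to-coast round trip, per the quote
LINK_BPS = 10e9          # assume a 10 Gbit/s WAN link

# The most data that can be "in flight" on this path at any moment:
bdp_bytes = LINK_BPS / 8 * RTT_S           # ~75 MB for a 10 Gb/s, 60 ms path

# A sender limited to a small window (say, 3 MB) can push at most one window
# per RTT, no matter how fast the link is:
small_window = 3 * 1024**2
throughput_small = small_window / RTT_S    # bytes per second

# Delivering ~20x the payload inside the same RTT means filling far more of
# the pipe on every round trip:
throughput_filled = 20 * small_window / RTT_S

print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")
print(f"Small-window throughput: {throughput_small * 8 / 1e9:.2f} Gbit/s")
print(f"20x payload per RTT:     {throughput_filled * 8 / 1e9:.2f} Gbit/s")
```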
And there are numerous organizations in the world that could benefit from this kind of superfast data access, Hibbert said. He gave the example of a customer involved in seismic exploration that generates massive amounts of data on a daily basis. Previously, getting any insights from that data was extremely cumbersome, since it required bringing the data back to shore, copying it to a disk array and from there to multiple disk arrays around the world so people could analyze it.
With Vcinity, things are much faster, Hibbert said. For example, the data can now be accessed at LAN-like speeds over a satellite connection almost as soon as it's generated.
Hibbert said Vcinity’s way of doing things is notably different from competitors that do WAN acceleration, which usually involves installing a special appliance that optimizes bandwidth to speed up data transfer times.
“One of the problems [with WAN acceleration] is it’s predicated on substantial caching of data,” Hibbert said. “The problem with that is once you turn on encryption, that compression and those deduplication or data reduction technologies are hampered in caching. But we have double layers of encrypted data and that does not affect our performance. So the massive underlying technological differences allow you to adapt to the modern world with encrypted data.”
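As a generic illustration of that point, and not a description of Vcinity's or any vendor's actual pipeline, the sketch below shows why deduplication-style caching stops finding matches once data is encrypted: identical plaintext chunks produce identical fingerprints, but encrypting each chunk with a fresh nonce makes every copy look unique to the cache. The encryption here is a deliberately simplified stand-in.

```python
# Toy illustration of why encryption hampers dedup-based WAN caching.
# The "encryption" below is a simplified stand-in (nonce + XOR keystream),
# used only to show that identical plaintext no longer produces identical bytes.
import hashlib
import os

chunks = [b"block-A" * 1000, b"block-B" * 1000, b"block-A" * 1000]  # block-A repeats

def toy_encrypt(chunk: bytes) -> bytes:
    # Stand-in for real authenticated encryption with a random per-chunk nonce.
    nonce = os.urandom(16)
    keystream = hashlib.sha256(nonce).digest()
    body = bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(chunk))
    return nonce + body

def unique_fingerprints(blobs) -> int:
    # A dedup cache keys on content fingerprints; fewer unique fingerprints
    # means more chunks it can avoid resending.
    return len({hashlib.sha256(b).digest() for b in blobs})

print("plaintext chunks, unique fingerprints:", unique_fingerprints(chunks))          # 2 of 3
print("encrypted chunks, unique fingerprints:",
      unique_fingerprints([toy_encrypt(c) for c in chunks]))                          # 3 of 3
```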
Those abilities make Vcinity a seemingly perfect fit for the multicloud world, where companies can benefit from leaving their data in place and bringing different cloud services to that data. It can also benefit companies that are forced to leave data in specific geographic regions due to regulations such as Europe’s General Data Protection Regulation.
Another interesting use case for Vcinity is facilitating the remote use of supercomputers for things such as drug discovery, Hibbert said.
“If you think of pharmaceutical companies that have lots of data to process, whether it’s electromicroscopic data or nanotissue samples, they need heavy iron to do that,” he said, referring to supercomputers. “So, we can facilitate the ability to rent out supercomputers, and the pharmaceutical company is happy to do that because its data is not leaving the four walls. Just present the data and run it live, because we’re getting LAN speeds.”
The only downside to Vcinity’s technology is a small performance penalty on the first byte read for each new data set, but Hibbert said that would not be a problem for most customers. He used the analogy of a garden hose, which takes a few seconds to fill with water before any comes out the other end. But once the water starts flowing, the stream remains constant, and that’s what Vcinity does with data.
“With TCP/IP, it’d be stop, start, stop, start, stop, start,” Hibbert said. “If you have to start doing retransmits, which is a regular occurrence with TCP/IP, the entire capacity of that garden hose will be dropped and then refilled. This is where our advantage is, the ability to keep that full and keep serving data.”
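Here is a rough toy model of that garden-hose effect, comparing a transfer whose in-flight window collapses and refills after each loss, roughly how classic TCP behaves, with one that keeps the pipe full. The loss frequency, pipe size and round-trip time are assumed values for illustration, not measurements.

```python
# Toy model only: contrasts "stop-and-refill" behavior with keeping the pipe
# full. All parameters are assumptions chosen for illustration.

RTT_S = 0.060                # 60 ms round trip
PIPE_BYTES = 75e6            # data the path holds when full (see earlier sketch)
ROUNDS = 200                 # number of round trips to simulate
LOSS_EVERY = 25              # assume a loss event every 25 round trips

def simulate(keep_full: bool) -> float:
    window = PIPE_BYTES
    delivered = 0.0
    for r in range(ROUNDS):
        delivered += window
        if not keep_full and r % LOSS_EVERY == 0:
            window = PIPE_BYTES / 2                                  # window collapses on loss...
        elif not keep_full:
            window = min(PIPE_BYTES, window + PIPE_BYTES / LOSS_EVERY)  # ...then slowly refills
    return delivered / (ROUNDS * RTT_S)                              # average bytes per second

print(f"stop-and-refill: {simulate(False) * 8 / 1e9:.1f} Gbit/s")
print(f"pipe kept full:  {simulate(True) * 8 / 1e9:.1f} Gbit/s")
```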
Here’s the full interview: