How Google’s New Cloud Storage Feature is Helping Developers Move Data Faster
Google announced new features for its Cloud Storage service, making objects easier to manage and data faster to access and upload.
The search giant introduced Object Lifecycle Management, which lets users configure auto-deletion policies for their objects and can be combined with Object Versioning to limit the number of older object versions that are retained; Regional Buckets, which let users co-locate Durable Reduced Availability data in the same region as their Google Compute Engine instances to keep data near computation; and gsutil version 3.34, which now automatically uploads large objects in parallel composite pieces for higher throughput.
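Lifecycle policies like the auto-deletion rules mentioned above are defined per bucket as a small JSON document. A minimal sketch of such a policy (the 30-day age threshold and the version cap here are illustrative values, not anything Google prescribes) might look like:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 3}
    }
  ]
}
```

The first rule deletes objects older than 30 days; the second, which only takes effect on a bucket with Object Versioning enabled, trims all but the three most recent versions of each object. Saved as `lifecycle.json`, a policy like this could be applied with something along the lines of `gsutil lifecycle set lifecycle.json gs://your-bucket`, assuming a gsutil version that supports the lifecycle command.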
SiliconANGLE Founding Editor Mark “Rizzn” Hopkins states that Google’s effort doesn’t bring its cloud service fully up to par with Amazon’s S3, but it does signify that the service has matured, and its new features could encourage more developers to use it. Some may think it’s a play to emulate Amazon’s services, but Hopkins’ sources say there’s a deeper reason for the upgrades.
“Not to emulate, but to come up to feature parity, for certain. This is one of the things that I talked about over at Google I/O. A lot of the developers among Google partners had a multi-vendor strategy, meaning that they’re in the Google ecosystem, they use Google Compute Engine for specialized purposes, but they also have another leg firmly entrenched in the Amazon Web Services ecosystem for a variety of reasons, one of which being maybe they started out there. It’s like turning a battleship sometimes when you’re moving large amounts of data or a specially designed app that lives in a certain type of ecosystem. It may not be compatible, or easily made compatible, with Google’s ecosystem.
“But another reason that Google has made this play, the other thing that I heard, is that the storage is sorely lacking. Because of the lack of robust features in the Google Storage feature set, there were a lot of users that weren’t using Google’s services to store their data, and thus it was making it difficult to design apps that live in the Google ecosystem but have to pull data from the Amazon ecosystem, which makes everything run slower when you’re pulling across ecosystems. But if you’re pulling within the same ecosystem, of course it’s gonna go a lot faster – inside the same data center the data has to travel much shorter distances,” Hopkins explained.
Hopkins mentions that there may be a hidden benefit in Regional Buckets: Google could place servers outside certain jurisdiction lines so that your data is better protected and can’t be easily obtained by authorities. Though this benefit may exist, it may not be of much help right now, since the feature will only be available in the US and EU, which pretty much cooperate with one another when it comes to extradition.
For more of Hopkins’ Breaking Analysis, check out the NewsDesk video below: