Google Tweaks Its Cloud Storage Platform for Enriched Dev Data Management
The Google Cloud Platform offers a broad set of application development, cloud storage, large-scale computing, and big data capabilities. To give developers easier object handling and faster data access, Google has introduced three new features on its Cloud Storage platform that accelerate and simplify the management of data and objects.
The additions include Object Lifecycle Management, Regional Buckets, and automatic parallel composite uploads via gsutil. The new features aim to make it easier for developers to manage, access, and upload data in the cloud.
Rich Enhancements for Developers
The Google Cloud Storage platform enables developers to implement cloud app solutions, such as mobile apps, social apps, business process apps, and websites; cloud storage solutions, such as high-end backup and recovery, active archiving, global file sharing/collaboration, and primary SAN/NAS; and large-scale computing solutions, such as batch processing, data processing, and high-performance computing.
Just like Amazon Web Services, Google now offers Object Lifecycle Management, which lets developers configure auto-deletion policies for their objects so that objects are removed automatically when certain conditions are met.
According to a blog post by Brian Dorsey, a Google developer programs engineer, a “developer could configure a bucket so objects older than 365 days are deleted, or only keep the three most recent versions of objects in a versioned bucket. Once you have configured Lifecycle Management, the expected expiration time will be added to object metadata when possible, and all operations are logged in the access log.”
Object Lifecycle Management can be used in conjunction with Object Versioning to limit the number of older versions retained, while maintaining a cost-efficient level of protection against accidental data loss caused by application bugs or manual user errors.
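As a rough illustration of what such a policy can look like, here is a minimal sketch using the google-cloud-storage Python client (which postdates this article); the bucket name and thresholds are illustrative placeholders, not anything from Google's announcement.

```python
# Minimal sketch: configure lifecycle rules on a bucket with the
# google-cloud-storage Python client. The bucket name and the
# retention thresholds below are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # placeholder bucket

# Delete objects once they are older than 365 days.
bucket.add_lifecycle_delete_rule(age=365)

# In a versioned bucket, keep only the three most recent versions:
# delete a noncurrent version once three newer versions exist.
bucket.versioning_enabled = True
bucket.add_lifecycle_delete_rule(number_of_newer_versions=3)

bucket.patch()  # apply the updated lifecycle configuration
```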
The Regional Buckets feature allows developers to co-locate their Durable Reduced Availability data in the same region as their Google Compute Engine instances. Regional buckets can reduce latency and increase bandwidth to virtual machines, which is particularly appropriate for data-intensive computations.
Developers can of course still have ultimate control over which data centers are used, Dorsey wrote. “You can still specify the less-granular United States or European data center locations if you’d like your data spread over multiple regions, which may be a better fit for content distribution use cases.”
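A minimal sketch of creating a bucket pinned to a single region with the same Python client is shown below; the project, bucket name, and region are placeholders, and Durable Reduced Availability has since been superseded by newer storage classes.

```python
# Minimal sketch: create a regional bucket so it can be co-located
# with Compute Engine instances in the same region. Names are hypothetical.
from google.cloud import storage

client = storage.Client(project="my-example-project")

bucket = client.create_bucket("my-regional-bucket", location="us-central1")

# A multi-region location such as "US" or "EU" would instead spread
# the data across regions, which suits content-distribution use cases.
```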
Finally, the upload improvements in gsutil version 3.34 automatically upload large objects in parallel over multiple connections for higher throughput. The support is built on Composite Objects, since achieving maximum TCP throughput on most networks requires multiple connections.
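For readers curious about the underlying mechanism, here is a minimal sketch of composing separately uploaded pieces into a single object with the Python client; the bucket and object names are illustrative, and in practice gsutil handles the splitting, parallel upload, and compose steps automatically.

```python
# Minimal sketch: server-side composition of previously uploaded pieces
# into one logical object. Bucket and object names are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")

# Suppose the pieces of a large file were uploaded in parallel as
# separate objects; compose them into a single object without re-uploading.
pieces = [bucket.blob(name) for name in ("big.part0", "big.part1", "big.part2")]
target = bucket.blob("big")
target.compose(pieces)
```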
“If you’re managing temporary or versioned objects, running compute jobs over Cloud Storage data, or using gsutil to upload data, you’ll want to take advantage of these features right away,” Google says.
DevOps ANGLE
Google’s latest updates bring it closer to feature parity with Amazon Web Services (AWS) and Microsoft. SiliconANGLE Founding Editor Mark “Rizzn” Hopkins notes that Google’s latest effort may not win many developers’ hearts, but it does signify that the service has matured, and its new features could encourage more developers to use it.
“Not to emulate, but to come up to feature parity, for certain. This is one of the things that I talked about over at Google I/O. A lot of the developers on Google Partners had a multi-vendor strategy, meaning that they’re in the Google ecosystem, they use Google Compute Engine for specialized purposes, but they also have another leg firmly entrenched in the Amazon Web Services ecosystem for a variety of reasons. One of which being maybe they started out there; it’s like turning a battleship sometimes when you’re moving large amounts of data, or a specially designed app that lives in certain types of ecosystems may not be compatible, or easily made compatible, with Google’s ecosystem,” he says.
Hopkins mentions that, because of the lack of robust features in the Google Storage feature set, many users weren’t using Google’s services to store their data. That made it difficult to design apps that live in the Google ecosystem but have to pull data from the Amazon ecosystem, since pulling data across ecosystems slows everything down.
Earlier in June, Google opened its Google Maps Engine API to developers, letting them build consumer and business applications using the new features and flexibility of Google Maps. Google also unveiled a new Cloud Playground environment where developers can focus on building interesting applications instead of worrying about managing infrastructure.