UPDATED 16:47 EDT / SEPTEMBER 02 2021


AWS debuts new data storage capabilities at AWS Storage Day

Amazon Web Services Inc. today introduced a host of additions to its data storage portfolio, headlined by the new Amazon FSx for NetApp ONTAP service, a managed implementation of NetApp Inc.’s popular file system.

The cloud giant made the announcements at its annual AWS Storage Day virtual event.

NetApp ONTAP is a file system that companies use to manage the information produced by their mission-critical applications. It makes it easier for applications to access the data they need to carry out business tasks, and it performs various related jobs in the background, such as compressing records to free up storage capacity.

NetApp ONTAP was previously available on AWS, but companies had to set up deployments on their own and then manage day-to-day maintenance internally. AWS’ new Amazon FSx for NetApp ONTAP service promises to significantly reduce the amount of work involved in using the file system. The service allows administrators to set up a deployment in only a few minutes, the cloud giant says, and it automatically performs a range of maintenance tasks.

Amazon FSx for NetApp ONTAP could potentially help many enterprises accelerate their shift to the cloud. A large number of companies rely on on-premises NetApp ONTAP deployments to manage some or all of their mission-critical business data. By making it easier to run the file system in the cloud, AWS can reduce the complexity associated with moving on-premises data workloads to its platform and thus incentivize more firms to make the switch.

Amazon FSx for NetApp ONTAP “provisions the file servers and storage volumes, manages replication, installs software updates & patches, replaces misbehaving infrastructure components, manages failover, and much more,” detailed AWS Chief Evangelist Jeff Barr. 
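
For readers curious what that looks like in practice, creating a file system is a single API call. The following is a minimal sketch using the AWS SDK for Python (boto3); the subnet IDs, capacity and throughput figures are placeholders, not recommended values:

```python
# Minimal sketch: creating an Amazon FSx for NetApp ONTAP file system
# with boto3. All IDs and sizing values below are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,    # GiB of SSD storage
    StorageType="SSD",
    SubnetIds=["subnet-1111aaaa", "subnet-2222bbbb"],  # placeholder subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",      # replicate across two availability zones
        "PreferredSubnetId": "subnet-1111aaaa",
        "ThroughputCapacity": 512,           # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```

From that point on, AWS handles the provisioning, patching and failover work Barr described.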

The offering debuted alongside two updates to AWS’ popular Amazon S3 object storage service. 

The first update introduces a feature called Amazon S3 Multi-Region Access Points. It’s designed for building multi-region applications, or applications that store multiple copies of their information in several different AWS data centers. Having multiple copies of records ensures they will remain available even if one data center experiences an outage.

Companies also build multi-region applications to improve performance. If there are multiple copies of a workload’s information, a firm can enable each user to load the information from the data center closest to them and thereby reduce latency. 

Until now, creating multi-region applications on S3 required writing a fairly large amount of code to handle the logistics of managing the different data copies. With Amazon S3 Multi-Region Access Points, AWS is significantly reducing the amount of code required.

“With S3 Multi-Region Access Points, you can build multi-region applications with the same simple architecture used in a single region,” AWS Developer Advocate Alex Casalboni wrote in a blog post.
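
As a rough illustration of that simplicity, the sketch below creates a Multi-Region Access Point over two existing buckets using boto3. The account ID, bucket names and access point name are placeholders; note that Multi-Region Access Point control-plane calls are served out of the us-west-2 region, and data requests made through the resulting access point are signed with the newer SigV4A algorithm:

```python
# Minimal sketch: creating an S3 Multi-Region Access Point with boto3.
# Account ID, bucket names and the access point name are placeholders.
import uuid
import boto3

# Multi-Region Access Point control-plane requests route through us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

s3control.create_multi_region_access_point(
    AccountId="111122223333",           # placeholder account ID
    ClientToken=str(uuid.uuid4()),      # idempotency token
    Details={
        "Name": "my-global-app",        # hypothetical access point name
        "Regions": [
            {"Bucket": "my-app-data-us-east-1"},  # placeholder existing buckets
            {"Bucket": "my-app-data-eu-west-1"},
        ],
    },
)
# Applications then use the access point's ARN in place of a bucket name,
# and S3 routes each request to the nearest copy of the data.
```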

The second S3 update announced today upgrades an existing feature of the storage service called S3 Intelligent-Tiering. Companies that store their data in the service can choose from several storage hardware tiers that vary in price and performance. The S3 Intelligent-Tiering feature automatically places a company’s data on the most suitable hardware tier to optimize costs. 

As part of today’s update, AWS has removed certain S3 Intelligent-Tiering monitoring and automation charges related to data objects smaller than 128 kilobytes. The cloud giant is also removing the requirement to store objects for a minimum of 30 days.
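
Using the feature requires no new tooling: an object is simply uploaded with the Intelligent-Tiering storage class, as in this minimal boto3 sketch with placeholder bucket and key names:

```python
# Minimal sketch: storing an object in S3 Intelligent-Tiering with boto3.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-app-data",                 # placeholder bucket
    Key="logs/2021/09/02/events.json",    # placeholder key
    Body=b'{"event": "example"}',
    StorageClass="INTELLIGENT_TIERING",   # S3 shifts the object between tiers automatically
)
```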

Also today, the cloud giant said that it’s implementing a feature similar to S3 Intelligent-Tiering in another one of its storage services, the Amazon EFS file storage service. The aptly named Amazon EFS Intelligent-Tiering capability works much like its S3 counterpart, automatically moving companies’ data to the most suitable infrastructure tier.
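
On EFS, the capability is configured through lifecycle policies. A minimal boto3 sketch, with a placeholder file system ID and illustrative transition windows, might look like this:

```python
# Minimal sketch: enabling EFS Intelligent-Tiering via lifecycle policies
# in boto3. The file system ID is a placeholder; windows are illustrative.
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        # Move files not accessed for 30 days to the cheaper IA tier...
        {"TransitionToIA": "AFTER_30_DAYS"},
        # ...and move them back to standard storage on their first access.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```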

The cloud giant’s AWS Storage Day virtual event today saw the introduction of several smaller enhancements as well. An upgrade to Amazon EBS now allows companies to use the block storage service to create cloud backups of workloads containing as much as 64 terabytes of data. The AWS Transfer Family service for moving data to AWS, in turn, has been enhanced with low-code features that will reduce the amount of manual work required to copy information to the cloud.
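
On the EBS side, backing up even a very large volume remains a single API call. A minimal boto3 sketch with a placeholder volume ID:

```python
# Minimal sketch: backing up an EBS volume with a snapshot in boto3.
# The volume ID is a placeholder; per the announcement, such backups can
# now cover workloads with as much as 64 terabytes of data.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",     # placeholder volume ID
    Description="Nightly backup of database volume",
)
print(snapshot["SnapshotId"], snapshot["State"])
```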

