AWS integrates liquid cooling and simplifies electrical distribution to lower data center power consumption
Amazon Web Services Inc. said today at its annual re:Invent conference that it’s making some major changes to its cloud computing infrastructure. Among other things, it’s introducing a new liquid cooling system and simplifying the way electricity is distributed throughout its facilities.
The cloud computing giant said the updates help to make its infrastructure services four times more efficient than typical on-premises environments, positioning it to handle the increased computing requirements of next-generation artificial intelligence workloads.
Not only are its data centers more efficient, but they can also help to reduce the carbon footprint associated with AI and other computing workloads by up to 99%, the company claimed.
The biggest change is that AWS is starting to roll out a new liquid cooling system for its AI servers and some other high-performance systems, including those powered by its homegrown Trainium chips and Nvidia’s powerful graphics processing units.
Specifically, AWS said its new Trainium2 chips and Nvidia’s rack-scale GB200 NVL72 AI supercomputing system will both benefit from the new cooling setup. According to AWS, liquid cooling will help reduce mechanical energy consumption by up to 46% during peak cooling conditions, without requiring any additional water.
Notably, the cooling infrastructure will remain flexible, since the new systems can also fall back to air cooling when liquid isn’t required. That means the many servers in AWS’s data centers that perform regular computing tasks, or that handle networking and storage, for example, probably won’t be liquid-cooled. Those systems run on far less power-hungry central processing units, which don’t generate nearly as much heat, making liquid cooling overkill for them.
According to the company, “this flexible multimodal cooling design allows AWS to provide maximum performance and efficiency at the lowest cost, whether running traditional workloads or AI models.”
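To make the idea concrete, here’s a minimal Python sketch of how a multimodal scheme like this could decide between liquid and air cooling based on a rack’s heat load. It is purely illustrative: the 20-kilowatt threshold and the rack profiles are assumed values, not AWS figures or AWS’s actual control logic.

```python
# Hypothetical illustration of multimodal cooling mode selection.
# The threshold and rack heat loads are invented for this sketch and
# do not reflect AWS's actual designs or figures.

LIQUID_COOLING_THRESHOLD_KW = 20.0  # assumed heat load beyond which air alone falls short

def select_cooling_mode(rack_heat_load_kw: float) -> str:
    """Return 'liquid' for dense AI racks, 'air' for everything else."""
    return "liquid" if rack_heat_load_kw >= LIQUID_COOLING_THRESHOLD_KW else "air"

racks = {
    "trainium2-ai-rack": 45.0,    # dense accelerator rack (illustrative load in kW)
    "gb200-nvl72-rack": 120.0,    # rack-scale GPU system (illustrative load in kW)
    "cpu-general-rack": 8.0,      # conventional CPU workloads
    "storage-network-rack": 6.0,  # storage and networking gear
}

for name, load_kw in racks.items():
    print(f"{name}: {load_kw:.0f} kW -> {select_cooling_mode(load_kw)} cooling")
```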
Adding to the liquid cooling is a new, simplified electrical distribution system that’s said to reduce the number of potential failure points by up to 20%. It does this by minimizing the number of electrical conversions required, bringing backup power sources closer to the data center racks, and reducing the number of fans used to exhaust hot air from its servers. The company explained that it’s relying on natural pressure differentials instead of fans to expel that hot air, meaning more energy can be directed to the servers themselves and overall power consumption falls.
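To see why fewer conversion steps helps, note that end-to-end delivery efficiency is roughly the product of each stage’s efficiency. The short sketch below compares a longer chain with a shorter one; the stage counts and per-stage efficiencies are assumptions chosen for illustration, not AWS’s published numbers.

```python
# Back-of-the-envelope illustration: end-to-end efficiency is the product of
# per-stage efficiencies, so removing conversion stages lets more of the
# incoming power reach the servers. All figures below are assumed.

from math import prod

legacy_chain = [0.98, 0.97, 0.96, 0.95, 0.96]  # five conversion/distribution stages
simplified_chain = [0.98, 0.97, 0.96]          # fewer stages in a simplified design

legacy_eff = prod(legacy_chain)
simplified_eff = prod(simplified_chain)

print(f"Legacy path efficiency:     {legacy_eff:.1%}")
print(f"Simplified path efficiency: {simplified_eff:.1%}")
print(f"Extra power reaching servers: {simplified_eff / legacy_eff - 1:.1%}")
```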
There’s also a new power shelf design being implemented that will increase rack power density roughly sixfold over the next two years, followed by a further threefold increase after that. It will allow AWS to deliver 12% more compute power at every site it operates, helping to reduce the total number of facilities it needs.
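Taken at face value, those two steps compound to roughly an 18-fold density increase overall. A quick sketch of the arithmetic, using an assumed baseline rack power purely as a placeholder:

```python
# Compounding the density figures cited in the announcement.
# The 20 kW baseline is an assumed placeholder, not an AWS number.

baseline_rack_kw = 20.0  # assumed starting rack power density
first_step = 6           # ~6x increase expected over the next two years
second_step = 3          # further ~3x increase planned after that

after_first = baseline_rack_kw * first_step
after_second = after_first * second_step

print(f"After first step:  {after_first:.0f} kW per rack ({first_step}x)")
print(f"After second step: {after_second:.0f} kW per rack "
      f"({first_step * second_step}x cumulative)")
```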
The company explained that many of these changes are being made to power AI workloads, but it’s also using AI itself to improve the efficiency of its data center designs. It has created AI models that can design more efficient rack configurations within each facility it owns, reducing the amount of underutilized power. AI will also be integrated into a new control system for the electrical and mechanical equipment in its data centers, with built-in telemetry services enabling real-time diagnostics.
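AWS hasn’t described how its placement models work, but the general problem of reducing underutilized power looks a lot like bin packing: fitting server power draws into fixed rack power budgets so that less provisioned capacity sits idle. The sketch below uses a simple first-fit-decreasing heuristic as an illustration only; the server draws and the 17-kilowatt rack budget are made-up values, not AWS’s approach.

```python
# Illustrative first-fit-decreasing packing of server power draws (kW)
# into rack power budgets, to show what "reducing underutilized power"
# can mean. The numbers and heuristic are assumptions for this sketch,
# not AWS's actual placement model.

RACK_BUDGET_KW = 17.0
server_draws_kw = [8.5, 7.0, 6.5, 5.0, 4.5, 4.0, 3.5, 2.0]

racks = []  # each rack is a list of server draws (kW)
for draw in sorted(server_draws_kw, reverse=True):
    for rack in racks:
        if sum(rack) + draw <= RACK_BUDGET_KW:
            rack.append(draw)
            break
    else:
        racks.append([draw])  # open a new rack when nothing fits

total_capacity = len(racks) * RACK_BUDGET_KW
total_used = sum(server_draws_kw)
print(f"Racks used: {len(racks)}")
print(f"Stranded power: {total_capacity - total_used:.1f} kW "
      f"({1 - total_used / total_capacity:.0%} of provisioned capacity)")
```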
Another way AWS is reducing its carbon footprint is by switching to a new type of renewable diesel for its backup generators. In the future, they’ll run on what the company describes as a “biodegradable and non-toxic fuel” that produces 90% fewer carbon emissions over its lifecycle than conventional diesel made from fossil fuels.
Finally, AWS said it’s making some changes to the way it builds its data centers, with plans to use lower-carbon steel and concrete. The steel will reportedly be produced in electric arc furnaces rather than gas-fired ones, resulting in lower carbon emissions from its production.
Meanwhile, the embodied carbon in its concrete mix has been reduced by up to 35% compared with the construction industry average. AWS will also reduce the overall amount of steel and concrete it uses, thanks to its new AI-optimized rack placement system.
AWS plans to implement these changes globally in every new data center it builds, with some also coming to its existing facilities.
“These data center capabilities represent an important step forward with increased energy efficiency and flexible support for emerging workloads,” said Prasad Kalyanaraman, head of AWS Infrastructure Services. “But what is even more exciting is that they are designed to be modular, so that we are able to retrofit our existing infrastructure for liquid cooling and energy efficiency to power generative AI applications and lower our carbon footprint.”
Photos: AWS