UPDATED 14:00 EDT / JUNE 20 2023

AI

HPE debuts GreenLake for LLMs, an advanced cloud for generative AI workloads

Hewlett Packard Enterprise Co. is expanding its flagship HPE GreenLake portfolio to provide access to large language models that power generative artificial intelligence.

Enterprises will now be able to access LLMs on-demand via a multitenant supercomputing cloud service to train, fine-tune and deploy AI models on a sustainable supercomputing platform, HPE said.

HPE GreenLake for Large Language Models, announced at HPE Discover in Las Vegas today, is delivered in partnership with the German AI startup Aleph Alpha GmbH, which provides users with ready-to-use LLMs for use cases that require text and image processing and analysis. The offering is said to be the first in a series of planned industry and domain-specific AI applications from HPE. Others will support use cases such as climate modeling, healthcare and life sciences, financial services, manufacturing and transportation.

HPE said GreenLake for LLMs runs on HPE Cray XD supercomputers, meaning customers won’t need to pay for or rent these resources themselves. HPE Cray XD provides an AI-native architecture that’s specifically designed to run single, large-scale AI training and simulation workloads at full compute capacity. According to HPE, it can support AI and high-performance computing workloads running on hundreds or even thousands of central processing units and graphics processing units at once, thereby providing the infrastructure required to train and create more accurate AI models.

Customers will be able to access a pretrained LLM called Luminous, which was developed by Aleph Alpha and enables customers to leverage their own data to train and fine-tune customized AI models. With this service, customers will be able to build various kinds of AI applications and integrate them into their own business workflows. They’ll also get access to HPE’s Machine Learning Development Environment and Machine Learning Data Management software, which provides the tools required to rapidly train AI models and integrate, track and audit the data they are trained on.

“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large-language model for critical businesses such as banks, hospitals and law firms to use as a digital assistant,” said Aleph Alpha Chief Executive Jonas Andrulis. “We are proud to be a launch partner for HPE GreenLake for LLMs. [We will] extend Luminous to the cloud and offer it as-a-service to end customers to fuel new applications for business and research initiatives.”

Constellation Research Inc. analyst Holger Mueller said HPE is entering an increasingly crowded field, with numerous infrastructure providers vying to power the new generation of generative AI workloads. But he said there’s plenty of room for HPE too, because the compute requirements of companies looking to build such models are enormous.

“HPE really has an ace up its sleeve with its Cray supercomputers, which have the perfect architecture to process LLM workloads,” Mueller said. “It’s also open to working with multiple AI partners, beginning with Aleph Alpha. It may appeal especially to data safety-conscious European customers, who can use HPE GreenLake for training in the cloud, and then do inference onsite, creating more workloads for the company’s ProLiant servers.”

Mueller’s Constellation colleague Andy Thurai added that HPE’s offering is distinct from those of the big cloud providers and may appeal to industries such as climate modeling, healthcare and life sciences, financial services, manufacturing and transportation. “Building domain-specific platforms in those specific industries can bring value which is hard to do with public clouds,” he said.

Also distinctive, he said, is HPE’s hybrid computing approach. “Training LLMs has been an issue in hybrid locations because of the compute power required and because of the ecosystem that needs to be built,” he said. “HPE supercompute as a service with built-in AI and an LLM training ecosystem is hoping to offset that.”

All that said, Thurai noted that HPE is going after mature AI workloads, not the newer, more innovative ones. As a result, he said, “the volume and total addressable market will be very limited to those customers. Public cloud providers will come up with a mechanism to keep the initial innovative cloud workloads from moving out of there.”

HPE said HPE GreenLake for LLMs will be available on-demand. It said it’s accepting orders now and expects the service to be up and running in North America by the end of the year, and in Europe in early 2024.

Image: rawpixel/freepik
