UPDATED 15:37 EST / JUNE 22 2023

Unleashing the power of AI: HPE launches GreenLake for Large Language Models

The convergence of supercomputing and cloud technology has opened up exciting possibilities in the world of artificial intelligence.

Hewlett Packard Enterprise Co. has taken a leap forward by entering the AI cloud market with its GreenLake for Large Language Models service. The move signals HPE’s commitment to providing advanced infrastructure that can handle the ever-growing demands of AI workloads, according to Justin Hotard (pictured), executive vice president and general manager of the HPC and AI business group at HPE.

“The bigger these models get, the more that you need supercomputing,” Hotard said. “If you just sort of step back and say, ‘Why do I run into this problem in the first place?’ … you need big data sets, and the larger those data sets get, the harder it is to fit them into one computer, the more likely you’re gonna need to parallelize them across a lot of computers. That turns out to be what we’ve been doing in supercomputing for years.”

Hotard spoke with theCUBE industry analysts Dave Vellante and Rob Strechay at HPE Discover, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed HPE’s supercomputing expertise, how that experience gives the company an edge and its efforts to provide a single-tenant large language model service that offers data protection and deployment flexibility. (* Disclosure below.)

The power of supercomputing in AI workloads

As AI models grow larger, the need for supercomputing becomes more apparent. Building accurate models, whether for large language models or scientific research, requires massive datasets. Supercomputers excel at handling these datasets by parallelizing the computation across many machines, allowing a single model to be trained efficiently and accurately. In contrast, traditional cloud environments are built to run many separate workloads on virtualized machines, an approach less suited to a single large-scale AI training job that must span many nodes.
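To make that contrast concrete, here is a minimal, illustrative sketch of data-parallel training in PyTorch, in which each process trains on its own shard of the dataset and gradients are averaged across processes every step. The model, dataset and hyperparameters are stand-ins chosen for illustration and do not reflect HPE’s actual stack.

```python
# Minimal data-parallel training sketch (PyTorch + NCCL), illustrating the
# "parallelize across a lot of computers" idea. Launch with, for example:
#   torchrun --nnodes=4 --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group("nccl")                  # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    device = torch.device(f"cuda:{local_rank}")

    # Toy stand-in for a large training corpus; each rank sees only its shard.
    data = TensorDataset(torch.randn(10_000, 512),
                         torch.randint(0, 10, (10_000,)))
    sampler = DistributedSampler(data)               # splits data across ranks
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    model = DDP(torch.nn.Linear(512, 10).to(device), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                     # reshuffle shards each epoch
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()                          # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Scaling this pattern from a handful of GPUs to thousands of nodes is where the interconnects, schedulers and other supercomputing machinery Hotard refers to come into play.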

HPE’s expertise in supercomputing, code optimization and hardware resilience gives it a distinct advantage over traditional cloud providers, according to Hotard. He sees the company’s entry into the AI cloud as complementary to what public cloud players are doing, offering an opportunity for partnership and extension.

“That creates new opportunity for startups to come in and build and create value. We’re already seeing a ton of that with consumer LLM applications and uses and workloads,” he said. “But I think we’re only scratching the surface, because there’s so much innovation in transforming the customer experience on a B2B side.”

The company’s ability to compile, optimize and distribute code reliably across many machines sets it apart. Additionally, its focus on handling hardware failures and keeping long-running jobs alive when those failures occur differentiates it from traditional cloud or virtualized environments.
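One simple building block behind that kind of resilience is periodic checkpointing, so a long training run can resume from its last saved state after a node fails rather than starting over. The following is a hedged sketch under assumed details (PyTorch, a shared filesystem path, a made-up save interval); it is not a description of HPE’s implementation.

```python
# Illustrative checkpoint/resume logic that lets a long training run survive a
# hardware failure by restarting from the last saved step. Paths, interval and
# framework are assumptions for illustration only.
import os
import torch

CKPT_PATH = "checkpoints/latest.pt"   # assumed to live on a shared filesystem
SAVE_EVERY = 500                      # hypothetical checkpoint interval (steps)

def save_checkpoint(model, optimizer, step):
    os.makedirs(os.path.dirname(CKPT_PATH), exist_ok=True)
    tmp = CKPT_PATH + ".tmp"
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "step": step}, tmp)
    os.replace(tmp, CKPT_PATH)        # atomic rename: a crash never leaves a half-written file

def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT_PATH):
        return 0                      # fresh run
    state = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"] + 1          # resume just after the last saved step

def train(model, optimizer, loader, loss_fn, total_steps):
    step = load_checkpoint(model, optimizer)
    batches = iter(loader)
    while step < total_steps:
        try:
            x, y = next(batches)
        except StopIteration:         # wrap around the dataset
            batches = iter(loader)
            x, y = next(batches)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        step += 1
        if step % SAVE_EVERY == 0:
            save_checkpoint(model, optimizer, step)
```

At supercomputing scale the same idea becomes considerably more involved, with sharded optimizer state and saves coordinated across thousands of ranks, which is the operational expertise being claimed here.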

HPE’s strategy of providing a unified platform through GreenLake enables organizations to seamlessly move data, train models and deploy solutions aligned with their specific business needs. Integration with HPE’s Ezmeral Unified Analytics Software, which builds on open-source data fabric technology, simplifies data movement, organization and persistence, a crucial capability for AI workloads that rely heavily on data, Hotard explained.

HPE aims to establish a model marketplace where customers and partners can bring their own models and tools, while the company also offers best-of-breed models. Its initial focus with GreenLake for LLMs is to meet demand for single-tenant, privately trained models that keep data protected. That addresses a currently unmet need in the market and gives enterprises a dedicated environment in which to train their models securely.

“There’s quite a bit that we bring beyond the stack I was just talking about. There’s tools that enable programming and optimization training of large models. And those are things that we think are complementary,” Hotard said. “Our users can get started training AI models or building out some of their simulations in the public cloud. Then we can bring them over to a much larger system to run a much bigger model … then we’ve got a core customer base that comes to us for technical expertise — what I would almost call a technical cloud. I think that’s the other thing we bring to this market.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of HPE Discover:

(* Disclosure: Hewlett Packard Enterprise Co. and Intel Corp. sponsored this segment of theCUBE. Neither HPE nor Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
