UPDATED 03:45 EDT / NOVEMBER 30 2023


HPE, Nvidia partner on AI-optimized platforms and services

Hewlett Packard Enterprise Inc. is tuning up its artificial intelligence portfolio with today’s announcement of a new set of hybrid cloud offerings for machine learning development, data analytics, AI-optimized file storage and fine-tuning of AI inferencing services.

The company said the services will be delivered on a platform that incorporates a combination of open-source software and infrastructure designed specifically for the data-hungry needs of AI model training.

“The speed with which enterprises can embrace and explore and experiment with how they can transform parts of the business operations is critical,” said Neil MacDonald, general manager of HPE’s Compute group. “We are delivering an out-of-the-box inferencing solution that enables customers to take pretrained models and deploy them into the environment and transform their operations, and to do it faster than they could before.”

Supercomputing edge

HPE said it will use its GreenLake platform to deliver a combination of AI-focused features that include a data-first pipeline, lifecycle management software, high-performance interconnects and support for an open ecosystem of third-party extensions. The company said the supercomputing expertise it gained from its 2019 acquisition of Cray Inc. gives it an edge in the market.

“You need supercomputing in your DNA with the ability to scale up to massive computing,” said Evan Sparks, HPE’s chief product officer for AI. “HPE has dealt with these problems in many other domains.”

The company is using its Discover Barcelona conference to announce an expanded collaboration with Nvidia Corp. to deliver a generative AI-specific computing platform built with Nvidia that will be optimized for training and tuning AI models using private data sets and custom software tools.

Based on HPE ProLiant DL380a hardware, the platform comes preconfigured with Nvidia L40S graphics processing units, BlueField-3 data processing units and Spectrum-X Ethernet networking, and is sized to fine-tune a 70-billion-parameter Llama 2 model with 16 servers and 64 GPUs. It will feature an enhanced HPE Machine Learning Development Environment with generative AI studio capabilities for prototyping and testing, along with a version of HPE’s Ezmeral software with GPU-aware capabilities.
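
As a rough back-of-the-envelope check on that sizing (the per-GPU memory and precision figures below are assumptions, not HPE specifications), the configuration works out to four GPUs per server and roughly 3 TB of aggregate GPU memory against an approximately 140 GB weight footprint for a 70-billion-parameter model:

```python
# Back-of-the-envelope sizing for the 16-server / 64-GPU configuration.
# Assumptions (not from HPE): 48 GB of memory per Nvidia L40S GPU and
# bf16 (2 bytes per parameter) model weights.

SERVERS = 16
TOTAL_GPUS = 64
GPU_MEMORY_GB = 48           # assumed L40S memory capacity
PARAMS = 70e9                # Llama 2 70B parameter count
BYTES_PER_PARAM = 2          # assumed bf16 precision

gpus_per_server = TOTAL_GPUS // SERVERS            # 4 GPUs per DL380a
aggregate_memory_gb = TOTAL_GPUS * GPU_MEMORY_GB   # 3,072 GB across the cluster
weights_gb = PARAMS * BYTES_PER_PARAM / 1e9        # ~140 GB for weights alone

print(f"GPUs per server: {gpus_per_server}")
print(f"Aggregate GPU memory: {aggregate_memory_gb:,.0f} GB")
print(f"Model weights alone: {weights_gb:,.0f} GB")
# Fine-tuning also needs gradients, optimizer state and activations,
# which is why the workload spans the full 64-GPU cluster.
```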

Also included is Nvidia’s AI Enterprise cloud data software stack for secure and manageable AI development and deployment and Nvidia’s NeMo cloud-native framework for model customization and deployment.

New types of applications

“There’s a new type of enterprise application that generative AI has enabled,” said Manuvir Das, vice president of enterprise computing at Nvidia. “It uses an AI embedding model to convert the data in a data warehouse into an embedding, which is a representation of what information means. Then you use a vector database to store these embeddings so that you can talk to your data, find all the information in your warehouse that best represents the answer and turn it into prompts to feed your large language model.”
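
Das’ description corresponds to the retrieval-augmented generation pattern. The following is a minimal sketch of that flow, with a stand-in embed() function and toy records in place of a real embedding model, vector database and data warehouse:

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; returns a unit-length vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# 1. Convert warehouse records into embeddings (the "vector database").
records = [
    "Q3 revenue grew 12% year over year.",
    "Support tickets spiked after the 2.4 firmware release.",
    "The Barcelona plant switched to renewable power in May.",
]
index = np.stack([embed(r) for r in records])

# 2. Embed the user's question and find the closest records.
question = "What happened after the firmware update?"
scores = index @ embed(question)                 # cosine similarity of unit vectors
top = [records[i] for i in np.argsort(scores)[::-1][:2]]

# 3. Turn the retrieved context into a prompt for the large language model.
prompt = "Answer using this context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"
print(prompt)
```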

AI-focused infrastructure will include GreenLake for File Storage, an all-flash unstructured data platform fine-tuned for model training and tuning. The platform offers twice the performance density and four times the throughput of existing GreenLake file storage, along with connectivity to the Nvidia Quantum-2 InfiniBand networking platform.

An HPE Machine Learning Development Environment is now available as a managed service for model training. The company said the service reduces operational complexity and staffing needs for model development and has generative AI-specific studio capabilities for prototyping and testing.

Enhancements to the HPE Ezmeral platform for software containers now support a hybrid data lakehouse optimized for GPUs and compatible with the NFS file system and object storage built on Amazon Web Services Inc.’s S3 API. Enhanced model training and tuning is provided by analytics software integrated with the Machine Learning Development Environment.
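
In practice, S3 compliance means a standard S3 client can talk to the lakehouse’s object store simply by pointing it at a different endpoint. The sketch below illustrates this with boto3; the endpoint, bucket name and credentials are placeholders rather than HPE values:

```python
# Reading from an S3-compliant object store with boto3 (placeholder values).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# List objects under a training-data prefix in a hypothetical bucket.
response = s3.list_objects_v2(Bucket="lakehouse-data", Prefix="training/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```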

“We are working to create a single-pane-of-glass experience to maximize and govern your data wherever it is located,” said Mohan Rajagopalan, general manager of HPE Ezmeral Software.

The enhanced offering is optimized for Nvidia GPU allocations across workloads and provides access to third-party integrations with open-source Whylogs for data logging and Voltron Data Inc.’s framework for GPU-accelerated queries.
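
For context on the data-logging piece, whylogs profiles datasets into lightweight statistical summaries rather than copying the data itself. A minimal sketch, assuming the whylogs 1.x Python API and pandas, looks roughly like this:

```python
# Minimal whylogs data-logging sketch (assumes whylogs >= 1.x and pandas).
import pandas as pd
import whylogs as why

df = pd.DataFrame({
    "latency_ms": [12.3, 15.1, 11.8, 240.0],
    "status": ["ok", "ok", "ok", "error"],
})

results = why.log(df)                 # profile the dataframe
summary = results.view().to_pandas()  # one row per column with summary statistics
print(summary.head())
```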

HPE also said its services organization will provide a broad range of consulting, training and deployment services supported by new AI and data-focused centers in Spain, the U.S., Bulgaria, India and Tunisia. Customers can place orders for the generative AI products and services beginning in the first quarter of next year.

Photo: Sundry Photography/Adobe Stock
