UPDATED 17:03 EST / NOVEMBER 04 2024


Rackspace offers GPUs as a cloud service with spot instance pricing

Rackspace Technology Inc. today announced an expansion of its spot instance service, Rackspace Spot, with a new location in San Jose, California, and launched a graphics processing unit-as-a-service option that gives customers access to Nvidia Corp. GPUs at auction-determined prices.

Rackspace Spot is a cloud infrastructure service that uses an open market auction model for cloud servers delivered as fully managed Kubernetes clusters. Users place bids for cloud server capacity, with resources allocated based on market-driven pricing.

Spot instances are a cost-effective cloud computing option that lets users tap unused computing capacity at significantly reduced prices. Though spot instances can cut costs by up to 90%, they carry the risk of interruption when the cloud provider reclaims capacity, so they're generally reserved for workloads that can tolerate that uncertainty.

The service is built on OpenStack, an open-source cloud computing platform that provides a framework for creating and managing public and private cloud environments. GPUaaS provides access to costly GPUs without substantial upfront hardware investments. Nvidia’s high-end H100 GPUs cost about $25,000 each when purchased from the supplier.

The addition of the Silicon Valley data center gives Rackspace seven global locations. West Coast customers will get lower latency and faster access, aided by the recently introduced second-generation onboarding process. Rackspace positions its cloud service as an open alternative to hyperscalers, with the combination of OpenStack, open application programming interfaces and Kubernetes permitting workload portability.

“If you’re using an OpenStack private cloud and our OpenStack cloud, the APIs are the same,” said Kevin Carter, a Rackspace product director. “It truly gives you a multicloud platform with one API specification.”
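The portability Carter describes comes from OpenStack's standard client configuration: the same tooling can target multiple clouds simply by defining them side by side in a clouds.yaml file. The sketch below is illustrative only — the endpoint URLs, cloud names and credentials are hypothetical placeholders, not actual Rackspace Spot values.

```yaml
# ~/.config/openstack/clouds.yaml (illustrative; all values are placeholders)
clouds:
  private-cloud:                # a hypothetical on-premises OpenStack deployment
    auth:
      auth_url: https://keystone.example.internal:5000/v3
      username: demo
      password: changeme
      project_name: demo-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
  rackspace-spot:               # a hypothetical entry for the hosted cloud
    auth:
      auth_url: https://identity.example.com/v3
      username: demo
      password: changeme
      project_name: demo-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: us-west-1
```

With both entries defined, the same OpenStack CLI or SDK calls work against either environment by switching the cloud name (for example, `openstack --os-cloud rackspace-spot server list`), which is the "one API specification" point Carter is making.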

The new GPU services encompass server classes built on Nvidia's top-of-the-line H100 and its midrange A30 GPUs.

The GPU A30 Virtual Server v2.++ Extra Large includes one A30 GPU, an Intel Corp. Xeon 6526Y CPU with 24 hyperthreaded cores, 128 gigabytes of memory, and multipath-enabled nonvolatile memory express encrypted storage with 25-gigabit networking. The GPU H100 Virtual Server v2.Mega Extra-Large consists of one Nvidia H100 GPU, an Intel Xeon 8568Y CPU with 48 hyperthreaded cores, 128 GB of memory and the same storage and networking capabilities.

Rackspace doesn't estimate how much customers can save, but spot instance prices typically run at least 50% below those of reserved instances. The lowest published price for reserved H100 instances is about $2 an hour.

The company isn’t offering Nvidia networking or software as a native service, a decision that was made to give customers choice, Carter said. “It’s all Kubernetes based, so if you want native functionality, you use the Nvidia GPU operator,” he said. “If they want their own functionality, we’re not preventing them from doing that.”
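The Nvidia GPU operator approach Carter mentions works through standard Kubernetes scheduling: once the operator is installed on a cluster, it advertises GPUs as a schedulable resource, and a pod requests one through its resource limits. A minimal sketch, assuming the operator is already installed and using an example CUDA image tag:

```yaml
# Illustrative pod spec; assumes the Nvidia GPU operator is installed
# on the cluster, which exposes the nvidia.com/gpu resource.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # example image tag
      command: ["nvidia-smi"]                      # prints visible GPUs, then exits
      resources:
        limits:
          nvidia.com/gpu: 1                        # request a single GPU
```

Because the request goes through the generic Kubernetes resource model, customers who prefer their own device plugin or tooling can substitute it without any Rackspace-specific changes, which is the flexibility Carter describes.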

