UPDATED 00:34 EST / NOVEMBER 27 2023

CLOUD

A re:Invent exclusive: AWS CEO Adam Selipsky to reveal a new generative AI stack

As the rush to generative artificial intelligence gains more momentum every day, Amazon Web Services Inc. Chief Executive Adam Selipsky thinks there’s one core capability needed to ensure it fulfills its promise: adaptability.

“With all of the different directions that generative AI, and AI, are going to go in, the key characteristic that people need to have is adaptability,” Selipsky said in an exclusive interview with SiliconANGLE before AWS re:Invent in Las Vegas. “AI will branch out in various directions, but when it does so simultaneously, being adaptive will be key to who wins.”  

Selipsky’s quote encapsulates the essence of what promises to be a dynamic re:Invent, the premier cloud computing event of the year. A mere year ago, at re:Invent 2022, generative AI was scarcely a topic of discussion: The conference came just days after OpenAI introduced its revolutionary ChatGPT chatbot, which has since taken the world by storm.

In just one year, gen AI has become central to the tech world’s evolution. AWS is now at a critical juncture, facing perhaps its most significant challenge and transformation in cloud computing since establishing itself as an industry leader.

In a comprehensive, far-reaching dialogue, the full text of which is available here, Selipsky and I discussed the multifaceted impact of generative AI, touching on its influence on business, the role of silicon chips, the evolution of the technology stack and the competitive landscape. The CEO also delved into how AWS is navigating the innovations generative AI has brought, and what they imply not just for AWS itself but also for its customers and competitors in the industry.

The elephant in the room: the OpenAI situation

The recent drama surrounding OpenAI and Microsoft Corp. has raised global concerns about AI’s risks, safety, security, performance, cost and even viability. My interview with Selipsky concluded just an hour before CEO Sam Altman’s departure from OpenAI, preventing me from directly asking about this development.

However, in a follow-up, Selipsky shared his thoughts on the situation (see full quote below) and the recent drama, emphasizing the importance of customer choice in AWS’ generative AI strategy, a principle established nearly a year ago. “It’s essential for companies to have access to a variety of models; no single model or provider should dominate,” he said. “The recent events have reinforced the soundness of AWS’ chosen approach.”

Selipsky said he believes that “reliable models and dependable providers are essential, as are cloud providers who offer choices and are committed to technology that supports these choices.” AWS, he contends, “has advocated and implemented these principles for over 17 years, and recent developments in generative AI have further highlighted their importance.”

When discussing the competition, Selipsky refrained from naming Microsoft directly but made it clear whom he was referring to. “It boggles my mind how other providers have released early versions of their AI offerings essentially without a security model,” he said.

For context, I’ve included Selipsky’s full comment on the OpenAI situation:

“We have been saying since we started laying out our generative AI strategy, almost a year ago, how important it is for customers to have choice. Customers need multiple models – there is no one model and no one model provider to rule them all. The events of the last few days have shown the validity and the usefulness behind the strategy that AWS has chosen. Customers need models they can depend on. Customers need model providers that are going to be dependable business partners. And customers need cloud providers who offer choice and offer the technology that enables choice. Those cloud providers need to want to offer choice as opposed to being driven to provide choice. The importance of these attributes, which AWS has been talking about and acting on for more than 17 years, has become abundantly clear to customers in relation to generative AI.”

Generative AI business growth

While generative AI dominates the conversations among enterprises and developers, cloud growth has slowed as customers continue to “rightsize” their cloud spend to adjust to economic pressures and prepare to invest in new generative AI infrastructure and applications.  

Selipsky notes that most cost optimization is complete, with increasing activity around new workloads such as generative AI. Industry observers are now analyzing the dynamics of cost optimization versus AWS’ handling of long-term contracts and their financial impact. To win the battle for cloud supremacy, AWS must keep growing rapidly, maintain a strong cloud AI product portfolio and increase its market share. As the market transitions from gen AI experimentation to large-scale workloads, significant growth is expected to return.

“Growth rates have stabilized, and we’re cautiously optimistic,” Selipsky explains. “Many customers have completed their cost optimization, and we’re hopeful for increased growth. Looking at the mid to long term, we feel very optimistic about the outlook for strong AWS growth. We’re still in the early days of the cloud.”

Building generative AI apps

Questions arise about how to build and deploy generative AI models and applications in a rapidly evolving AI landscape. AWS’ answer is a new foundational architecture for gen AI: a three-layer technology stack comprising an infrastructure layer, a foundation model service layer and an AI applications layer, designed to make it easy for customers to innovate at all three levels.
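To make the stack concrete, here’s a minimal sketch of how the top two layers compose: application code (top layer) calling a foundation model through the managed model service (middle layer). This is an illustrative sketch, not AWS’ reference code; it assumes the boto3 SDK, an account with Amazon Bedrock model access enabled, and the Claude v2 request schema Bedrock used at the time, with the region and prompt chosen arbitrarily.

```python
import json

import boto3

# Middle layer: Amazon Bedrock's runtime endpoint serves foundation models.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Top layer: application code frames a prompt and consumes the completion.
# This request body follows the Claude v2 schema; other models on Bedrock
# expect different payloads.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the benefits of model choice.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",  # illustrative model ID
    body=body,
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read())["completion"])
```

The bottom layer (compute, networking and custom silicon such as Trainium and Inferentia) never appears in the calling code, which is the point of the layered design.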

Dave Vellante, chief analyst at theCUBE Research, SiliconANGLE Media’s market research arm, says customers seek to build systems or platforms leveraging chips, data, foundational models and cloud services across all three of these layers to foster more productive and adaptable gen AI applications.

Selipsky points out the similarities between the AWS generative AI tech stack and the layers of other AWS cloud services. “Our unique generative AI stack offers customers advantages over alternative clouds,” he says. “Not all competitors have chosen to innovate at every layer, and one must wonder how long it will take them to catch up.”

It’s clear that AWS’ main strategy for winning AI cloud supremacy is grounded in both continuing to enhance its cloud infrastructure and building out a unique gen AI tech stack for the market. Selipsky believes the benefits of the new AWS AI stack will center on model choice, silicon cost and performance, and AI developers’ success in building, training and running foundation models for a new generation of AI applications.

Silicon for gen AI

AWS’ track record in silicon technology, demonstrated by the Nitro hypervisor and multiple generations of Graviton, Trainium and Inferentia chips, gives it a significant edge in the evolution of cloud and generative AI. Selipsky explained the tangible benefits of these innovations, noting the importance of balancing computational power with cost considerations. “Gen AI workloads are incredibly compute-intensive, so price performance is absolutely vital,” he said. This approach positions AWS as a major cloud provider in the generative AI market.

AWS can be expected to continue its tradition of big strides in chip technology with the Graviton series; Graviton 3 has now been in the market for over a year. Each generation of AWS’ custom chips has delivered industry-leading price performance and energy efficiency. According to Selipsky, it’s extremely important that AWS keep advancing general-purpose computing and innovating at the silicon layer across compute and storage.

Model choice with silicon differentiation

In the race for superior AI foundation models, AWS’ strategic partnership with Anthropic, a startup founded by OpenAI alumni, is a significant component of its foundation model service layer. The collaboration, backed by a substantial investment of $1.25 billion that could grow to $4 billion, contrasts with OpenAI’s partnership with Microsoft. AWS offers multiple models through its Amazon Bedrock foundation model service, supported by generations of infrastructure and silicon capability.
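That model choice is something a developer can query directly. As a hedged sketch (again assuming the boto3 SDK and a region where Bedrock is available), the Bedrock control-plane API can enumerate the foundation models an account may use:

```python
import boto3

# Bedrock's control-plane client; note it's distinct from "bedrock-runtime",
# which is the client that actually invokes models.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List every foundation model visible to the account, with its provider.
for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(f'{summary["providerName"]:>12}  {summary["modelId"]}')
```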

Selipsky highlighted the strategic importance of Anthropic’s role in AWS’ Bedrock service, noting the startup’s insights into making chips faster, more efficient and more cost-effective for demanding AI workloads. “Anthropic’s need for large amounts of compute capacity to train their models can help us enhance our Trainium and Inferentia developments,” he explained.

The CEO also hinted at exclusive customization features around AWS services, available only through Amazon Bedrock and Anthropic’s first-party offering. “There’s going to be important fine-tuning and customization features, which will only be available for limited periods of time on Amazon Bedrock and through Anthropic’s first-party offering,” explained Selipsky. “Not through any other channel.”

Discussing what AWS stands to learn from chip development with Anthropic, Selipsky said, “We’ll gain insights to make chips more performant, increase throughput, and enhance efficiency and performance.” This collaboration not only strengthens AWS’ position in generative AI but also sharpens its competition with the other cloud players.

With its silicon edge paired with foundation models, AWS hopes to gain insights into how models are built at scale and then offer new silicon services that only it can provide customers. Selipsky explained the depth of the Anthropic partnership: “They will be developing their future models on AWS, they’ve named AWS their primary cloud provider for mission-critical workloads,” he said. “And they’re going to be running the majority of their workloads on AWS.”

The GPU conundrum

Nvidia’s dominance in GPUs significantly influences the gen AI cloud computing market. However, Selipsky pointed out that AWS’ philosophy embraces both custom silicon and strong partnerships with companies such as Nvidia Corp. “Amazon loves the word ‘AND’,” he said. “We have a great and strong relationship with Nvidia, AND AWS is a leading host for GPU-based capacity for generative AI.”

He addresses the complexity of building effective gen AI infrastructure, saying it involves more than just GPUs: It’s about creating clusters that are highly performant, reliable, cost-effective and energy-efficient.

The discussion around chip technology in the AI sector also extends far beyond the chips themselves, encompassing a broader cloud ecosystem of infrastructure, data services and cloud services, all intricately connected to how chips function. Silicon is just one piece of a complex puzzle in managing varied and sophisticated AI workloads. The real challenge lies in integrating chips with critical infrastructure services such as networking, storage and cluster scalability, an integration that will only grow more vital as gen AI workloads become more diverse and complex.

Selipsky thinks customers recognize that it’s not just about having chips, but also about having highly performant services around the chips, such as networking inside the clusters. “We’ve seen customers go and investigate it [their own GPU clusters] and then come running back to us saying, you know, having chips is great, but it doesn’t actually work,” he said. 

Future of foundation models: openness and choice

In the quest for generative AI success, the true power lies not just in training models but more significantly in inference, the phase where real value and insights are extracted from data. This distinction between training and inference is a pivotal aspect of AI’s practical application. At the recent KubeCon conference, it was noted that inference is the new web app: Inference will be the key ingredient for leveraging data to power a new generation of web apps, which will in effect be AI applications.

“Data is the key thing,” Selipsky agreed. “First, you do need really great models. People are going to want models from multiple providers.”

That reality speaks to the industry’s debate over open models and the value of a diverse range of models catering to various needs and AI workload use cases. When asked whether there will be one model to rule them all, Selipsky swiftly dispelled the notion: “There’s not going to be one model to rule the world,” he said. “The idea that there’d be one company or one model is preposterous to me.”

Selipsky envisions a future in gen AI where the heterogeneity of models from multiple suppliers, in different sizes and capabilities, is key. “There will be multiple models,” he said. “There will be some very large, highly capable, general-purpose models. And there will also be smaller, specialized models.”

He points out that these smaller models have two major advantages: They can be fine-tuned for specific queries, and they offer better price-performance ratios. This approach reflects a nuanced understanding of the different needs of AI applications.

“There may be cases where the much larger model will deliver 5% better answers at twice the price,” Selipsky noted. “And for certain questions, you need better answers than those provided by the large model. For many situations, you’re more than happy to pay half and get an answer that is almost as good as the large models. It’s a tradeoff.”
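A quick back-of-the-envelope calculation using the illustrative numbers in Selipsky’s quote (all figures hypothetical) shows why the smaller model can win on value per dollar:

```python
# Selipsky's hypothetical: the large model costs twice as much and gives
# answers about 5% better; the small model is the baseline.
LARGE_COST, LARGE_QUALITY = 2.0, 1.05
SMALL_COST, SMALL_QUALITY = 1.0, 1.00

# Quality delivered per unit of cost.
print(f"Large model: {LARGE_QUALITY / LARGE_COST:.3f}")  # 0.525
print(f"Small model: {SMALL_QUALITY / SMALL_COST:.3f}")  # 1.000
```

Unless a question truly demands the extra 5%, the small model delivers nearly twice the quality per dollar, which is exactly the tradeoff Selipsky describes.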

The foundation model layer for gen AI will create more opportunities for value creation when models are chosen for the right task and paired with the right infrastructure to support them. As a result, there is growing interest and rapid experimentation in developing generative AI platforms and applications, yet the transition to actual production workloads remains in its early stages. According to Selipsky, this scenario will evolve rapidly as more companies master the art of constructing, deploying and operating gen AI infrastructure and development environments that effectively use their data.

Selipsky emphasized again the importance of adaptability in generative AI: “With AI branching out in various directions, being adaptive will be key to who wins when these developments occur simultaneously,” he said. “The benefits of gen AI come when things are easy to use and simple. It’s this kind of experimentation that is only possible if it’s simple to move back and forth between models. That’s why the choice we’re offering at the middle of the stack is so important.”

Deploying and applying gen AI

Although the current focus is on gen AI model training, a significant shift toward inference is expected, emphasizing the ability to interpret and utilize data effectively. “You need training and you need inference,” Selipsky said. “Today, there’s a lot of focus on training because it’s early and we’re building models to deploy generative AI applications. Once this happens, the emphasis and resource allocation in gen AI will shift more toward inference.” 

Enterprises are increasingly recognizing that their data is a key competitive advantage. As gen AI transforms the application layer and user interface, the approach to data, viewed as intellectual property and a competitive edge, needs to be rethought.

Establishing an effective data strategy and a robust data platform is essential, especially for those aiming to excel in gen AI. The focus is now shifting toward data management, which is set to become as crucial as model management and model security, altering traditional methods along the way. Changes in the technology stack will have significant implications, particularly for data, which in turn affects applications.

“Great models will be built, and they are going to be important,” Selipsky said. “Both you and your competitors will have equal access to these models. However, your data will be what sets you apart. Our customers are rapidly advancing in generative AI because they already have a strong data strategy implemented on AWS. The more the generative AI understands or has data about how your company operates, including your code bases and libraries, the more powerful it’s going to become for developers.”

An effective data strategy also requires a thorough understanding of the available data, ensuring it’s harmonized and usable across various applications. Key challenges include the need for multiple database engines rather than just one, and a range of analytics services, each with extensive capabilities.

Governance is crucial as well, as it ensures companywide awareness of available data and manages permissions appropriately. This means that different individuals have varying access permissions to different data sets, and these permissions must be consistently upheld across all services used.
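To illustrate what “consistently upheld” means in practice, here is a minimal, hypothetical sketch (every name is illustrative, not an AWS API): one central policy store that every service consults, so a user’s permissions on a data set are identical no matter which service touches it.

```python
# Central policy table: (user, dataset) -> allowed actions.
# In practice this would live in a governance service, not an in-memory dict.
PERMISSIONS = {
    ("alice", "sales_notes"): {"read", "write"},
    ("bob", "sales_notes"): {"read"},
}

def can_access(user: str, dataset: str, action: str) -> bool:
    """Every service runs the same check, so enforcement stays consistent."""
    return action in PERMISSIONS.get((user, dataset), set())

# An analytics service and a model fine-tuning service reach the same verdict.
assert can_access("bob", "sales_notes", "read")
assert not can_access("bob", "sales_notes", "write")
```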

The bottom line

A vision for the future of generative AI and cloud computing emerged from my interview with Selipsky, highlighting AWS’ pivotal role in this rapidly evolving field. Selipsky’s insights reflect a world where adaptability and innovation are key, particularly in the realms of AI model training, inference and data utilization. 

The industry, navigating rapid evolution, acknowledges the crucial role of adaptability and innovation in generative AI, qualities AWS is demonstrating with cutting-edge technologies such as its Graviton chips and specialty chips such as Trainium and Inferentia. AWS is making a big bet on a unique three-layer generative AI technology stack that offers a diverse array of AI models and platforms, strategic partnerships, best-in-class price-performance technology and a commitment to choice.

Can AWS continue to deliver value to customers, partners and developers facing the diverse challenges and opportunities of a rapidly advancing tech world? Selipsky thinks it can and will. “We’re very, very bullish on the business long-term,” he said.

Photo: AWS
