Shaping the AI landscape: AWS prioritizes security in GenAI adoption
With generative artificial intelligence taking the world by storm through ChatGPT, finding ways to harness and enhance this cutting-edge technology has become critical.
In line with Amazon Web Services Inc.’s objective of giving enterprises more choice to meet their needs and preferences, GenAI is a central part of the company’s product strategy, according to Matt Garman (pictured), senior vice president of sales and marketing at AWS.
“The one thing that I will say is consistent across almost every single customer that wants to use generative AI is that they want to make sure that they do it in a secure, safe environment, where they know that their IP is safe, where they can have explainability,” Garman said. “That’s where our focus is … how can we give enterprises that assurance that they have the highest performing infrastructure but also the best and most secure platform in order to go build that generative AI so that they know that their data and their IP doesn’t leak out to places where they don’t control it.”
Garman spoke with theCUBE industry analyst John Furrier at the Supercloud 3: Security, AI and the Supercloud event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how AWS is accelerating the GenAI narrative.
AI is not new to AWS
Machine learning and AI are cutting-edge technologies with the potential to revamp businesses and the world at large. As a result, AWS has placed significant emphasis on this sector, according to Garman.
“At Amazon and in AWS, we’ve been super focused on AI and ML and have … been working on this space and have known that this is, has been and will continue to transform how companies do business,” he noted. “We think the approach that we’re taking in AWS is ultimately how most customers are going to want to consume and build generative AI into the applications that they run.”
Enterprises want to consume AI in different ways: some want to consume it at the package layer, while others want to consume it at the infrastructure layer. AWS’s product strategy is built to offer options at each of those levels, according to Garman.
“If you think about SageMaker, it’s the development platform of choice of almost every single ML developer out there to do things like make sure that you’re doing safe AI, make sure that you’re testing various different models to see what actually works well with your application,” he noted. “Bedrock provides a really easy-to-use API so the customers can combine those.”
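To give a sense of how that “easy-to-use API” looks in practice, here is a minimal Python (boto3) sketch of calling a hosted foundation model through the Bedrock runtime. The region, model ID and prompt are assumptions for illustration only, not details from the interview:

```python
import json
import boto3

# Bedrock runtime client; the region is an assumed placeholder.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body for an Amazon Titan text model (model ID assumed for illustration).
body = json.dumps({
    "inputText": "Summarize our internal product FAQ in three bullet points.",
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # hypothetical choice of model
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; read and decode the JSON payload.
print(json.loads(response["body"].read()))
```

The same invoke_model call works across the models Bedrock exposes; only the model ID and request body format change, which is one way the API keeps model choice a configuration detail rather than a rewrite.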
Since choice is important in the enterprise world, Amazon SageMaker plays a fundamental role in delivering it. Businesses will also start building on top of foundation models, expanding their scope with models of their own, Garman pointed out.
“Our goal is to give customers both the choice to be able to run what’s best for their application, because the model that’s optimized for a financial services customer may not be the one that’s optimized for genomics data,” he said. “Stability AI is a great model for images right now, but not for text. We want customers to be able to pick and choose … the best model that they want to use for the best use case, and that’s part of where SageMaker plays a big role.”
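The pick-and-choose point can be made concrete with a short boto3 sketch that browses the Bedrock model catalog by output modality, so a text model can be selected for one workload and an image model for another. The region and the TEXT filter are assumptions for illustration:

```python
import boto3

# Bedrock control-plane client; region is an assumed placeholder.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List available foundation models, filtered to text-generating models.
# Swapping the filter to "IMAGE" would surface image models instead.
models = bedrock.list_foundation_models(byOutputModality="TEXT")

for summary in models["modelSummaries"]:
    print(summary["modelId"], "-", summary["providerName"])
```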
The importance of secure data in GenAI
By using open-source models, AWS ensures that the data used to build them remains compliant. Controls also help keep GenAI models secure by containing the risk of data breaches, according to Garman.
“We ensure that data doesn’t leak back into the core foundational model and stays inside of the customer’s VPC,” he explained. “Many of the controls that they use for the rest of their enterprise data work just the same for their generative AI capabilities, and we think when we’ve talked to a lot of customers, they’ve come to trust AWS and our security models. They trust their data inside of AWS.”
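To make the VPC point concrete, here is a hedged sketch of one such control: pinning a SageMaker training job inside a customer’s own VPC with network isolation enabled, so training data and any fine-tuned weights stay within the customer’s network boundary. Every name, ARN, image URI and S3 path below is a hypothetical placeholder, and this shows one possible control rather than the specific setup Garman described:

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# All identifiers below are hypothetical placeholders.
sagemaker.create_training_job(
    TrainingJobName="genai-finetune-demo",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-training:latest",
        "TrainingInputMode": "File",
    },
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/model-output/"},
    ResourceConfig={
        "InstanceType": "ml.g5.2xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 100,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
    # Keep training traffic inside the customer's own VPC ...
    VpcConfig={
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "Subnets": ["subnet-0123456789abcdef0"],
    },
    # ... and block outbound network access from the training container.
    EnableNetworkIsolation=True,
)
```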
ChatGPT has been instrumental in showcasing the power of AI. Nevertheless, enterprises are not throwing caution to the wind: for confidentiality reasons, they are careful not to put their intellectual property into GenAI models, Garman pointed out.
“They’re putting the brakes in so that they have the right controls and security in place so that their own IP doesn’t leak into those models, and I think that’s appropriate,” he added.
GenAI is a powerful technology that has the potential to make businesses more effective and efficient. As a result, independent software vendors are eyeing this field for scalability purposes and the ability to test new capabilities, according to Garman.
“I think if you look at large ISVs — like Adobe just launched last week, new generative AI capabilities inside of their Creative Cloud — really cool stuff that these larger established ISVs are doing and rolling out innovative new technologies and capabilities all based on generative AI,” he said.
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Supercloud 3: Security, AI and the Supercloud event:
Photo: SiliconANGLE