How AWS aims to democratize machine learning with cloud services
Amazon Web Services Inc. has made a big bet on artificial intelligence and machine learning, and just how big a bet is likely to become apparent Tuesday when its AI chief presents his keynote at the cloud giant’s re:Invent virtual conference that continues this week.
Swami Sivasubramanian, vice president of Amazon AI, will hold the first-ever re:Invent keynote on the topic, a clear sign that AWS views AI and machine learning as an area ripe for reinvention. AWS Chief Executive Andy Jassy (pictured) told me that the company’s overall aim is to enable machine learning to be embedded into most applications before the decade is out by making it accessible to more than just experts.
“People hire products and services to do a job,” he said. “They don’t really care what you do under the covers.”
In this third of a four-part series, Jassy provided some hints of what Sivasubramanian will cover in his keynote, as well as the broader picture of how AWS aims to make AI and machine learning a central part of its cloud offerings and how it's trying to make them easier to use for mere mortals. The interview is lightly edited for clarity.
Look for more strategic and competitive insights from Jassy in my summary and analysis of the interview, as well as in the first part and the second part, and there’s one more installment coming next week, the final week of re:Invent. And check out full re:Invent coverage by SiliconANGLE, its market research sister company Wikibon and its livestreaming studio theCUBE, now in its eighth year covering re:Invent.
Infusing applications with machine learning
Q: Clearly there’s going to be a lot of machine learning AI in the keynotes, and AWS AI chief Swami Sivasubramanian has his own keynote Dec. 8. What’s the opportunity for AWS in machine learning?
A: Last year the machine learning section of my keynote was 75 minutes. So we thought that maybe it’s time for us to break out machine learning. Swami will do a dedicated machine learning keynote where he’ll have a lot of the machine learning goodies in there. We’re both amazed by the pace with which customers are adopting machine learning in AWS.
If you believe like we do that the majority of applications will be infused with machine learning in five to 10 years, we’re still in the very early days. The way we prioritize what we’re working on breaks out into a few customer asks. One is just help our expert machine learning practitioners more easily build what they need. People are comfortable building the models and training them and tuning them and deploying them. And they want increasing performance across every machine learning framework that matters.
Q: Which frameworks matter the most?
A: If you remember, a couple of years ago I mentioned in our keynote that while TensorFlow was the framework that seemed to resonate with most people at that point, the one constant in machine learning we were seeing was change. And if you fast-forward to today and look at usage (and, maybe even more telling as a leading indicator, the publication of papers built on the different machine learning frameworks), PyTorch is used at least as much as TensorFlow, and 90% of the people who do machine learning use at least two frameworks and 60% use more than two.
It's still very early in the evolution of these frameworks. We have dedicated teams who do nothing but work on each of the frameworks that matter to customers, to optimize performance so that it's better running on AWS than anywhere else.
And you'll see some of those numbers in Swami's keynote. So the first priority is making it easier for expert machine learning practitioners. That's about the frameworks. That's about the chips, like Inferentia, as an example, to help you do inference more cost-effectively and quickly. But there just aren't that many expert machine learning practitioners, so machine learning never gets used extensively in most enterprises if you don't make it easier for everyday developers and data scientists.
Q: That’s where SageMaker comes in, then?
A: That's why we built SageMaker. We think of that as the middle layer of the stack. And SageMaker has totally changed the game in the ease with which developers can build, train, tune and deploy machine learning models at scale. We've got tens of thousands of customers who are standardizing on top of SageMaker. Last year you saw us launch SageMaker Studio, which was the first integrated development environment for machine learning. And it just made so many things much easier.
The SageMaker team for the second year in a row launched over 50 features in the last 12 months. So it’s like one a week, but you’ll see at re:Invent a whole host of other capabilities that make it even easier to do some of the hardest things that you have to do in machine learning, and be able to do it right in SageMaker.
Democratizing machine learning
Q: What else do people want AWS to do in machine learning?
A: People ask us a lot … “I don’t want to have to build any models. I just want to send you data, run it through models that you train in AWS… then send back the answers and the predictions through an API.” We have all those top-layer services of the stack: object recognition, video recognition and text to speech, speech to text, translation, OCR, search personalization, forecasting.
And when you're talking about services like translation or transcription, the ramifications of being wrong are low enough that when you have good results, people just kind of default to using those as their main input. But when you have services like facial recognition, or forecasting, or code quality, where the ramifications of being wrong are high, usually people are using them as one of many inputs in making a decision.
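The pattern Jassy describes for high-stakes services can be sketched in a few lines. This is a hedged illustration with hypothetical names (`should_flag_for_review` and its parameters are not an AWS API): the machine learning score is treated as one input among several, never the sole decision-maker.

```python
def should_flag_for_review(ml_confidence: float,
                           human_match: bool,
                           corroborating_signals: int) -> bool:
    """Combine an ML score with other evidence instead of trusting it alone.

    ml_confidence         -- score from a recognition/forecasting service (0..1)
    human_match           -- whether a human reviewer agreed
    corroborating_signals -- count of independent non-ML supporting signals
    """
    # When the cost of being wrong is high, a low model score alone rules out action...
    if ml_confidence < 0.5:
        return False
    # ...and a high score still requires at least one non-ML corroborating input.
    return human_match or corroborating_signals >= 2
```

For low-stakes services like transcription, by contrast, the model output would simply be used directly.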
The last thing customers increasingly say to us is, "I think it's awesome that you have more selection and capability in all three layers of the stack, but we just want the job done. We don't care if you use machine learning. I don't want to think about machine learning." As Clay Christensen, author of "The Innovator's Dilemma," said, people hire products and services to do a job. They don't really care what you do under the covers; they're hiring a product to do a job.
There are just a lot of these capabilities. You can see them all over the place, with Connect, with things like Contact Lens … and you can see people don't actually want to hire machine learning [experts]. They want an easier way to get automatic call analytics on all of their calls … they're actually just trying to hire us for the job of doing call analytics. And what we're finding is that increasingly we're using machine learning as the source to get those jobs done, but without people necessarily knowing that's what we're doing behind the scenes; they're just seeing the job getting done.
Q: So now they care less about the feature or speeds and feeds, and care more about getting the job done? Is there anything specifically that you can share that you see that jumps out at you from a services standpoint?
A: A lot of it has to do with who wants to consume the services and at what layer. AWS has always had a giant number of developers and builders who want all the building blocks, who want to do a lot of stitching together however they can imagine, and they want that control, they want that flexibility. And especially as more and more people get into software development as a profession, that group is going to be large and keep growing for a long time.
Then you have what I would consider some of the builders who are willing to sacrifice some of that flexibility and some of that control in exchange for getting 80% of the way there faster. And they want different abstractions. Lake Formation is a good example of that when you're building a data lake: Instead of having to assemble all the controls and pieces yourself, Lake Formation allows you to build a data lake with all of these preset controls and capabilities that let you get there much faster.
Case study: Machine learning reinvents the call center
Q: What use case would you point to where people just want AWS to handle it all under the covers?
A: Let's say they're buying a call center product. They don't actually care that the IVR [interactive voice response] is being done via machine learning, or that chatbots are being done via machine learning, or that their call analytics under the covers are stored in S3. And then we're actually indexing and tagging everything, and then we're doing transcription and natural language processing that allow them to search on a term and say, "Give me all the calls where there was negative sentiment."
They don't care about any of that stuff. All they care about is, "I want to be able to analyze my calls by basically typing in, 'Give me all the calls where there were long silences,' or 'Give me all the calls where people are raising their voice.' And frankly, I'd like it in real time. If that's done with very sophisticated machine learning or real-time transcripts, that's great. But all I really care about is that I get real-time transcriptions of those calls."
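Once calls have been transcribed, tagged and indexed as Jassy describes, the queries he quotes reduce to simple filters over call metadata. This is a minimal, hedged sketch with a hypothetical record shape (it is not the Connect or Contact Lens API), just to make the "job to be done" concrete:

```python
# Hypothetical call records as they might look after transcription and tagging.
calls = [
    {"id": "c1", "sentiment": "negative", "longest_silence_s": 42, "raised_voice": False},
    {"id": "c2", "sentiment": "positive", "longest_silence_s": 3,  "raised_voice": False},
    {"id": "c3", "sentiment": "negative", "longest_silence_s": 8,  "raised_voice": True},
]

def calls_with_long_silences(records, threshold_s=30):
    """'Give me all the calls where there were long silences.'"""
    return [c["id"] for c in records if c["longest_silence_s"] > threshold_s]

def calls_with_negative_sentiment(records):
    """'Give me all the calls where there was negative sentiment.'"""
    return [c["id"] for c in records if c["sentiment"] == "negative"]

print(calls_with_long_silences(calls))       # the calls exceeding 30s of silence
print(calls_with_negative_sentiment(calls))  # the negatively tagged calls
```

The point of the sketch is that the customer only ever sees this query layer; the machine learning that produced the sentiment labels and silence measurements stays under the covers.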
Q: That’s going to turbocharge the vertical applications. We see it happening in every vertical from entertainment, all the way across oil and gas, financial. But you mentioned Connect, AWS’ cloud call center service, multiple times. Is there more to Connect beyond this call center thing?
A: Well, I think the call center piece is pretty big. I mean, virtually every company needs a call center. Traditionally people think of a call center as people making calls and people answering the calls, and you obviously have to make that core functionality work, but there are so many things you can do to change what these call centers are like.
It's calls, it's chat, it's email and SMS, it's Slack channels; it's a lot of different mediums and channels, and people want it to be easy to set up, easy to scale up and down, and to pay only for the interactions between agents and customers. And then they want a lot of functionality that makes it much easier to get that work done. Most contact centers still don't have much visibility into the overall sentiment of their customers on how they're doing. And think about agents: if you have to contact somebody, a lot of times you ask a question and the agents don't have the data in front of them to answer it. And you're interacting with that company on multiple dimensions and one hand doesn't talk to the other. There are a lot of capabilities you still need to provide to contact centers.
I think you could imagine a world moving forward where all your contact centers, the agents work from home, they’re using Connect, they’re using Workspaces for their virtual desktop, they’re using all kinds of capabilities that allow them to optimize their time much better, and to get a lot more done for companies, and you can start to think about agents doing more for your business than just answering calls.