UPDATED 01:46 EST / APRIL 22 2017

CLOUD

Amazon blazes a trail to the next frontier in AI: the cloud

Amazon.com Inc. sometimes doesn’t show up on lists of top leaders in artificial intelligence alongside Google Inc., Microsoft Corp., Facebook Inc. and IBM Corp. That’s about to change.

Amazon Chief Executive Jeff Bezos recently revealed in his annual letter to shareholders that he views machine learning, the branch of AI that teaches computers to learn without being explicitly programmed, as key to the future of his company.

In particular, like other AI leaders today, Amazon is focused on deep learning neural networks that aim to emulate in primitive fashion the way the brain learns. Deep learning has led to big advances in speech and image recognition in the last few years, enabling everything from Amazon’s Alexa voice assistant to Google’s self-driving cars.

As Bezos noted, some of Amazon’s work is obvious, such as Alexa, its Prime Air delivery drones and the Amazon Go stores that use machine learning to ditch checkout lines. Other machine learning work is behind the scenes, powering demand forecasting, product recommendations and more, and that’s where Bezos expects it to have the most impact.

‘Watch this space’

The next step is using the Amazon Web Services cloud to spread machine learning to the developer masses by lowering the cost and friction of using it. Amazon last fall started making its machine learning work accessible to developers through new cloud services: Lex, the guts of Alexa, for building conversational interfaces such as bots; Polly for turning text into speech; and Rekognition for image analysis and related tasks.

“Customers are already developing powerful systems ranging everywhere from early disease detection to increasing crop yields,” Bezos said. “Watch this space. Much more to come.”
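
For developers, those services are ordinary web APIs. As a rough illustration of what calling them looks like, here is a minimal sketch using the AWS SDK for Python (boto3); the region, bucket, file and voice names are placeholders rather than details from the article.

import boto3

# Rekognition: label the contents of an image stored in S3
# (the bucket and object names are placeholders).
rekognition = boto3.client("rekognition", region_name="us-east-1")
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "storefront.jpg"}},
    MaxLabels=10,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Polly: turn a line of text into speech and save the audio.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(
    Text="Your package is out for delivery.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("notification.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())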

The Amazon founder isn’t just chasing the latest hot trend. Machine learning services likely will be critical to helping Amazon fend off rivals in the intensifying cloud computing wars that have the likes of Google and Microsoft looking to gain ground on the Seattle online retail giant. Indeed, it’s clear Amazon wants to become a prime supplier of technologies for the coming era of intelligent applications.

“Amazon’s next pillar is likely to be AI,” as important as its Prime free-shipping service and AWS itself, CB Insights said in a new report. “More than ever before, Amazon has aspirations to become a platform company.”

By the reckoning of analysts such as Gartner, Amazon still has a way to go to catch up to Microsoft and Google in cloud machine learning offerings. At the AWS Summit for developers this past week in San Francisco, the company announced new updates and features intended to start remedying that situation.

To delve deeper into Amazon’s machine learning plans, SiliconANGLE spoke to Swami Sivasubramanian, vice president of Amazon AI at Amazon Web Services, at the developer conference. This is an edited version of the conversation:

Q: Tell me about the range of work Amazon is doing in machine learning.

A: There are three layers. The top layer is applications like Lex, Polly and Rekognition: pretrained deep learning models exposed as application programming interfaces, catering to application developers who do not want to know anything about deep learning but want to build intelligent applications that can hear, speak or see.

The next layer is API platform services like Amazon Machine Learning, along with pieces like EMR [Elastic MapReduce, for analyzing massive amounts of data], catering to those who want to build their own machine learning models on top of the data sitting in Redshift [AWS’s data warehouse] or relational databases. The third layer, which my team works on, is deep learning frameworks and machine learning algorithms.

A bunch of scientists on my team are working on core deep learning frameworks. At AWS we are very open about supporting all the deep learning frameworks, from Apache MXNet to TensorFlow to Caffe to Theano and more.
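
To give a sense of what that framework layer looks like, here is a toy model definition using Apache MXNet’s symbolic API from that era; the layer sizes are arbitrary and chosen purely for illustration.

import mxnet as mx

# A toy feed-forward classifier in MXNet's symbolic API
# (layer sizes are arbitrary, for illustration only).
data = mx.sym.Variable("data")
fc1 = mx.sym.FullyConnected(data=data, num_hidden=128, name="fc1")
act1 = mx.sym.Activation(data=fc1, act_type="relu", name="relu1")
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10, name="fc2")
net = mx.sym.SoftmaxOutput(data=fc2, name="softmax")

# The same symbol can then be bound to a CPU or GPU context and trained
# through the Module API, e.g. mx.mod.Module(symbol=net, context=mx.cpu()).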

Swami Sivasubramanian, head of Amazon AI at AWS (Source: LinkedIn)


Q: Broadly speaking, what are you trying to accomplish here?

A: Our goal is to basically democratize artificial intelligence, to make AI accessible to every developer. To a large extent, even today, building artificial intelligence in many cases requires a Ph.D. in machine learning to do a really good job.

We want to enable building new kinds of intelligent applications that can actually do things that humans have been able to do, like being able to see or hear or speak or understand. And we enable businesses and enterprises to make intelligent decisions on top of the data that they have stored in AWS.

Q: Where can we see all that in action?

A: Netflix has built a recommendation engine using deep learning to show customers what they should look at. Pinterest has done that for image recognition. We use machine learning within Amazon for fulfillment and logistics, so when you click an order to buy something, a robot uses computer vision and deep learning to know what to pick and send it over. We also use it for enhancing existing products, for instance X-Ray, which is a cool Amazon Instant Video feature that uses computer vision and deep learning so when you freeze a frame, it tells you who all the actors are in the frame.

We are also using it to create new lines of products. Everybody now knows Alexa. My two-year-old talks to Alexa like it’s a real person in the house. And with Amazon Go, the technology powers the checkout-less experience: we can actually see who is walking over to pick something up or put it back.

Q: Amazon has been more visible lately talking about AI, but Google, Microsoft, Facebook and others seem to get more attention. Are you trying to change that?

A: At Amazon, we tend to be a lot more focused on what matters to customers. … With Amazon Go, for instance, we say, “This is a checkout-less retail experience that helps customers shop faster.” We don’t say, “Hey look at this, this is an awesome deep learning thing, and by the way it can be useful.” Same thing about Alexa. I appreciate it as a scientist myself, but I like it more because my family enjoys talking to Alexa.

That said, Amazon has been investing heavily in machine learning and AI for many years, and we have been very public in the scientific community, making our contributions and being pretty open about them. We have had multiple submissions from Amazon this year, research papers and so forth. In MXNet, we have made 35 percent of the contributions in terms of code commits.

Q: What changed to make the deep learning algorithms, which have been around 20 years or more, work so much better today?

A: Three things. First, we now have the ability to store all this data cheaply, without having to pay a huge amount of money to storage vendors. Second, access to specialized computing: GPUs [graphics processing units] and FPGAs [field-programmable gate arrays] have unlocked and accelerated these applications. The final aspect is that once these models are built, we are making it easier, with preconfigured templates, to run a distributed training infrastructure scaling to hundreds of GPUs with a single click. The simplicity with which you can now program has changed drastically thanks to the cloud.
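
As a rough sketch of that last point: in a framework like MXNet, spreading training across the GPUs on a single machine is little more than listing device contexts, and the hundreds-of-GPUs case layers AWS’s preconfigured templates on top of the same idea. The data and model below are synthetic and trivial, just to keep the example self-contained.

import mxnet as mx
import numpy as np

# Synthetic data and a trivial model, only to make the sketch runnable.
x = np.random.rand(1000, 100).astype("float32")
y = np.random.randint(0, 10, size=(1000,))
train_iter = mx.io.NDArrayIter(x, y, batch_size=50)

data = mx.sym.Variable("data")
fc = mx.sym.FullyConnected(data=data, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=fc, name="softmax")

# Data-parallel training: listing GPU contexts is essentially the only
# change needed to go from one device to several on the same machine.
contexts = [mx.gpu(i) for i in range(4)]  # use [mx.cpu()] on a machine without GPUs
mod = mx.mod.Module(symbol=net, context=contexts)
mod.fit(train_iter,
        optimizer="sgd",
        optimizer_params={"learning_rate": 0.1},
        num_epoch=2)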

 

What’s next

Q: To what extent is Amazon’s work focused on applying existing technologies as opposed to coming up with new algorithms or techniques?

A: We do fundamental innovation research in many of these areas — speech recognition, natural language understanding, visual understanding. If you wind the clock back a decade … we had to sort of push the boundary in deep learning technologies to get the accuracy we wanted to actually put it in the hands of customers. Like with Alexa, as popular as it is, we had to invent new kinds of algorithms on top of these to get the customer experience we want. Or with Amazon Go, we had to significantly improve the state of the art in deep learning and computer vision.

We also do fundamental research in the core engine here, like deep learning frameworks. We have a team that works on the deep learning engine, working hard to continue to scale the system. Our customers have petabytes of data they want to process — images, video and so forth. Scalability will be one of the key differentiators for years to come as the amount of data you need to process continues to increase.

Q: Can your machine learning models also work at the edge of cloud networks, such as for self-driving cars that can’t afford to wait to call back to the central cloud?

A: We believe that the models that are built for the cloud can also be run at the edge. The deep learning models we built can run in a traditional computer environment, in EC2 [AWS’s Elastic Compute Cloud service] or inside Lambda [AWS’s ad-hoc computing service]. Greengrass [software that allows offline operation and local processing of data on the fly without requiring cloud services] is a great environment to run them in devices at the edge. My team ported an MXNet deep learning model that recognizes objects on a table so they could run it on a Raspberry Pi camera [with a tiny, cheap computer inside].

The ambition is that there will be a hybrid where some of the deep learning models run at the edge for quick use cases and some in the cloud for more complex use cases. This is the way Alexa does it. That’s why we think this hybrid model of deployment will be the more interesting one in the future.
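
As a rough sketch of that kind of edge deployment, loading a trained MXNet model and running a single inference on a device looks roughly like this; the checkpoint name and input shape are placeholders, and a real setup would feed frames from the Pi’s camera instead of random data.

import mxnet as mx
import numpy as np

# Load a previously trained and saved MXNet model
# ("object-detector" is a placeholder checkpoint prefix).
sym, arg_params, aux_params = mx.model.load_checkpoint("object-detector", 0)

mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(data_shapes=[("data", (1, 3, 224, 224))], for_training=False)
mod.set_params(arg_params, aux_params)

# One camera frame, resized to the network's input shape;
# random data stands in for a real image here.
frame = mx.nd.array(np.random.rand(1, 3, 224, 224).astype("float32"))
mod.forward(mx.io.DataBatch([frame]), is_train=False)
probabilities = mod.get_outputs()[0].asnumpy()
print("most likely class:", probabilities.argmax())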

Q: What’s next in machine learning?

A: My daughter is two years old, and around age one she recognized what a tomato is after seeing two tomatoes. She didn’t need a thousand tomatoes to be displayed. That’s exactly why I think deep learning is in its infancy. There are actually techniques that exist today where you can improve the accuracy of a deep learning model with very limited data.

We have been experimenting a lot on these things. And sometimes people don’t need absolute accuracy. Even things like visual search, people are willing to live with less accuracy as long as they’re able to get better coverage.

So there is a lot more to come. If it is Day One at Amazon, here in machine learning we just woke up and haven’t even had a cup of coffee.
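
Sivasubramanian doesn’t name the limited-data techniques he has in mind, but one widely used approach is transfer learning: reuse a network pretrained on a large dataset and fine-tune it, with a fresh output layer, on the handful of new examples. A minimal MXNet sketch, in which the checkpoint name, layer name and class count are illustrative assumptions rather than anything from the interview:

import mxnet as mx

# Transfer learning sketch: keep a pretrained network's features and
# attach a fresh output layer to retrain on a small labeled dataset.
# "resnet-50" and the "flatten0" layer name follow MXNet model zoo
# conventions; they are assumptions here, not details from the article.
sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-50", 0)

features = sym.get_internals()["flatten0_output"]
new_fc = mx.sym.FullyConnected(data=features, num_hidden=5, name="fc_new")  # 5 new classes
new_net = mx.sym.SoftmaxOutput(data=new_fc, name="softmax")

mod = mx.mod.Module(symbol=new_net, context=mx.cpu())
# With a small iterator of labeled examples (small_train_iter), training
# starts from the pretrained weights; allow_missing lets the new layer
# initialize from scratch:
# mod.fit(small_train_iter, arg_params=arg_params, aux_params=aux_params,
#         allow_missing=True, num_epoch=10)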

 

Photo: AdinaVoicu/Pixabay
