Accessible AI: With SageMaker, Amazon aims to bring machine learning to more app developers
If you think artificial intelligence and machine learning are beyond the reach of anyone but the most highly trained specialists, think again. Amazon Web Services Inc. is on a mission to make advanced AI solutions accessible to any developer.
“This idea of taking technology that is traditionally only within reach of a very, very small number of well-funded organizations and making it as broadly distributed as possible,” said Matt Wood (pictured), general manager of deep learning and artificial intelligence at AWS. “We’ve done that pretty successfully with compute, storage, databases, analytics and data warehousing, and we want to do the exact same thing for machine learning.”
Wood spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the AWS Summit in NYC. They discussed new services introduced for AWS’ SageMaker machine learning service, as well as AWS’ goal of democratizing machine learning. (* Disclosure below.)
Helping more users
AWS divides machine learning users into three levels. The first level, which Wood identified as academics, researchers and data scientists, uses open-source programming libraries to build neural networks and artificial intelligence. These are the experts who work with machine learning and AI at the highest levels of complexity.
The next level includes developers and data scientists who want to apply specific aspects of machine learning to build custom models from their cloud-based data. According to Wood, this is where AWS SageMaker comes into its own, helping developers build, train and deploy machine learning models as quickly and easily as possible. “We try and remove as much of the undifferentiated heavy lifting associated with [building custom models] as possible,” he said.
The third level is application developers who want to build intelligent apps. This group doesn’t “want to get into the weeds; they just want to get up and running really, really quickly,” he added.
Four new AI services
Helping users at each of these levels is AWS’ goal, and the company recently announced four new AI services aimed at helping data scientists and developers manage their increasingly complex workloads, according to Wood.
First is the addition of high-throughput batch jobs to SageMaker. Users were previously limited to real-time processing within SageMaker, but there are many cases where customers want to “predict hundreds or thousands or even millions of things all at once,” said Wood, who gave the example of processing sales information at the end of the month.
“You want to … make a forecast for the next month. You don’t need to do that in real time; you need to do it once and then place the order,” he said.
With the new batch transform function in SageMaker, simple APIs allow the client to “pull in all of that data, large amounts of data, batch process it within a fully automated environment, and then spin down the infrastructure and you’re done,” Wood stated. That both makes accessing machine learning easier and decreases costs, he explained.
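For readers who want a sense of what that workflow looks like in practice, here is a minimal sketch of a batch transform job submitted through the boto3 SageMaker client. The job name, model name, bucket paths and instance type are placeholders for illustration, not details from the announcement.

```python
import boto3

sm = boto3.client("sagemaker")

# Launch a batch transform job against a model already registered in SageMaker.
# SageMaker spins up the instances, runs inference over every record under the
# input prefix, writes the predictions to S3 and then tears the fleet down.
sm.create_transform_job(
    TransformJobName="monthly-sales-forecast",        # hypothetical job name
    ModelName="sales-forecast-model",                 # hypothetical model name
    BatchStrategy="MultiRecord",                      # pack many records into each request
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/sales/monthly/",  # placeholder input path
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",                          # treat each CSV line as one record
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/forecasts/"},
    TransformResources={"InstanceType": "ml.m4.xlarge", "InstanceCount": 1},
)
```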
The second new service is the addition of Pipe Input Mode for the built-in TensorFlow containers. Fast-flowing data, and lots of it, is one of the keys to building successful machine learning applications, and with this new service, SageMaker users are no longer constrained by disk space or memory.
“You can just pump terabyte after terabyte after terabyte … so you’ll see between a 10- and 25-percent decrease in training time,” he said. “So that means you can train more models, or you can train more models in the same unit time, or you can just decrease the cost.”
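Pipe mode is chosen when a training job is created. A minimal sketch using boto3, with a placeholder container image, role and S3 paths, might look like this; it simply asks SageMaker to stream the training data from S3 rather than copying it to local disk first.

```python
import boto3

sm = boto3.client("sagemaker")

# Start a training job that streams data from S3 in Pipe mode instead of
# downloading it to local storage, so dataset size is not bounded by the
# instance's disk or memory.
sm.create_training_job(
    TrainingJobName="tf-pipe-mode-example",                  # hypothetical job name
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    AlgorithmSpecification={
        "TrainingImage": "<tensorflow-training-image-uri>",  # placeholder container image
        "TrainingInputMode": "Pipe",                         # stream data instead of copying it
    },
    InputDataConfig=[
        {
            "ChannelName": "training",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/training-data/",  # placeholder dataset
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/model-artifacts/"},
    ResourceConfig={"InstanceType": "ml.p3.2xlarge", "InstanceCount": 1, "VolumeSizeInGB": 50},
    StoppingCondition={"MaxRuntimeInSeconds": 86400},
)
```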
Third is the addition of new languages to the Amazon Translate service: traditional Chinese, Czech, Italian, Japanese, Russian and Turkish. Fourth is a new Amazon Transcribe feature known as channel synthesis. Useful for contact centers, where customer service calls are often recorded on a single track, channel synthesis can recognize the voices and split the recorded audio into two channels, one per speaker. It can also automatically transcribe the conversation, analyze timestamps and create a single script.
“You can check the topics automatically using [AWS] Comprehend, or you can check the compliance: Did the agents say the words that they have to say for compliance reasons at some point during the conversation?” Wood said.
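A rough sketch of that pipeline, assuming the feature maps to the ChannelIdentification setting in the Transcribe API and using Comprehend key-phrase detection as a stand-in for the topic and compliance checks Wood describes, could look like the following. All names and paths are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

# Transcribe a recorded support call, asking the service to keep the agent's
# and the caller's speech apart by audio channel.
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-1234",             # hypothetical job name
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-1234.wav"},  # placeholder recording
    Settings={"ChannelIdentification": True},             # label each channel's speech separately
)

# Once the job completes and the transcript JSON has been fetched from the
# TranscriptFileUri it reports, the text can be run through Comprehend,
# for example to pull out the phrases an agent was required to say.
transcript_text = "Thank you for calling. This call may be recorded for quality purposes."  # placeholder
phrases = comprehend.detect_key_phrases(Text=transcript_text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])
```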
Looking to the future, Wood envisions machine learning becoming one of Amazon’s core strengths. “In the fullness of time, we see that the usage of machine learning could be as big if not bigger than the whole of the rest of AWS combined,” he said.
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the AWS Summit in NYC. (* Disclosure: Amazon Web Services Inc. sponsored this segment of theCUBE. Neither AWS nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE