UPDATED 20:09 EST / NOVEMBER 27 2023


AWS enhances AI services with foundation model capabilities for improved performance

Amazon Web Services Inc. today announced a range of new artificial intelligence service capabilities that are enhanced using foundation models.

Announced at the annual re:Invent 2023 conference, the enhanced capabilities include Amazon Transcribe, which now offers FM-powered language support and AI-enhanced call analytics; Amazon Personalize, which now uses FMs to generate more compelling content; and Amazon Lex, which now uses large language models to provide accurate and conversational responses.

The new FM-enhanced Amazon Transcribe, Amazon’s automatic speech recognition service, delivers what AWS says are accuracy improvements of between 20% and 50% across most languages. The new ASR system now supports more than 100 languages and provides differentiating features across all of them related to ease of use, customization, user safety and privacy.

Example features include automatic punctuation, custom vocabularies, automatic language identification, speaker diarization (the process of identifying and separating different speakers in an audio recording), word-level confidence scores and custom vocabulary filters. AWS claims that the broad language support and value-added feature set empower enterprises to unlock rich insights from their audio content and make their audio and video content more accessible and discoverable across various domains.
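As a rough illustration, here is a minimal sketch of how features such as automatic language identification and speaker diarization can be exercised through the Transcribe API via boto3; the job name, S3 URIs and bucket below are placeholders, not values from the announcement.

```python
# Hypothetical sketch: start a transcription job with automatic language
# identification and speaker diarization. All names and URIs are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="example-call-transcript",          # placeholder job name
    Media={"MediaFileUri": "s3://example-bucket/call.wav"},   # placeholder S3 URI
    IdentifyLanguage=True,          # automatic language identification
    Settings={
        "ShowSpeakerLabels": True,  # speaker diarization
        "MaxSpeakerLabels": 2,
    },
    OutputBucketName="example-output-bucket",                 # placeholder bucket
)

# Poll the job; the completed transcript JSON includes speaker labels and
# word-level confidence scores.
job = transcribe.get_transcription_job(TranscriptionJobName="example-call-transcript")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```

The same Settings block is also where custom vocabularies and vocabulary filters would be attached in a fuller configuration.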

Amazon Personalize, Amazon’s machine learning service designed to help developers build personalized recommendations for their customers, now offers hyper-personalization with FMs through a feature called Content Generator.

The new feature uses natural language to create simple and engaging text that describes the thematic connections between recommended items. According to AWS, this enables companies to automatically generate engaging titles or email subject lines to invite customers to click on videos or purchase items.

AWS also now offers a Personalize integration with the open-source LangChain framework to allow customers to build their own FM-based applications. With the integration, users can invoke Amazon Personalize, retrieve recommendations for a campaign or recommender, and feed them into their FM-powered applications within the LangChain ecosystem.
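As a rough sketch of that pattern, the snippet below retrieves recommendations with the Personalize runtime API via boto3 and formats them into a LangChain prompt that an FM could turn into an engaging subject line; the campaign ARN, user ID and prompt wording are illustrative placeholders rather than the official integration code.

```python
# Hypothetical sketch: fetch Personalize recommendations and build a LangChain
# prompt from them. Campaign ARN and user ID are placeholders.
import boto3
from langchain_core.prompts import PromptTemplate

personalize_runtime = boto3.client("personalize-runtime", region_name="us-east-1")

def get_recommended_items(campaign_arn: str, user_id: str, num_results: int = 5) -> list[str]:
    """Return the item IDs recommended for a user by a Personalize campaign."""
    response = personalize_runtime.get_recommendations(
        campaignArn=campaign_arn,
        userId=user_id,
        numResults=num_results,
    )
    return [item["itemId"] for item in response["itemList"]]

# Ask an FM to write an engaging subject line for the recommended items.
prompt = PromptTemplate.from_template(
    "Write a short, engaging email subject line inviting the user to check out "
    "these recommended items: {items}"
)

items = get_recommended_items(
    campaign_arn="arn:aws:personalize:us-east-1:123456789012:campaign/example",  # placeholder
    user_id="user-42",  # placeholder
)
print(prompt.format(items=", ".join(items)))
```

The formatted prompt can then be handed to any LangChain-compatible foundation model, which is the workflow the integration described above is meant to streamline.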

Finally, Amazon Lex, Amazon’s fully managed AI service for building conversational interfaces into any application using voice and text, is also getting FM-powered capabilities to build bots faster and improve containment.

Amazon Lex now offers Conversational FAQ, or CFAQ, a new capability that answers frequently asked customer questions intelligently and at scale. Powered by FMs from Amazon Bedrock and approved knowledge sources, CFAQ is said to enable companies to provide accurate, automated responses to common customer inquiries in a natural and engaging way.

CFAQ simplifies bot development by eliminating the need to create intents, sample utterances, slots and prompts manually to handle a wide range of frequently asked questions. It achieves that with a new intent type called QnAIntent that securely connects to knowledge sources such as Amazon Bedrock knowledge bases, Amazon OpenSearch Service and Amazon Kendra to retrieve the most relevant information to answer a question.
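The sketch below shows roughly what defining such an intent could look like with the Lex V2 model-building API via boto3; the bot identifiers, knowledge base ARN and model ARN are placeholders, and the exact shape of the qnAIntentConfiguration fields is an assumption that should be checked against the current API reference.

```python
# Hypothetical sketch: attach a QnAIntent to an existing Lex V2 bot locale and
# point it at a Bedrock knowledge base. All IDs and ARNs are placeholders.
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")

lex_models.create_intent(
    intentName="CustomerFAQ",
    parentIntentSignature="AMAZON.QnAIntent",   # built-in QnA intent type
    botId="EXAMPLEBOTID",                       # placeholder bot ID
    botVersion="DRAFT",
    localeId="en_US",
    # Assumed configuration shape: a data source (here, a Bedrock knowledge base)
    # plus the Bedrock foundation model used to generate the answer.
    qnAIntentConfiguration={
        "dataSourceConfiguration": {
            "bedrockKnowledgeStoreConfiguration": {
                "bedrockKnowledgeBaseArn": (
                    "arn:aws:bedrock:us-east-1:123456789012:knowledge-base/EXAMPLE"
                ),
            },
        },
        "bedrockModelConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)
```

At runtime, questions routed to this intent are answered directly from the connected knowledge source, without hand-built utterances, slots or prompts.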

Image: AWS
