UPDATED 14:33 EST / NOVEMBER 28 2023


New customization and management tools rolled out for AWS’ Bedrock AI service

Amazon Web Services Inc. today introduced new tools that will make it easier to customize the large language models in its public cloud and integrate them into applications.

The tools debuted at the Amazon.com Inc. unit’s AWS re:Invent 2023 conference in Las Vegas. During the event, the cloud giant also introduced new cloud instances that enterprises can use to train and run artificial intelligence models. Meanwhile, a new AI assistant called Amazon Q will help users more quickly perform tasks such as writing code and summarizing lengthy documents.

Customized AI

AWS provides a service called Amazon Bedrock that offers access to a set of managed foundation models. There’s the Amazon Titan series of AWS-developed large language models, or LLMs, as well as neural networks from other companies and the open-source ecosystem. AWS today debuted two new features, fine-tuning and continued pretraining, that will enable customers to customize the LLMs available in Bedrock for specific tasks.

Customizing a neural network involves training it on data not included in its existing knowledge base. For example, an e-commerce company that plans to use a language model to answer customers’ product questions could train the model on product documentation. This customization process can significantly improve the accuracy of an LLM’s answers.

The first new customization feature AWS is rolling out, fine-tuning, allows developers to train supported Bedrock models on labeled datasets. Such datasets contain sample inputs, most commonly prompts, along with prewritten AI answers to those prompts. The records are organized in what is essentially a question-and-answer format so that the AI model being trained can learn by example.
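The documented layout for such a labeled dataset is JSON Lines, with each record pairing a prompt with its expected completion. The sketch below builds a tiny dataset in that format; the field names follow AWS' documentation, but the example content itself is invented.

```python
import json

# Two hypothetical labeled records in the prompt/completion JSONL format
# that Bedrock fine-tuning jobs read from an S3 object. The questions and
# answers here are made up for illustration.
records = [
    {"prompt": "What is the warranty period for the X100 router?",
     "completion": "The X100 router carries a two-year limited warranty."},
    {"prompt": "Does the X100 support IPv6?",
     "completion": "Yes, the X100 supports dual-stack IPv4/IPv6 operation."},
]

# Serialize as JSON Lines: one labeled example per line.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Each line is a self-contained question-and-answer pair, which is what lets the model learn by example during fine-tuning.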

The other customization feature AWS introduced this morning, continued pretraining, focuses on a different set of use cases. It allows companies to customize Bedrock LLMs on particularly large datasets, such as code bases comprising billions of tokens. A token is a unit of data that corresponds to a few characters or numbers. Moreover, the feature makes it possible to refresh the training dataset regularly with new information.

AWS will enable customers to carry out continued pretraining using unlabeled datasets. Such datasets contain sample inputs, but don’t necessarily have to include examples of what outputs an AI model should generate in response. Removing the need to create output examples reduces the amount of effort involved in creating training datasets, which lowers AI customization costs.
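An unlabeled dataset for continued pretraining is again JSON Lines, but each record carries only raw input text, with no expected output attached. The sketch below shows that shape; the field name follows AWS' documented format, while the document snippets are invented.

```python
import json

# Hypothetical unlabeled records in the {"input": ...} JSONL layout used
# for continued pre-training: raw text only, no prewritten completions.
documents = [
    "def connect(host, port): establish a TCP session to the broker",
    "The settlement service batches transactions every 15 minutes.",
]

jsonl = "\n".join(json.dumps({"input": text}) for text in documents)
print(jsonl)
```

The absence of any completion field is precisely what makes the dataset "unlabeled" and cheaper to assemble than fine-tuning data.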

“You can specify up to 100,000 training data records and usually see positive effects after providing at least 1 billion tokens,” Antje Barth, AWS’ principal developer advocate for generative AI, detailed in a blog post.

On launch, the continued pretraining feature is available in public preview for AWS’ Amazon Titan Text LLMs. The fine-tuning capability, meanwhile, works with not only Titan models but also the open-source Llama 2 and Cohere Command Light models.

Cloud-based AI agents 

AI applications must often carry out tasks that each comprise multiple steps. A customer support chatbot, for example, might be expected to ingest product inquiries, generate a summary of each inquiry and then forward the summary to the relevant business unit. AWS offers a tool called Agents for Amazon Bedrock to ease the task of building AI applications that can perform multistep tasks.

The tool made its original debut in July as a preview feature of Bedrock. At re:Invent today, AWS moved Agents for Amazon Bedrock into general availability and added several enhancements. 

In AI development, an agent is a program that can take a multistep task as input, break it down into individual actions and assign each action to an AI model. The agent generates prompts that instruct the AI model it’s using how to carry out the task. Under the hood, agents are themselves powered by machine learning: developers set them up by providing a natural language summary of what actions will be performed and how.
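The loop described above — take a multistep task, break it into actions, and prompt a model for each action — can be sketched in a few lines. Everything here is invented for illustration: the stub model, the step names, and the routing. The real service plans and executes steps with an LLM rather than a fixed list.

```python
# Toy illustration of the agent pattern: each action in a multistep task
# is turned into a prompt and dispatched to a model call.

def stub_model(prompt: str) -> str:
    # Stand-in for an LLM invocation; returns a canned acknowledgement.
    return f"done: {prompt}"

def run_agent(task: str, actions: list[str]) -> list[str]:
    results = []
    for action in actions:
        # The agent generates a prompt instructing the model how to carry
        # out this individual step of the larger task.
        prompt = f"Task: {task}. Current step: {action}."
        results.append(stub_model(prompt))
    return results

outputs = run_agent(
    "handle a product inquiry",
    ["ingest inquiry", "summarize inquiry", "forward summary"],
)
print(outputs)
```

Monitoring in the new release corresponds to inspecting each of these per-step results, and modifying substeps corresponds to changing the prompts the loop generates.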

Agents for Amazon Bedrock eases the process of creating AI agents. According to AWS, the new release of the tool that launched today enables developers to monitor how an agent goes about performing each phase of a multistep task. When necessary, developers can modify how substeps are performed to improve the quality of the output. 

When further customization is needed, a software team may update an agent’s so-called orchestration templates. An orchestration template is an AI prompt that informs an agent what tasks it should perform and how. According to AWS, developers can now customize task explanations as well as other details such as the way AI output should be presented.

“Agents perform best when you allow them to focus on a specific task,” Barth explained. “The clearer the objective (instructions) and the more focused the available set of actions (APIs), the easier it will be for the FM to reason and identify the right steps.”

AI guardrails 

Developers using Bedrock LLMs, customized versions of those models and AI agents now have access to a new feature called Guardrails for Amazon Bedrock. Currently in preview, it’s designed to prevent AI applications from ingesting sensitive data or generating harmful output. 

The feature allows developers to define a set of topics that an AI application should avoid. A bank, for example, could configure its website’s customer support chatbot not to give investment advice. The filter strength can be adjusted through a drag-and-drop interface.

The second purpose of Guardrails for Amazon Bedrock is to protect sensitive data such as personally identifiable information, or PII. According to AWS, the feature allows AI applications to block users from entering prompts that contain PII. It’s likewise possible to redact sensitive data from AI-generated output. 
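To make the redaction idea concrete, the sketch below masks two common PII types in a string. This is not how the managed Guardrails feature is implemented; it is a minimal regex-based stand-in, with invented patterns and placeholder labels, that only demonstrates the concept of redacting sensitive data from AI-generated output.

```python
import re

# Illustrative PII redactor: replaces matched sensitive values with a
# bracketed label. Patterns here are simplified examples, not the rules
# the actual Guardrails service applies.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
```

The same masking logic can run in either direction: on user prompts before they reach the model, or on model output before it reaches the user.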

Image: AWS
