UPDATED 12:30 EST / DECEMBER 03 2024


Amazon Bedrock gets better safeguards and ability to orchestrate multiple AI agents

Cloud computing giant Amazon Web Services Inc. is looking to cement Amazon Bedrock’s status as one of the most popular platforms for artificial intelligence developers, enhancing its capabilities in a number of new ways.

At AWS re:Invent today, the company announced multiple updates to the platform that it says will help to prevent AI applications from “hallucinating” and generating false answers. Developers will also be able to orchestrate groups of so-called “AI agents” to perform more complex tasks than before, and create much smaller, task-specific AI models that can almost match the capabilities of powerful large language models, at lower costs.

Safer, more reliable AI models

The company said it’s introducing “Automated Reasoning” checks in preview as a comprehensive new safeguard for AI applications built with Amazon Bedrock, in an effort to combat the increasing prevalence of hallucinations. Inaccurate responses and other problems, such as bias, simply cannot be tolerated in an age when AI is being given more responsibility than ever before, fielding customer queries and completing work-related tasks so that employees can focus on higher-level work.

But AI hallucinations remain a big problem even now, causing a serious lack of trust among consumers and enterprises alike.

AWS thinks it may finally be able to resolve this problem with Automated Reasoning, a form of AI that relies on mathematical, logic-based verification to prove that a model’s responses are correct. The company says it excels when dealing with complex problems that require precise answers, paving the way for AI to be adopted in more situations where reliability is of paramount importance.
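To make the idea concrete, here’s a minimal, purely illustrative sketch of the kind of logic-based check automated reasoning performs, using the open-source Z3 solver rather than anything AWS ships: a policy rule is encoded as a logical constraint, and a claimed answer is accepted only if the solver can prove it can never contradict that rule. The rule and variable names are invented for the example.

```python
# Illustrative only: a toy automated-reasoning-style check with the Z3 SMT
# solver (pip install z3-solver). The policy rule and claim are made up; this
# is not AWS's implementation, just the general "prove it with math" idea.
from z3 import Solver, Int, Bool, Implies, Not, unsat

age = Int("customer_age")
covered = Bool("physiotherapy_covered")

# Policy rule: members aged 18 or over are covered for physiotherapy.
policy = Implies(age >= 18, covered)

# Claim made by the chatbot: "a 42-year-old member is covered."
claim = Implies(age == 42, covered)

# The claim is provably correct if (policy AND NOT claim) is unsatisfiable,
# i.e. there is no scenario where the policy holds but the claim fails.
solver = Solver()
solver.add(policy, Not(claim))

if solver.check() == unsat:
    print("Claim follows from the policy - safe to return to the customer.")
else:
    print("Claim is not guaranteed by the policy - flag or block the response.")
```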

These automated checks are the secret sauce within the new and improved guardrails for Amazon Bedrock. With Bedrock’s guardrails, developers can exert more control over their AI models and restrict them to topics relevant to their purpose. Now, the guardrails are gaining the ability to validate factual responses for accuracy and show exactly how a model arrived at a particular response. In addition, the models will be able to produce auditable outputs for full transparency, ensuring that everything they say is in line with the customer’s rules and policies.
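For developers already using Bedrock Guardrails, the runtime flow looks roughly like the sketch below, which screens a model’s draft answer with the existing ApplyGuardrail API via boto3 before it reaches the user. The guardrail ID, version and sample text are placeholders, and the exact way the new Automated Reasoning checks surface in the response payload isn’t shown here, since that configuration is still in preview.

```python
# Rough sketch: screening a model's draft answer with an existing Bedrock
# guardrail before returning it to the user. The guardrail ID/version and the
# sample text are placeholders; how Automated Reasoning check results appear
# in the response is not covered here, as the feature is in preview.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

draft_answer = "Your policy covers physiotherapy after a 30-day waiting period."

response = runtime.apply_guardrail(
    guardrailIdentifier="my-guardrail-id",   # placeholder
    guardrailVersion="1",                    # placeholder
    source="OUTPUT",                         # we are checking model output
    content=[{"text": {"text": draft_answer}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or rewrote the answer; return its safe output instead.
    safe_text = "".join(o.get("text", "") for o in response.get("outputs", []))
    print(safe_text)
else:
    print(draft_answer)
```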

AWS reckons the new guardrails will be helpful in all sorts of scenarios. For instance, a healthcare insurance organization will be able to check that its customer service bots always respond accurately to customer queries, giving correct answers to any questions about their insurance policies.

One company already doing this is PricewaterhouseCoopers International Ltd., the professional consultancy firm, which uses Automated Reasoning checks to ensure its various AI assistants and agents are always providing accurate responses.

More useful AI agents

Besides making AI safer, Amazon Bedrock is also making AI more capable with the introduction of “multi-agent collaboration” tools that allow developers to orchestrate dozens of AI agents at once, so they can work together toward a shared outcome.

AI agents are AI applications that are programmed to perform complex tasks on behalf of users. For instance, a customer service chatbot that can process a refund is an AI agent, and so is an AI assistant that can perform data entry tasks when told to do so.

AWS wants to make agents more useful, and it thinks the best way to do that is by making them work together. It supports AI agent development through the Amazon Bedrock Agents module, which is now getting specialized tools that let agents share context and dynamically route tasks to one another.

The company said multi-agent collaboration in Amazon Bedrock is in preview now and makes it possible to assign different specialized agents to the specific steps involved in a more complex task or project. For instance, a financial services firm looking to carry out due diligence might use one agent to analyze global economic indicators, another to assess industry trends, and a third to review its historic financial records.

With the multi-agent tools, developers will be able to create what AWS calls a “supervisor agent” that coordinates these specialized agents on much larger projects, routing each step to the most appropriate one. Each agent will be restricted to accessing only the information it needs to complete its assigned task, the company said. The supervisor will also work out which tasks can be processed in parallel and which must wait for others to finish, coordinating everything so the work is done in the correct order.
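The orchestration pattern is easier to see in code. The sketch below is a framework-agnostic illustration of the supervisor idea, not the Bedrock Agents API: each specialist is a plain function, a dependency map records which steps can run in parallel, and the supervisor fans out independent steps before assembling the result. All names and outputs are invented for the example.

```python
# Conceptual illustration of supervisor-style multi-agent orchestration.
# This is NOT the Amazon Bedrock Agents API; the agents, task names and
# dependency graph are invented to show the routing/parallelism pattern.
from concurrent.futures import ThreadPoolExecutor

def economic_agent(context):          # specialist 1: global economic indicators
    return "economic outlook: stable"

def industry_agent(context):          # specialist 2: industry trends
    return "industry trend: consolidating"

def financials_agent(context):        # specialist 3: historic financial records
    return f"financial review given [{context['economic']}, {context['industry']}]"

# Which specialist handles each step, and which steps it depends on.
TASKS = {
    "economic":   (economic_agent,   []),
    "industry":   (industry_agent,   []),
    "financials": (financials_agent, ["economic", "industry"]),
}

def supervisor(tasks):
    """Run steps whose dependencies are met in parallel, then the rest."""
    results = {}
    remaining = dict(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n, (_, deps) in remaining.items()
                     if all(d in results for d in deps)]
            futures = {n: pool.submit(remaining[n][0], results) for n in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                del remaining[name]
    return results

print(supervisor(TASKS))
```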

The credit rating agency Moody’s Corp. has already been exploring the potential of this, AWS said, using a series of coordinated AI agents to improve its risk analysis workflows, assigning one agent to analyze macroeconomic trends and another to evaluate company-specific risks.

Faster and more efficient AI

The third major new capability being added to Bedrock today is model distillation, which makes it possible to transfer specific knowledge from powerful LLMs to much smaller, more energy-efficient models focused on a single task. The idea is that because the smaller model concentrates exclusively on that task, it can match or even surpass the performance of the LLM while using a fraction of the energy.

It’s an intriguing idea, because AI models require serious amounts of computing power, which makes them extremely expensive to run. LLMs are, in some cases, too powerful for their own good: their extensive knowledge base can actually hinder performance, since it takes longer for them to respond to some types of queries.

Model distillation changes that. Now in preview, it’s a technique for transferring knowledge from a large LLM to a small language model, or SLM. To support the process, Bedrock also provides tools for manipulating the underlying training data, along with capabilities for fine-tuning and adjusting model weights to optimize performance.
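At its core, distillation means using the large “teacher” model’s answers as training data for a small “student” model. The sketch below shows the data-generation half of that loop against Bedrock’s runtime API: it collects teacher responses for a handful of sample prompts and writes them out as a JSONL fine-tuning set. The model ID, prompts and output schema are assumptions for illustration; Bedrock’s managed distillation feature automates this pipeline end to end, and its job-submission API isn’t shown here.

```python
# Sketch of the first half of a distillation loop: harvesting "teacher" LLM
# responses to build a training set for a smaller "student" model. The model
# ID, prompts and JSONL schema are assumptions for illustration; Bedrock's
# managed Model Distillation feature handles this (and the fine-tuning) for you.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
TEACHER_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # example teacher

sample_prompts = [
    "Summarize the termination clause in two sentences.",
    "Does this clause allow assignment without consent?",
]

with open("distillation_train.jsonl", "w") as f:
    for prompt in sample_prompts:
        # Ask the teacher model via the Converse API.
        reply = runtime.converse(
            modelId=TEACHER_MODEL_ID,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        answer = reply["output"]["message"]["content"][0]["text"]
        # Record prompt/completion pairs the student model will be tuned on.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```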

AWS reckons that it has made it possible to distill any LLM into an SLM that’s up to 500% faster and 75% cheaper to run. Provided with the right sample prompts, the resulting SLM will be almost as capable as the LLM, with the average performance impact rated at just a 2% accuracy loss.

Robin AI Ltd., the creator of a copilot for writing and reviewing legal contracts, said it has used model distillation to create AI assistants that can respond to questions about millions of contractual clauses. It does this at a small fraction of the cost of Robin’s original LLM, responding much faster and without making any mistakes.
