UPDATED 09:00 EST / APRIL 25 2023

AI

Nvidia open-sources software to add guardrails to AI chatbots

Nvidia Corp. announced today that it is releasing NeMo Guardrails as open source, providing a framework for developers to ensure that generative artificial intelligence chatbots remain accurate and secure for users.

The newly released software comes as many industries are adopting these AI chatbots, powered by large language models, at an increasing pace. These powerful AI engines have many applications, such as answering customer questions, producing software code, generating artwork and more.

Even as these chatbots have become more powerful, there have been some pitfalls. Some of the more popular LLMs, such as OpenAI LP’s ChatGPT, which runs on the same model that powers Microsoft Corp.’s Bing AI chatbot, are known to “hallucinate,” meaning the AI will confidently state entirely false information. Some chatbots can also become erratic and produce unwanted responses depending on the queries put to them, and in other cases malicious users have sought to use AI to produce malware.

“Safety in generative AI is an industrywide concern,” said Jonathan Cohen, vice president of applied research at Nvidia. “NeMo Guardrails is designed to help users keep this new class of AI-powered applications safe.”

NeMo Guardrails provides developers with an easy way to set boundaries for AI chatbots, controlling conversations between users and bots across topics, safety and security. The toolkit does this by monitoring the conversation and applying simple rules set by the developer to make sure that the bot’s responses are appropriate. Developers don’t need to know advanced coding to set rules: they can be written in natural language, and the Guardrails interpreter applies them to interactions between chatbots and users.
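
To make that concrete, here is a minimal sketch of how a developer might define such a rule, modeled on the project’s published “hello world” examples. The Colang greeting rule and the model settings below are illustrative assumptions, not configuration shipped by Nvidia.

# Minimal sketch based on NeMo Guardrails' hello-world pattern; the
# greeting rule and model choice are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
app = LLMRails(config)

# The Guardrails interpreter matches the user's message to the defined
# intent and returns the scripted response.
print(app.generate(messages=[{"role": "user", "content": "hello"}]))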

Topical guardrails keep the conversation with the chatbot on topic and maintain the tone of a given conversation. For example, they can make certain that a customer service bot stays in customer service mode: a dentist’s office bot would speak only to the services offered and wouldn’t answer questions such as how much the receptionist makes, or engage in conversation that detracts from its original purpose.
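
A topical rail for that dentist’s-office scenario might look something like the sketch below. The intent samples and the canned reply are hypothetical placeholders, and the Colang would be appended to a configuration such as the one in the earlier sketch.

# Hypothetical topical rail for the dentist's-office example; the intent
# samples and reply are placeholders, not Nvidia-provided rules.
topical_rail = """
define user ask off topic
  "how much does the receptionist make"
  "what do you think about politics"

define bot explain purpose
  "I can only answer questions about our dental services and appointments."

define flow off topic
  user ask off topic
  bot explain purpose
"""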

Safety guardrails help reduce hallucinations by enforcing accurate and appropriate responses from the chatbot, fact-checking answers against the developer’s own knowledge base and making sure that the bot states it doesn’t know something instead of producing false information. They also guard against unwanted language and toxic behavior by monitoring both the user’s prompts and the bot’s replies, keeping the chatbot professional.
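
The project’s example configurations sketch this pattern with a fact-checking action that runs after the bot drafts an answer. The flow below follows those examples, but its wording and the exact behavior of the check_facts action are assumptions here, not verbatim Nvidia code.

# Sketch of a grounding rail modeled on the project's fact-checking
# examples; the flow wording and check_facts return value are assumptions.
# (Definitions of "user ask question" and "bot provide answer" are omitted.)
safety_rail = """
define flow answer question from knowledge base
  user ask question
  bot provide answer
  $accurate = execute check_facts
  if not $accurate
    bot inform answer unknown

define bot inform answer unknown
  "I don't know the answer to that."
"""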

Security guardrails help prevent what are called “jailbreaks,” in which users attempt to get around safety features that prevent the AI from being used to produce dangerous content. They also restrict the AI from doing anything it’s not supposed to do and ensure it can connect only to third-party applications that are known to be safe.
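
In practice, the “known to be safe” restriction can work by letting the bot reach external services only through actions the developer has explicitly registered. The register_action method is part of the library; the order-status helper, allow-list and URL below are hypothetical placeholders.

# Sketch: external calls go only through explicitly registered actions.
# register_action is a real library method; fetch_order_status, the
# allow-list and the URL are hypothetical placeholders.
from nemoguardrails import LLMRails, RailsConfig

ALLOWED_SERVICES = {"orders": "https://internal.example.com/orders"}  # vetted endpoints only

async def fetch_order_status(order_id: str) -> str:
    # Hypothetical helper that would query the single vetted endpoint.
    return f"Order {order_id}: looked up via {ALLOWED_SERVICES['orders']}"

config = RailsConfig.from_path("path/to/config")  # a config folder with rails defined
app = LLMRails(config)
app.register_action(fetch_order_status, name="fetch_order_status")
# A Colang flow can then call: $status = execute fetch_order_status(order_id=$order_id)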

As an open-source release, NeMo Guardrails is designed to work with numerous enterprise AI tools, such as LangChain, an open-source toolkit that allows developers to plug their applications into LLMs more easily. It also works with many AI-enabled applications, such as Zapier Inc.’s automation platform.
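
As a sketch of that integration, a LangChain-managed model can be handed to the rails runtime so every exchange passes through the rules first. The model name and config path below are placeholders; the import reflects LangChain’s API at the time of the release.

# Sketch of wrapping a LangChain LLM with guardrails; the model name and
# config path are placeholders.
from langchain.llms import OpenAI
from nemoguardrails import LLMRails, RailsConfig

llm = OpenAI(model_name="text-davinci-003")  # any LangChain-compatible LLM
config = RailsConfig.from_path("path/to/config")

# The rails sit between the user and the LLM, so topical, safety and
# security rules run on every exchange.
app = LLMRails(config, llm=llm)
result = app.generate(messages=[{"role": "user", "content": "Hi there!"}])
print(result["content"])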

“Safety, security and trust are the cornerstones of responsible AI development,” said Reid Robinson, lead product manager of AI at Zapier. “We look forward to the good that will come from making AI a dependable and trusted part of the future.”

Guardrails is being incorporated into the Nvidia NeMo framework, which allows developers to build, customize and deploy their own generative AI models using their own proprietary data. Most of the NeMo framework is already available as open source on GitHub, and NeMo is also offered as a service for enterprise customers through Nvidia AI Foundations.

NeMo Guardrails is the product of years of research by the AI team at Nvidia, Cohen said. He explained that initial feedback from developers has been positive and that with the open-source release, the team hopes for greater adoption of the framework to build safer and more secure models.

“Our goal is to enable the ecosystem of large language models to evolve in a safe, effective and useful manner,” Cohen said. “It’s difficult to use them if you’re afraid of what they might say. The guardrails system solves that problem.”

Image: geralt/Pixabay
