UPDATED 15:52 EDT / JULY 25 2024

Nvidia works with Accenture to pioneer custom Llama large language models

Accenture Plc on Tuesday announced the launch of the Accenture AI Refinery framework, built on Nvidia Corp.’s new AI Foundry service. The offering is designed to let clients build custom large language models from Llama 3.1, refining and personalizing the models with their own data and processes to create domain-specific generative AI solutions.

The generative AI journey to Nvidia AI Foundry

In a briefing, Kari Briski, vice president of AI software at Nvidia, said she’s often asked about the buzz surrounding generative AI.

“It’s been a journey,” she said. “Yes, generative AI has been a big investment. And enterprises ask, ‘Why should we do it? What are the use cases?’ When you think about employee productivity, have you ever wished that you had more hours in the day? I know that I do. Maybe if there were 10 of you, you could get more things done. And that’s what generative AI helps — automate repetitive, mundane tasks, things like summarization, best practices and next steps.”

AI Foundry: a comprehensive infrastructure

“Nvidia AI Foundry is a service that enables enterprises to use accelerated computing and software tools combined with our expertise to create and deploy custom models that can be supercharged for enterprises’ generative AI applications,” Briski said.

The AI Foundry platform offers an infrastructure for developing and deploying custom AI models. It includes:

  • Foundation models: A suite of Nvidia and community models, including Llama 3.1.
  • Accelerated computing: DGX Cloud provides scalable compute resources essential for large-scale AI projects.
  • Expert support: Nvidia AI Enterprise experts assist in the development, fine-tuning and deployment of AI models.
  • Partner ecosystem: Collaborations with partners like Accenture offer consulting services and solutions for AI-driven transformation projects.

Briski said that once a company customizes a model, it must evaluate it. This is where some customers get stuck, she noted. She recounted some of the things she’s heard from customers: “‘How well is my model doing? I just customized it. Is [it] doing the things that I need?’ So NeMo customers are offered many ways to evaluate: with academic benchmarks, you can upload your own custom evaluation benchmarks, you can connect to a third-party ecosystem of human evaluators, and then you can also use an LLM as a judge.”
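The two automated styles Briski lists, scoring against a benchmark with reference answers and using an LLM as a judge, can be sketched as follows. This is an illustrative harness only, not NeMo’s actual evaluation API: the model and judge below are stand-in stubs, where a real setup would call a deployed model endpoint and a judge model.

```python
def custom_model(prompt: str) -> str:
    """Stand-in stub for the customized model under evaluation."""
    return "Paris" if "capital of France" in prompt else "unknown"

def llm_judge(prompt: str, answer: str) -> float:
    """Stand-in stub for an LLM-as-a-judge call; returns a 0..1 score."""
    return 1.0 if answer != "unknown" else 0.0

# A tiny benchmark with reference answers (the second entry has no
# good reference, mimicking the open-ended cases a judge handles).
benchmark = [
    {"prompt": "What is the capital of France?", "reference": "Paris"},
    {"prompt": "Summarize our Q3 sales policy.", "reference": "..."},
]

def evaluate(model, judge, benchmark):
    """Score a model two ways: exact match against references, and judge score."""
    exact, judged = 0.0, 0.0
    for item in benchmark:
        answer = model(item["prompt"])
        exact += float(answer == item["reference"])   # benchmark-style scoring
        judged += judge(item["prompt"], answer)       # LLM-as-a-judge scoring
    n = len(benchmark)
    return {"exact_match": exact / n, "judge_score": judged / n}

scores = evaluate(custom_model, llm_judge, benchmark)
```

The point of the sketch is the shape of the loop: the same answers can be graded by more than one method, which is why stuck customers benefit from having benchmark, human and judge options side by side.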

Industry adoption

As Briski indicated in the briefing, several companies are using AI Foundry, including Amdocs, Capital One and ServiceNow. According to Nvidia, these three are integrating AI Foundry into their workflows. The company says they’ve gained a competitive edge by developing custom models that incorporate industry-specific knowledge.

The advantages of Nvidia NIM

Nvidia’s NIM has some unique advantages that Briski discussed.

“NIM is a customized model and container accessed by a standard API,” she explained. “And this is the culmination of years of work and research that we’ve done.” She said she has been at Nvidia for eight years and the company has been working on NIM for at least that long.

“It’s on a cloud-native stack, it runs out-of-the-box on any GPU,” she said. “That’s across our 100 million-plus installed base of Nvidia GPUs. Once you have NIM, you can customize and add models very quickly.”

She added that NIM now supports Llama 3.1, including the Llama 3.1 8B NIM (a single-GPU LLM), the Llama 3.1 70B NIM (for high-accuracy generation) and the Llama 3.1 405B NIM (for synthetic data generation).
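The “standard API” Briski describes is an OpenAI-compatible HTTP interface exposed by the NIM container. A minimal sketch of calling it might look like the following; the host, port and model name assume a locally deployed Llama 3.1 8B NIM and will differ in a real deployment.

```python
import json
import urllib.request

# Assumed local deployment; a real NIM may run on a different host/port.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "meta/llama-3.1-8b-instruct") -> dict:
    """Build an OpenAI-compatible chat completion payload for a NIM."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def ask_nim(prompt: str) -> str:
    """POST the request to the NIM endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: first choice carries the assistant message.
    return body["choices"][0]["message"]["content"]
```

Because the interface follows the OpenAI schema, swapping in the 70B or 405B NIM is just a change of the `model` string, which is the portability Briski is pointing at.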

Deploying custom LLMs

In addition, Accenture announced it worked with Nvidia on the AI Refinery framework, which runs on the AI Foundry. Accenture said the framework advances the field of gen AI for enterprises. Integrated within Accenture’s foundation model services, it promises to help businesses develop and deploy custom LLMs tailored to their requirements. According to both companies, the framework includes four key elements:

  • Domain model customization and training: This lets enterprises refine LLMs using their own data and processes, enhancing the relevance and value of the models for specific business needs. The customization runs on AI Foundry, which should result in robust and efficient model training.
  • Switchboard Platform: This enables users to select and combine models based on specific business contexts or criteria such as cost and accuracy.
  • Enterprise Cognitive Brain: This component scans and vectorizes corporate data and knowledge, creating an enterprise-wide index that enhances the capabilities of generative AI systems.
  • Agentic architecture: Designed to enable AI systems to operate autonomously, this architecture supports responsible AI behavior with minimal human oversight.
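The “scan and vectorize” pattern behind a component like the Enterprise Cognitive Brain can be illustrated in a few lines: embed documents into vectors, build an index, and retrieve the closest match for a query. The bag-of-words embedding below is a toy stand-in, not Accenture’s implementation; a production system would use a learned embedding model and a vector database.

```python
import math

def build_vocab(texts):
    """Map every unique token across the corpus to a vector dimension."""
    vocab = {}
    for text in texts:
        for token in text.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def embed(text: str, vocab: dict) -> list:
    """Vectorize text as a normalized bag-of-words over the vocabulary."""
    vec = [0.0] * len(vocab)
    for token in text.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def search(index: dict, query: str, vocab: dict) -> str:
    """Return the indexed document most similar (by cosine) to the query."""
    q = embed(query, vocab)
    return max(index, key=lambda name: sum(a * b for a, b in zip(index[name], q)))

# Hypothetical corporate documents standing in for "corporate data and knowledge."
docs = {
    "hr_policy": "vacation leave policy for employees",
    "sales_guide": "quarterly sales targets and pricing guide",
}
vocab = build_vocab(list(docs.values()))
index = {name: embed(text, vocab) for name, text in docs.items()}
```

An enterprise-wide index of this kind is what lets a generative AI system ground its answers in company material rather than in the base model’s training data alone.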

Strategic importance and impact

Accenture’s AI Refinery framework could change enterprise functions, starting with marketing and expanding to other areas. The ability to quickly create and deploy generative AI applications tailored to specific business needs underscores Accenture’s commitment to innovation and transformation. By applying the framework internally before offering it to clients, Accenture shows the potential it sees.

Reinventing enterprises

In the announcement, Julie Sweet, chair and chief executive officer of Accenture, highlighted the transformative potential of generative AI in reinventing enterprises. She emphasized the importance of deploying applications powered by custom models to meet business priorities and drive industry-wide innovation.

In addition, Jensen Huang, founder and CEO of Nvidia, noted that Accenture’s AI Refinery would provide the necessary expertise and resources to help businesses create custom Llama LLMs.

Some final thoughts

Accenture’s launch of the AI Refinery framework could be pivotal in adopting and deploying generative AI in enterprises. By employing the Llama 3.1 models, which Briski applauded in the briefing, and the capabilities of AI Foundry, Accenture enables businesses to create highly customized and effective AI solutions.

As enterprises continue to explore the potential of generative AI, frameworks such as Accenture’s AI Refinery will play a crucial role in turning that potential into deployed applications.

The collaboration between Accenture and Nvidia promises to drive further advancements in AI technology, offering businesses avenues for growth and innovation. It also underscores that all AI roads lead to Nvidia.

 Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Image: Nvidia
