UPDATED 09:00 EDT / JUNE 06 2022

AI

Nvidia TAO Toolkit update aims to simplify deep learning model development

Nvidia Corp. is aiming to make life easier for developers of artificial intelligence applications with an update to its TAO Toolkit platform.

TAO Toolkit is a low-code version of Nvidia’s Train, Adapt and Optimize framework that’s designed to simplify and accelerate AI model creation for enterprise applications.

As the company explains, TAO Toolkit makes life significantly easier for application developers. In general, the creation of deep learning models is an extremely complex and time-consuming process that requires the use of large datasets, domain expertise and countless hours. For many companies, this can be cost-prohibitive.

Nvidia said TAO Toolkit, built on the TensorFlow and PyTorch frameworks, accelerates deep learning model training by abstracting away much of the complexity. Developers can leverage transfer learning to create customized deep learning models that are optimized for various industry-specific use cases, including defect detection, language translation, custom voice creation and traffic management. Using TAO Toolkit, it’s possible to develop AI models with much less training data, human domain expertise and time than it would normally take.
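To give a sense of the workflow TAO Toolkit abstracts away, the sketch below shows a bare-bones transfer-learning loop written in plain PyTorch and torchvision rather than TAO's own interface. It is only an illustration of the underlying technique: the backbone choice, dataset path, class count and hyperparameters are assumptions, not anything Nvidia ships.

```python
# A minimal transfer-learning sketch in plain PyTorch/torchvision, illustrating the
# pattern TAO Toolkit abstracts: start from a pre-trained backbone, swap in a new
# task-specific head, and fine-tune on a small custom dataset.
# The dataset path, class count and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from a backbone pre-trained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a custom task, e.g. two-class defect detection.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = model.to(device)

# Small custom dataset arranged as one folder per class (hypothetical path).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("data/defects/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Fine-tune only the new head: far less data and compute than training from scratch.
model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because only the small replacement head is trained while the pre-trained features are reused, a few hundred labeled images can be enough, which is the data and time savings the toolkit is built around.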

The TAO Toolkit notably comes with dozens of pre-trained models that developers can use to build applications. One of the most interesting examples comes from OneCup AI, which used them to create an app called Bovine Expert Tracking and Surveillance, or BETSY. As the name suggests, this is a cattle-focused application that can track the health, growth, nutrition, activity and phenotypes of livestock using vision AI. BETSY is like an extra pair of eyes for cattle ranchers, Nvidia said, helping them spot any signs of ill health or poor nutrition in their herds.

Other pre-trained models enable developers to apply data gathered from lidar sensors for robotics and automotive applications, and classify human actions based on their poses, which has applications in areas such as public safety, retail and workplace safety. The TAO Toolkit also comes with various speech AI models that can be used to create customized AI voices with just 30 minutes’ worth of recorded data. Those voices can be used to power smart devices, game characters, quick-service restaurant apps and more.

According to Holger Mueller of Constellation Research Inc., the battle for leadership in AI is happening at all levels of the technology stack, as evidenced by today’s announcement.

“It’s Nvidia’s turn to push the envelope with the latest edition of the TAO Toolkit, which is a key simplifier on the software side as it helps developers to create successful AI use cases,” Mueller said. “It’s good to see the low-code nature of this release, plus the foundation on TensorFlow and PyTorch, which are two of the leading AI platforms. The pre-trained models are especially useful, allowing developers to easily create AI that provides real value for enterprises.”

TAO Toolkit isn’t just for pre-trained models, though, as developers can also use it to create their own. In this case, they’ll benefit from new capabilities such as “bring your own model weights,” which makes it possible to fine-tune and optimize non-TAO models with imported, pre-trained weights from ONNX.
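Conceptually, "bring your own model weights" starts with a model trained elsewhere being serialized to the ONNX interchange format so its weights can be imported for further fine-tuning. The sketch below shows only that export-and-inspect side using standard PyTorch and onnx tooling; it is not TAO's own API, and the stand-in model and filenames are assumptions.

```python
# A minimal sketch of the export side of a "bring your own weights" flow: serialize a
# model trained outside TAO to ONNX so its pre-trained weights can be imported into
# another toolchain for fine-tuning. Not TAO's own API; the stand-in model and
# filenames are illustrative assumptions.
import torch
import onnx
from torchvision import models

# A model trained elsewhere (here, an ImageNet-pre-trained ResNet-18 as a stand-in).
model = models.resnet18(pretrained=True)
model.eval()

# Export the graph and weights to the ONNX interchange format.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "my_model.onnx",
                  input_names=["input"], output_names=["logits"])

# Load the ONNX file back and inspect the stored weight tensors (initializers),
# which is what a downstream tool would map onto its own model definition.
onnx_model = onnx.load("my_model.onnx")
onnx.checker.check_model(onnx_model)
for initializer in onnx_model.graph.initializer:
    print(initializer.name, tuple(initializer.dims))
```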

“The ability to bring your own model weights is very important,” Mueller said. “This enables the necessary localization of AI models.”

Developers will be able to visualize their models better too, thanks to an integration with TensorBoard. Through this, it’s possible to get a better understanding of model training performance by visualizing scalars such as training and validation loss, model weights and predicted images. Developers can also use TensorBoard to experiment with models by changing various parameters.
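The kind of scalar logging TensorBoard visualizes can be sketched with PyTorch's standard SummaryWriter, as below. The metric names, log directory and dummy values are placeholders, not TAO's actual log format.

```python
# A minimal sketch of the scalar logging TensorBoard visualizes: training and
# validation loss curves written with PyTorch's standard SummaryWriter.
# Metric names and dummy values are placeholders, not TAO's actual log format.
import math
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/tao_experiment")  # hypothetical log directory

for epoch in range(20):
    # In a real run these values would come from the training loop.
    train_loss = math.exp(-0.2 * epoch)
    val_loss = math.exp(-0.15 * epoch) + 0.05
    writer.add_scalar("loss/train", train_loss, epoch)
    writer.add_scalar("loss/validation", val_loss, epoch)

writer.close()
# Inspect the curves with:  tensorboard --logdir runs
```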

Finally, Nvidia said, TAO Toolkit can now be deployed as-a-service with REST application programming interfaces. That means developers can build new AI services or integrate TAO into existing ones with REST APIs, and manage and orchestrate the framework on Kubernetes.
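As a rough illustration of what consuming a training service over REST looks like, the sketch below submits a job and polls its status with Python's requests library. The endpoint URL, payload fields and authentication scheme are entirely hypothetical stand-ins, not TAO's documented API.

```python
# A rough illustration of driving a model-training service over REST.
# The endpoint URL, payload fields and token are entirely hypothetical; they stand
# in for whatever the TAO Toolkit service actually exposes.
import requests

BASE_URL = "http://tao-service.example.com/api/v1"  # hypothetical service endpoint
headers = {"Authorization": "Bearer <token>"}       # hypothetical auth scheme

# Submit a fine-tuning job against a pre-trained model (hypothetical payload).
job_spec = {
    "base_model": "pretrained_classification",
    "dataset_id": "defects-v1",
    "epochs": 5,
}
resp = requests.post(f"{BASE_URL}/jobs", json=job_spec, headers=headers, timeout=30)
resp.raise_for_status()
job_id = resp.json()["id"]

# Poll the job status until the service reports completion.
status = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=headers, timeout=30).json()
print(status)
```

Exposing the toolkit this way is what lets it be wrapped into existing services and orchestrated on Kubernetes rather than run only as a local command-line tool.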

Mueller’s colleague at Constellation Research, Andy Thurai, told SiliconANGLE that although the likes of OpenAI’s GPT-3, Microsoft Corp. and Google LLC have all done some work to make AI model pipelines easier, those pipelines are still very expensive and time-consuming, and for many companies the business case takes too long to realize to make them worthwhile. With TAO Toolkit, he said, Nvidia is having a go at getting AI models to market faster.

“While low-level tools such as PyTorch and TensorFlow have given help to data scientists, these new initiatives can move the AI model pipeline to business folks,” Thurai said. “In the TAO toolkit, the combination of pre-trained industry- and use case-specific models, plus the option to fine-tune models with enterprise data, means that time to market can be much faster and possibly done without the need of many data scientists. Nvidia is a little bit late to the market, as GPT-3/Microsoft has made waves on this front for many months now.”

Nvidia said the new version of TAO Toolkit will be available with the next quarterly update to its Nvidia AI Enterprise platform.

Image: Nvidia
