Google rolls out tools for developers to build machine learning and AI into their products
Google LLC is rolling out a set of tools that let developers build machine learning and artificial intelligence into their applications using high-performance AI models.
At Google I/O, the company’s annual developer conference, a number of new tools for TensorFlow were announced today. TensorFlow is a free, open-source software library for machine learning and AI that focuses particularly on the training and inference of neural networks across many different architectures, from servers to mobile devices.
Google is adding expanded support for AI models, including generative AI and image diffusion models, to TensorFlow so that developers can more easily integrate them into their applications using the library. Generative AI has become massively popular recently with the introduction of OpenAI LP’s chatbot ChatGPT, which is capable of holding human-seeming conversations, and the art-generating AI Stable Diffusion, which can create beautiful and surreal artwork.
Keras, a high-level Python library for interacting with TensorFlow, is getting two updates aimed at making it simpler for developers to add AI capabilities to their apps with just a few lines of code. The first is KerasCV, for computer vision, and the second is KerasNLP, for natural language processing.
Whether a developer wants to call upon a text-generating AI or an image-generating AI, they can use KerasNLP or KerasCV respectively, and with just a few lines of code provide a prompt and receive an output directly in their app. Since these additions are part of Keras, they have full access to the TensorFlow ecosystem.
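For illustration, here is a minimal sketch of what those few lines can look like, assuming the keras_nlp and keras_cv packages are installed; the preset name, prompts and image dimensions are placeholders rather than anything Google specified:

```python
# A minimal sketch, not Google's example code: preset names, prompts and
# image sizes are illustrative. Requires the keras_nlp and keras_cv packages.
import keras_nlp
import keras_cv

# KerasNLP: load a pretrained language model and generate text from a prompt.
lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
print(lm.generate("Machine learning on mobile is", max_length=50))

# KerasCV: load Stable Diffusion and turn a text prompt into an image.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
images = model.text_to_image("a surreal photograph of a floating island",
                             batch_size=1)
```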
Google also updated DTensor, a specialized tool that allows AI model training to be parallelized at scale. As AI models get larger, training becomes more difficult because they can no longer fit on a single device, and developers have traditionally needed to break them apart, or shard them, across multiple processors, be they graphics processing units or tensor processing units.
With this update, DTensor allows for larger and more performant training and fine-tuning, with performance on par with industry benchmarks for large-scale training, so developers can get their AI models ready more quickly and efficiently.
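As a rough sketch of the sharding idea, the snippet below uses TensorFlow’s experimental DTensor API to define a two-device mesh and shard a tensor across it; the virtual CPU devices are an illustrative assumption so the example runs anywhere, standing in for the GPUs or TPUs a real training job would use:

```python
# An illustrative sketch using TensorFlow's experimental DTensor API.
import tensorflow as tf
from tensorflow.experimental import dtensor

# Split the local CPU into two logical devices so the example runs anywhere.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu, [tf.config.LogicalDeviceConfiguration()] * 2)

# Define a one-dimensional mesh whose "batch" axis spans both devices.
mesh = dtensor.create_mesh([("batch", 2)], devices=["CPU:0", "CPU:1"])

# Shard a tensor's first axis across the mesh; the second stays replicated.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)
x = dtensor.call_with_layout(tf.zeros, layout, shape=(8, 4))
print(dtensor.fetch_layout(x))  # shows how the tensor is distributed
```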
Because a lot of machine learning work starts in research, Google also made it easier for researchers to bring their work into TensorFlow: models written in JAX, a high-performance framework for transforming numerical functions, can now be moved into TensorFlow through an application programming interface called JAX2TF. That means researchers developing brand-new models can continue to do so, and when they’re ready for production, they can pipe those models through the API and go.
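A minimal sketch of that path, assuming a toy JAX function and made-up shapes, might look like this: convert the function with jax2tf, wrap it in a tf.function and export it as a standard SavedModel that the rest of the TensorFlow ecosystem can load:

```python
# A toy example of the JAX-to-TensorFlow path; the model function and
# shapes are invented for illustration.
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf

# A research model written as a plain JAX function.
def predict(params, x):
    return jnp.tanh(x @ params["w"] + params["b"])

params = {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)}

# Convert to a TensorFlow function and export a standard SavedModel.
tf_predict = tf.function(
    jax2tf.convert(predict),
    input_signature=[
        {"w": tf.TensorSpec((4, 2)), "b": tf.TensorSpec((2,))},
        tf.TensorSpec((8, 4)),
    ],
    autograph=False,
)
module = tf.Module()
module.predict = tf_predict
tf.saved_model.save(module, "/tmp/jax_model")
```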
Google is also rolling out ML Hub, a space for building machine learning and AI solutions. In the hub, developers, engineers and other interested parties can define their use cases and what they want to accomplish, and Google will provide the education, templates, modules and tools to build bespoke AI solutions from its ecosystem.
Google already has numerous tools for getting machine learning and AI into developers’ apps, but the ecosystem is complex and dispersed, which can make it difficult to discover the right tool for a particular desired outcome.
MediaPipe makes it easy to deploy machine learning on mobile
Not all AI happens on giant server farms. Some models are small enough to run on much more constrained computing devices such as mobile phones, and to make that easier, Google has upgraded MediaPipe.
MediaPipe makes it easier to build, customize and deploy on-device machine learning solutions for portable, edge-based compute, whether on a mobile device, a desktop or the web. Running on-device, machine learning can perform gesture detection, such as watching for hand and face movements, enabling powerful capabilities for devices. It can also be used for auto-translation, background blurring and numerous other purposes.
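As an example of how little code an on-device solution can take, the sketch below uses MediaPipe’s Python Tasks API to recognize hand gestures in a still image; the model bundle path and image file are placeholders, with the bundle downloaded separately from Google’s MediaPipe model pages:

```python
# A minimal on-device inference sketch with MediaPipe's Python Tasks API.
# The .task model bundle and image path are placeholders.
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

base_options = mp_python.BaseOptions(model_asset_path="gesture_recognizer.task")
options = vision.GestureRecognizerOptions(base_options=base_options)
recognizer = vision.GestureRecognizer.create_from_options(options)

# Run inference locally on a single image and read off the top gesture.
image = mp.Image.create_from_file("hand.jpg")
result = recognizer.recognize(image)
if result.gestures:
    top = result.gestures[0][0]
    print(f"{top.category_name}: {top.score:.2f}")
```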
One particular use case for MediaPipe and smaller AI models is accessibility, especially for individuals who are unable to use their limbs to operate devices. To that end, Google developed “Project Gameface,” a computer control interface that translates facial expressions into mouse movements so that disabled gamers can play video games.
Google teamed up with Lance Carr, a gamer with a rare form of muscular dystrophy. His house burned down, destroying the equipment he normally used to play games such as “World of Warcraft.” Engineers at Google set about using MediaPipe to enable a webcam to control his gaming experience – for example, raising his eyebrows to click and drag, and opening his mouth or twitching a lip to one side to move the cursor.
All of this can be done on a single machine without the need for especially powerful hardware, and it restored Carr’s ability to game and fly across Azeroth once again.
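The article doesn’t publish Project Gameface’s internals, but a hypothetical sketch of the underlying technique – using MediaPipe’s Face Landmarker with blendshape output to map expressions to control events – might look like the following, where the model path, thresholds and action names are invented for illustration:

```python
# Hypothetical sketch in the spirit of Project Gameface, not Google's code:
# the model path, blendshape thresholds and action names are invented.
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

base_options = mp_python.BaseOptions(model_asset_path="face_landmarker.task")
options = vision.FaceLandmarkerOptions(
    base_options=base_options,
    output_face_blendshapes=True)  # per-expression scores, not just landmarks
landmarker = vision.FaceLandmarker.create_from_options(options)

def expression_to_action(result):
    """Map one frame's blendshape scores to a mouse-control event."""
    if not result.face_blendshapes:
        return None
    scores = {b.category_name: b.score for b in result.face_blendshapes[0]}
    if scores.get("browInnerUp", 0.0) > 0.5:  # raised eyebrows -> click/drag
        return "click"
    if scores.get("jawOpen", 0.0) > 0.5:      # open mouth -> move cursor
        return "move"
    return None
```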
Project Gameface represents only one of many possible uses of portable AI, but it is a powerful one. “Controlling my computer with funny faces? It’s pretty awesome,” Carr said.