Red Hat integrates generative AI in OpenShift, RHEL and a host of developer tools
Enterprise software giant Red Hat Inc. delivered a flurry of artificial intelligence-related announcements today at this year’s Red Hat Summit, its annual user conference.
The announcements include an expansion of the Red Hat Lightspeed generative AI platform to Red Hat OpenShift and Red Hat Enterprise Linux, enabling users to interact with those offerings more efficiently using natural language. The company also introduced an advanced version of Red Hat OpenShift AI, its open hybrid AI and machine learning development platform that makes it easier to build AI-enabled applications, and the addition of generative AI capabilities within Konveyor, an open-source application modernization project.
OpenShift and RHEL get generative AI boost
Red Hat Lightspeed was introduced in the company’s Ansible Automation Platform last year, bringing advanced natural language processing capabilities to the company’s workflow automation platform. Those capabilities will soon be extended to Red Hat OpenShift, which is the company’s main Kubernetes distribution, used by organizations to build containerized, cloud-native applications that can run on any cloud architecture.
The company said OpenShift is commonly used by teams across different business departments, but not every member of those teams is an expert when it comes to building and deploying applications. The integration of Lightspeed, slated to arrive later this year, is designed to enable OpenShift novices to build and develop the skills they require to get more out of the application platform, lowering the barrier to entry. For experts, it’s designed as a “force multiplier,” the company said.
In a nutshell, Red Hat Lightspeed is aimed at making OpenShift much easier to use. For instance, it will make recommendations to users who are deploying new applications, such as advising that autoscaling needs to be enabled, suggesting cloud instances of an appropriate size and more. Once the user’s application has been up and running for a while, it will then assess usage patterns to autoscale down if the capacity requirements are lower than expected.
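In Kubernetes terms, the autoscaling recommendation described above typically amounts to attaching a HorizontalPodAutoscaler to the user's deployment. Red Hat hasn't published what Lightspeed will generate, but a minimal sketch of the kind of manifest such a suggestion implies might look like this (the resource names and thresholds are illustrative, not from the announcement):

```yaml
# Hypothetical HorizontalPodAutoscaler of the sort Lightspeed might recommend
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # the application being deployed
  minReplicas: 2
  maxReplicas: 10          # scales back toward minReplicas when demand drops
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

Scaling down when observed usage stays below the target is standard HPA behavior, which is consistent with the capacity-assessment scenario Red Hat describes.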
As for Red Hat Enterprise Linux, Lightspeed will help users deploy, manage and maintain their Linux environments more easily. For instance, users will be able to ask common questions and quickly solve any problems with their deployments. In this way, it can help to simplify enterprise planning and system administration tasks, improve performance, enhance security and otherwise just help users to adapt their Linux environments as and when the situation demands.
Red Hat provided an example: RHEL Lightspeed can alert administrators when a fix for a newly disclosed Common Vulnerabilities and Exposures (CVE) entry has been released, and they can tell Lightspeed, in natural language, to go ahead with the update.
Lightspeed will also flag which of the affected machines are running in production, so administrators know not to take those offline immediately to apply the update. Instead, they can use Lightspeed to schedule the patch for the next production maintenance window.
As with OpenShift, Lightspeed is expected to become available in RHEL later in the year.
Lightspeed itself is being enhanced to support more relevant code recommendations and deliver a better overall user experience, the company said. For instance, it can now leverage IBM watsonx Code Assistant to train generative AI models on existing Ansible content. What's more, this content can also be enhanced with watsonx code recommendations tailored to each organization's automation patterns, Red Hat said.
There's also an enhanced administrative dashboard for Lightspeed that allows administrators to dig into how employees are using the platform.
Ashesh Badani, Red Hat’s senior vice president and chief product officer, said today’s updates showcase the company’s commitment to helping its users make the most of the advances in generative AI. “Red Hat Lightspeed puts production-ready AI into the hands of the users who can deliver the most innovation more quickly: the IT organization,” he said.
New deployment and model serving options for OpenShift AI
Red Hat isn’t only interested in integrating AI into its own platforms; it also intends to become one of the major players in AI development. To that end, the company unveiled new enhancements to Red Hat OpenShift AI, its platform within Red Hat OpenShift for building cloud-native AI-enabled applications.
The new capabilities include model serving at the edge, now available in preview, which allows AI models to be extended to remote edge-based applications that use a single-node version of OpenShift. With this, developers can create applications with inference capabilities in the most resource-constrained environments with either intermittent or air-gapped network access, the company said.
Meanwhile, OpenShift AI users will benefit from enhanced model serving features, allowing them to use multiple model servers that can support both predictive and generative AI. The supported servers include KServe, a Kubernetes custom resource definition that orchestrates serving for every type of model, as well as Text Generation Inference Server, or TGIS, serving engines for large language models. According to the company, these enhanced model serving features will enable teams to run predictive and generative AI on a single platform for multiple use cases, meaning lower costs and simplified operations.
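Because KServe is a custom resource definition, deploying a model on OpenShift AI comes down to declaring an InferenceService object and letting the serving layer handle routing and scaling. Red Hat didn't share sample manifests, but a generic KServe sketch looks roughly like this (the model name, format and storage location are placeholders):

```yaml
# Illustrative KServe InferenceService; names and paths are hypothetical
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model        # hypothetical service name
spec:
  predictor:
    model:
      modelFormat:
        name: pytorch        # illustrative; KServe supports multiple formats
      storageUri: s3://models/example-model   # hypothetical model store
```

The same declarative pattern covers both predictive and generative models, which is what lets teams consolidate the two workloads on one platform as the company describes.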
Finally, OpenShift AI is making the AI model development process easier with the addition of project workspaces and additional workbench images, giving data scientists the flexibility to use their preferred integrated development environments and toolkits.
AI-powered policy as code
Another new capability announced today is the launch of “automated policy as code” within the Red Hat Ansible Automation Platform, which uses AI algorithms to help enforce security and governance policies and maintain compliance across vast hybrid cloud estates, the company said.
According to Red Hat, the new feature is the latest step in automation maturity, making it easier for companies to adhere to changing internal or external requirements and better prepare their information technology infrastructures to support AI at scale.
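Red Hat hasn't yet detailed the syntax for its automated policy as code, but in the Ansible ecosystem the underlying idea is to express compliance rules declaratively so they can be applied and audited across an estate. A hypothetical playbook enforcing one such rule, written in ordinary Ansible rather than Red Hat's forthcoming policy format, might look like:

```yaml
# Hypothetical compliance playbook; not Red Hat's actual policy-as-code syntax
- name: Enforce SSH hardening policy across the estate
  hosts: all
  become: true
  tasks:
    - name: Ensure root login over SSH is disabled
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart sshd
  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Expressing policy this way makes the rule repeatable and version-controlled, which is the maturity step the announcement points toward.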
Generative AI-enhanced app modernization
Red Hat said it’s introducing generative AI capabilities within Konveyor, an open-source project that’s used to modernize legacy applications by rebuilding them as cloud-native apps.
Konveyor essentially provides the foundational technologies for Red Hat’s application migration toolkit, and the introduction of generative AI will help to improve the economics of re-platforming operations, the company said.
Konveyor can now integrate with generative AI models such as IBM watsonx Code Assistant, which will provide coding suggestions throughout the process of rebuilding legacy applications in the cloud, directly in the developer’s integrated development environment, saving considerable time, the company said. In addition, Konveyor will use retrieval-augmented generation techniques to leverage an organization’s application migration data, effectively learning how it approaches app modernization so as to improve the quality of its code recommendations.
Infusing apps with generative AI
Finally, Red Hat announced a dedicated extension to its Podman Desktop developer experience, called Podman AI Lab, which allows developers to build, test and run generative AI-powered applications within containers on their personal computers and workstations.
Podman AI Lab is said to come with a recipe catalog that eases the process of creating generative AI apps, and includes templates for common use cases such as chatbots that can augment customer support and virtual assistants. Other templates include text summarizers for distilling large amounts of content, code generators to assist developers with application development, object detection capabilities for identifying and locating objects and persons within images and video frames, and audio-to-text transcription to immediately transcribe audio into text.
Sarwar Raza, Red Hat’s vice president and general manager of the Application Developer Business Unit, said many traditional application developers have found that there’s a big learning curve when it comes to integrating generative AI. “Podman AI Lab enables them to use familiar tools and environments to apply AI models to their code and workflows in a safer and more secure manner, without requiring costly infrastructure investments or extensive AI expertise,” he said.