

At Red Hat Summit today, the IBM Corp. subsidiary unveiled a suite of tools and strategic updates aimed at boosting productivity, simplifying operations and modernizing infrastructure across hybrid and edge computing environments.
They include the general availability of Red Hat OpenShift Lightspeed, a generative AI assistant for OpenShift users; the tech preview of Red Hat Edge Manager, for administering fleets of edge devices; and an expanded collaboration with Advanced Micro Devices Inc. focused on artificial intelligence and virtualization performance.
The announcements reflect Red Hat’s strategy to offer a consistent, scalable platform across diverse workloads and platforms, from data center virtual machines to containerized edge applications and AI inference in the cloud. The current focus appears to be less on introducing new products and more on integrating existing technologies in a way that reduces complexity and operational overhead.
Red Hat OpenShift Lightspeed, which is now generally available, integrates generative AI directly into the OpenShift console. The assistant provides step-by-step guidance through natural language queries and delivers context-aware assistance on OpenShift tasks such as troubleshooting and resource investigation.
Red Hat said it designed the tool to work with a variety of AI models, including those from OpenAI LLC, Microsoft Corp.’s Azure OpenAI and IBM’s WatsonX. The assistant also supports private deployment on Red Hat Enterprise Linux AI or OpenShift AI.
Customers can use various application programming interfaces and models, including in disconnected on-premises setups, for applying AI to information technology tasks, said Mike Barrett, general manager of hybrid platforms at Red Hat.
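That model flexibility typically comes down to OpenAI-compatible endpoints: the same client code can point at a hosted service or at a privately run model server inside the customer’s own environment. The sketch below is a hypothetical illustration of that general pattern using the openai Python client; the endpoint URL, credential and model name are placeholders, not OpenShift Lightspeed’s actual configuration.

```python
# Hypothetical illustration of the OpenAI-compatible API pattern that lets one
# client target either a hosted service or a private, on-premises model server.
# The base_url, api_key and model name are placeholders, not values used by
# OpenShift Lightspeed itself.
from openai import OpenAI

client = OpenAI(
    base_url="https://models.example.internal/v1",  # private, disconnected deployment
    api_key="REPLACE_ME",                           # local credential; never leaves the environment
)

response = client.chat.completions.create(
    model="granite-3-8b-instruct",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You are an assistant for OpenShift administrators."},
        {"role": "user", "content": "Why is my pod stuck in CrashLoopBackOff?"},
    ],
)
print(response.choices[0].message.content)
```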
Other features include a preview capability for cluster-level interaction, which allows the assistant to draw on real-time data from the cluster environment for more specific responses. A separate preview, Bring Your Own Knowledge, lets organizations customize the assistant’s responses using internal documentation and operational practices.
Users of Red Hat OpenShift Virtualization also get assistance with managing virtual machines alongside containerized applications, a capability Red Hat said is aimed at easing transitions from legacy virtualization platforms and supporting hybrid modernization strategies.
Red Hat Edge Manager is designed to manage large-scale fleets of edge devices. Available as a technology preview, it centralizes control over applications and infrastructure across distributed environments such as retail networks and industrial facilities.
“Edge Manager is designed to help manage tens of thousands of devices with policy-based control, even in environments lacking on-site IT expertise,” said Francis Chow, vice president and general manager of edge at Red Hat. “We’re targeting edge scenarios where data is processed outside traditional data centers, on mixed hardware, with the goal of centralized, scalable management.”
Features include policy-based deployment using desired-state configurations, customizable alerting and secure communications using the mutual TLS protocol. A resilient agent architecture enables continued management even in unreliable network conditions.
Designed to support containerized workloads and optimized for Red Hat Enterprise Linux image mode, Edge Manager offers lifecycle tools for application and operating system onboarding, updates and decommissioning.
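Desired-state, policy-based management of this kind generally follows a reconciliation pattern: an agent on each device periodically compares a declared target configuration against what is actually running and converges toward it, which is what allows management to continue through unreliable links. The sketch below is a generic Python illustration of that loop, not Edge Manager’s actual agent code; every field and function name is hypothetical.

```python
# Generic sketch of a desired-state reconciliation loop of the kind fleet
# managers rely on. All names here are hypothetical illustrations, not the
# Red Hat Edge Manager API.
import time

desired_state = {
    "os_image": "registry.example.com/rhel-edge:9.4",  # placeholder image reference
    "apps": {"pos-terminal": "2.1.0", "telemetry": "1.4.2"},
}

def read_actual_state() -> dict:
    """Inspect the device and report what is currently deployed (stubbed here)."""
    return {
        "os_image": "registry.example.com/rhel-edge:9.3",
        "apps": {"pos-terminal": "2.1.0"},
    }

def apply_change(key: str, target) -> None:
    """Apply one change to converge toward the desired state (stubbed here)."""
    print(f"reconciling {key} -> {target}")

def reconcile_once() -> None:
    actual = read_actual_state()
    if actual.get("os_image") != desired_state["os_image"]:
        apply_change("os_image", desired_state["os_image"])
    for app, version in desired_state["apps"].items():
        if actual.get("apps", {}).get(app) != version:
            apply_change(f"app/{app}", version)

if __name__ == "__main__":
    # The agent retries on its own schedule, so intermittent connectivity
    # to the management hub only delays convergence rather than breaking it.
    for _ in range(3):
        reconcile_once()
        time.sleep(5)
```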
The expanded partnership with AMD is aimed at optimizing AI workloads and virtualization infrastructure across hybrid cloud environments. Red Hat will integrate AMD’s Instinct graphics processing units and Epyc central processing units with its AI and OpenShift platforms.
OpenShift AI now supports AMD Instinct GPUs, enabling performance tuning for large language models and small language models on Microsoft Azure infrastructure. RHEL AI is also certified to run inference on AMD Instinct MI300X GPUs.
The AMD Epyc processor “is a beast at up to 192 cores,” Barrett said. “This is a symbol that customers, when they’re choosing their next generation virtualization platform, can go wherever their infrastructure choice leads them.”
The two companies said they are also jointly contributing to the vLLM open source project, which is focused on optimizing inference performance and multi-GPU support. Enhancements include improvements to Triton kernels, FP8 precision support and better collective communication across GPUs. AMD Instinct GPUs will also be supported out of the box in Red Hat’s AI Inference Server. Red Hat OpenShift Virtualization has also been validated for Epyc CPUs, allowing organizations to run virtual machines and containerized applications on the same platform.
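vLLM exposes a compact Python API for running models, and the areas the companies say they are working on, FP8 precision and multi-GPU execution, surface as options on that API. The snippet below is a minimal sketch of offline inference with vLLM; the model name is a placeholder, and FP8 quantization plus tensor parallelism assume a supported GPU and vLLM build.

```python
# Minimal sketch of offline inference with vLLM. The model name is a
# placeholder, and FP8 quantization plus tensor parallelism assume a
# supported GPU and vLLM build.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model for illustration
    quantization="fp8",                        # FP8 precision, where the hardware supports it
    tensor_parallel_size=2,                    # spread the model across two GPUs
)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Summarize what a hybrid cloud platform does."], params)
print(outputs[0].outputs[0].text)
```

Spreading a single model across multiple accelerators in this way is the kind of workload the joint vLLM contributions on collective communication and multi-GPU support are meant to improve.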