IBM Corp. subsidiary Red Hat today is unveiling a broad set of product and partnership announcements aimed at helping enterprises put artificial intelligence into operation, modernize infrastructure and extend open-source platforms into new environments ranging from software-defined vehicles to computing in space.
The announcements at Red Hat Summit in Atlanta extend Linux and container platforms into specialized environments and give enterprises greater operational control over hybrid cloud infrastructure. The company is also emphasizing new governance, sovereignty and security features as organizations move from experimentation to production AI deployments.
The centerpiece is Red Hat AI 3.4, an updated version of the company’s enterprise AI platform designed to support large-scale inferencing and agentic AI deployments across hybrid cloud environments.
The Red Hat AI strategy is divided into four key pillars, said Joe Fernandes, vice president and general manager of Red Hat AI. “First, helping customers deliver fast, flexible and efficient inference, serving models in their environment,” he said in a pre-event briefing. “Second, connecting their enterprise data to those models and agents. Third, helping them accelerate the deployment and management of agents across a hybrid cloud environment. Fourth, bringing that all together on our integrated AI platform, enabling them to run any model in any agent across any hardware and cloud environment.”
The release adds a new model-as-a-service capability that enables administrators to govern access to AI models through a centralized gateway, track usage and apply policies. Red Hat is also expanding support for distributed inferencing and introducing techniques such as speculative decoding to improve performance and reduce operating costs. Speculative decoding is a large language model inference optimization in which a small, fast draft model proposes several tokens that the larger model then verifies in a single pass, accelerating text generation up to threefold without reducing output quality.
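For readers unfamiliar with the technique, the draft-and-verify loop behind speculative decoding can be sketched in a few lines of Python. This is a toy greedy variant with simple stand-in functions playing the role of the two models; it is illustrative only and does not reflect Red Hat's implementation.

```python
# Toy sketch of speculative decoding (greedy variant). Two deterministic
# next-token functions stand in for the draft and target models; all names
# here are illustrative assumptions, not a real API.

def speculative_decode(target, draft, prompt, max_new=8, k=4):
    """Generate tokens with a fast draft model, verified by the target model.

    target, draft: functions mapping a token sequence -> next token.
    k: number of tokens the draft model proposes per round.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. The cheap draft model speculates k candidate tokens ahead.
        proposed = []
        ctx = list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2. The target model verifies the proposals; the longest matching
        #    prefix is accepted, then the target's own token replaces the
        #    first mismatch. Multiple tokens can be accepted per target pass,
        #    which is where the speedup comes from.
        accepted = []
        ctx = list(seq)
        for t in proposed:
            expected = target(ctx)
            if expected != t:
                accepted.append(expected)  # target's correction
                break
            accepted.append(t)
            ctx.append(t)
        seq.extend(accepted[: max_new - (len(seq) - len(prompt))])
    return seq

# Toy "models": the target predicts the next integer; the draft agrees
# except on multiples of 3, where it skips ahead by 2.
target = lambda s: s[-1] + 1
draft = lambda s: s[-1] + 1 if (s[-1] + 1) % 3 else s[-1] + 2

out = speculative_decode(target, draft, [0], max_new=6)
# out == [0, 1, 2, 3, 4, 5, 6]: every token matches the target model's
# output, even though the draft model proposed most of them.
```

The key property, preserved even in this toy version, is that the final sequence is identical to what the target model would have produced alone; the draft model only changes how many expensive target-model passes are needed, not the result.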
Fernandes said the company believes that inferencing, rather than model training, will become the dominant enterprise AI workload.
“What’s really going to drive inference demand exponentially is AI agents,” he said. “We provide a platform where customers can deploy and manage their AI agents across a hybrid infrastructure environment.”
The company is also adding agent management and observability features, including tracing for inference calls and tool usage, as well as support for Model Context Protocol gateways and catalogs. Additional features include prompt management, automated evaluation tools and integrated AI safety testing capabilities powered in part by Red Hat’s recent acquisition of Chatterbox Labs Inc.
Fernandes said enterprises are focused less on building foundational models and more on operationalizing existing ones with proprietary enterprise data.
“Pretraining models from scratch is limited to a few very large organizations,” he said. “We find enterprise customers are more focused on consuming those models and then basically connecting them to their own data.”
The AI announcements also deepen Red Hat’s collaboration with Nvidia Corp. That includes support for Nvidia’s Blackwell architecture and upcoming Vera Rubin platform, as well as participation in Nvidia’s OpenShell project for AI agent sandboxing and secure execution.
On the partner front, Red Hat announced a collaboration with Voyager Technologies Inc. to deploy Red Hat Enterprise Linux 10.1 and Universal Base Image on Voyager’s Space Edge micro datacenter aboard the International Space Station.
The project is intended to support in-orbit data processing and AI workloads while extending terrestrial DevSecOps practices into space-based environments.
The companies said the platform addresses the unique constraints of orbital computing environments, including limited power, intermittent connectivity and restricted hardware resources. Red Hat said it does so by using immutable container-native operations, post-quantum cryptography and portable containerized workloads to provide a hardened operating environment for edge processing in orbit.
Red Hat also disclosed a joint engineering initiative with Nissan Motor Co. to develop the automaker’s next-generation software-defined vehicle platform using Red Hat In-Vehicle Operating System.
The collaboration is designed to provide Nissan with a standardized Linux foundation for its future central vehicle computer architecture, while enabling software updates and AI-driven capabilities throughout the vehicle lifecycle.
Nissan executives said the partnership reflects a strategic decision to take greater ownership of the company’s software development stack.
Additional Summit announcements being made this week are expected to focus on Red Hat Enterprise Linux, OpenShift and Ansible automation.