Red Hat outlines a vision for evolving AI models with open-source communities
Red Hat Inc. today presented its view of generative artificial intelligence, outlining a belief that the currently red-hot technology’s future will depend on open-source software and its ability to enable a community of users to support it.
This view of the AI world could be seen in several key announcements during the company’s Red Hat Summit conference in Denver this week, including an enhancement of OpenShift AI, its open hybrid development platform, to make it easier to build AI-powered applications. Red Hat also added generative AI capabilities to Konveyor, an open-source-driven cloud-native application project.
“AI won’t be built by a single vendor, and it isn’t going to run on a single monolithic model,” Matt Hicks (pictured), president and chief executive officer of Red Hat, said in his keynote address today. “Your choice of where to run AI will be everywhere, and it’s going to be based on open source. Red Hat’s model is that open source unlocks the world’s potential, and that’s what’s happening right now.”
Red Hat outlines its contribution to the model development chain
The full panoply of Red Hat’s open-source AI commitment could be seen in its announcement of Red Hat Enterprise Linux AI, a foundation model platform that lets users develop and deploy generative AI models. RHEL AI integrates the open-source Granite large language model family from IBM Research, tools based on Large-scale Alignment for chatBots, or LAB, and a community-driven approach to developing models through its InstructLab project.
In a visual of where Red Hat sees itself in the current open-source model development world, Hicks displayed a slide showing the progression from Meta Platforms Inc.’s Llama to Mistral AI to Hugging Face and now Red Hat as a key contributor to the next evolutionary stage.
“I am very proud to announce that Red Hat is going to add the next link in this chain of open-source contributions,” Hicks said. “The ability to contribute to a model has yet to be solved. Your work can’t really be combined with the person sitting next to you. The barriers of doing a fine tune of Mistral or Llama2 without a background of data science have been too high. We hope to change that.”
Behind Red Hat’s interest in becoming a major player in AI development lies an equal desire to offer its customer base a full range of options. Its AI strategy is grounded in being a provider of applications such as the generative AI service Lightspeed, LLMs through Granite, platforms as represented by RHEL AI and OpenShift AI, and infrastructure like Ansible Automation.
Ansible made its own news today with the introduction of Policy as Code, a solution for governance and compliance in AI-influenced information technology organizations. The latest Ansible enhancement allows users to check policy enforcement during the development cycle and provides discretionary or mandatory checks before or during automation runs.
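Conceptually, a mandatory pre-run policy gate works like the following minimal Python sketch. This is purely illustrative: the function names, the specific rules checked and the playbook structure are assumptions for the example, not Red Hat’s actual Policy as Code interface.

```python
# Illustrative sketch of a policy-as-code gate: policies are expressed as
# code that inspects an automation definition before it is allowed to run.
# All names and rules here are hypothetical, not Ansible's real API.

def check_policy(playbook: dict) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    # Example rule: privilege escalation must name an explicit user.
    if playbook.get("become") and not playbook.get("become_user"):
        violations.append("privilege escalation without an explicit user")
    # Example rule: raw shell tasks are disallowed by policy.
    for task in playbook.get("tasks", []):
        if task.get("module") == "shell":
            violations.append(f"raw shell task not allowed: {task.get('name')}")
    return violations

def run_automation(playbook: dict, mandatory: bool = True) -> str:
    """Enforce the policy check before the run; a mandatory check blocks it."""
    violations = check_policy(playbook)
    if violations and mandatory:
        raise RuntimeError("policy violations: " + "; ".join(violations))
    # ... the actual automation run would proceed here ...
    return "ran"

# A playbook that violates the privilege-escalation rule above.
playbook = {"become": True, "tasks": [{"name": "deploy", "module": "copy"}]}
```

With a mandatory check, `run_automation(playbook)` would refuse to execute until the violation is fixed; a discretionary check could instead log the violation and continue.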
“For any organization looking to ride the AI wave, automation is mission-critical,” said Sathish Balakrishnan, vice president and general manager of the Ansible Business Unit. “Ansible made automation consumable.”
New partnerships and hybrid commitments to meet customer expectations
Any organization riding the AI wave is more often than not also looking for help. Mindful of this reality, Red Hat rolled out a string of announcements during the summit designed to reinforce its commitment to collaboration with numerous tech industry leaders.
The news included an alliance with chipmaker Intel Corp. to power OpenShift AI with Intel’s own AI processor products. These include a cloud-hosted version of Intel’s Gaudi AI accelerators, along with processors such as Xeon, Core Ultra and the Arc graphics processing unit.
“Our mission is to bring AI everywhere,” said Intel CEO Pat Gelsinger, appearing in a virtual livestream with Hicks during the keynote. “The future of AI is not just being written, it’s being built.”
In Red Hat’s view of the universe, AI will be built and run in a hybrid model. The company’s continued commitment to the hybrid platform could be seen in its enhancements for OpenShift, Lightspeed, Ansible Automation and even a new multicloud application solution unveiled this week called Connectivity Link.
Hybrid is a key element for AI work, according to Chris Wright, chief technology officer and senior vice president of engineering at Red Hat. Perhaps even more significantly, it is what customers want.
“That hybrid cloud footprint is a fundamental part of doing AI,” Wright said during a briefing for media and analysts following the morning keynotes. “You are not going to do training in the same location as where you do inferencing. Customers are telling us overwhelmingly they are looking for a hybrid cloud solution to manage the AI footprint.”
With the news from the summit this week, Red Hat signaled that it intends to have one foot squarely planted in building AI for its own use and one in creating tools so others can build AI as well. By extending an opportunity for various open-source developers to contribute to models, the company clearly believes that it can play a central role in facilitating a significant next wave of innovation.
“The models themselves clearly can be delivered as open-sourced artifacts,” Wright said. “There’s a lot of common knowledge that we want to share broadly. This is the promise of open source. It fits Red Hat so well.”
Photo: Mark Albertson/SiliconANGLE