Transforming organizational AI strategies: The rise of MLOps and generative AI in 2024
In a constantly changing technology sector, two significant trends, machine learning operations and generative artificial intelligence, are poised to transform organizational AI strategies in 2024.
The MLOps market is evolving into AIOps: The introduction of generative AI is creating massive growth and transformative potential for every enterprise and driving a rapid evolution of the AI stack, according to Alessya Visnjic (pictured), co-founder and chief executive officer of WhyLabs Inc. The power to create AI applications is now in the hands of any developer, changing the paradigm of who can use AI technology.
“I want to start with the big picture, and if we think about the enterprise AI stack and what happened in the last year with the introduction of LLMs, we can see how the AI stack is very rapidly evolving,” Visnjic said. “The model building has completely changed. Previously, there would be a lot of resources needed to build a model. Now, with foundation models … there’s less building and more tuning, and the power of creating AI applications is now in the hands of any developer.”
Visnjic spoke with John Furrier, executive analyst of theCUBE Research, at the “Supercloud 6: AI Innovators” event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the need for organizations to prioritize securing their data within AI to accelerate business decisions.
Organizational AI strategies: Improving security to ensure rapid adoption of ML and AI
Enterprises are rapidly expanding and introducing new products to improve observability and control in their applications, according to Visnjic. The goal is to simplify operations and ensure rapid and safe adoption of machine learning applications. Enterprises appear to be targeting both developer teams and enterprise operations teams, on the belief that developers dictate standards and that open-source evaluation tools are therefore important.
“It’s no longer kind of closed in and within your data science or machine learning team. It’s becoming a lot more accessible to build AI-powered applications,” Visnjic said. “The deployment part became easier over the past few quarters, because the big cloud companies and infrastructure providers invested an immense amount of money into making deployments as easy as possible.”
The IT organization needs to evolve and become comfortable with AI technology in order to adapt to the ever-changing software landscape, Visnjic explained. Observability for AI applications differs from traditional application performance monitoring, and the endgame is still about software, with AI bringing new capabilities and raising the question of what the underlying infrastructure will look like.
Moreover, enterprises are grappling with security challenges in LLMs and gen AI applications, and the focus is on figuring out what to measure and how to measure it consistently, Visnjic added.
“LLMs and gen AI applications open up a whole new set of security challenges that we haven’t solved before,” she said. “Those include how you identify prompt injections, jailbreaks or any kind of adversarial engagement from the user side with your LLM application. I would say the OWASP Top 10 for large language models has been kind of leading the way with the recommendations of what can be tracked.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the “Supercloud 6: AI Innovators” event:
Photo: SiliconANGLE