Unpacking the interplay between generative AI and compute efficiency
There’s tremendous hype currently surrounding generative artificial intelligence, among veterans and enthusiasts alike.
It’s poised to shake up distributed computing, much like the PC, the web and the mobile phone before it. In turn, Compute.AI is positioning itself to navigate the interplay between AI and compute-driven data management.
“I think we’re at the peak of the hype cycle … and we’re going to go into the trough of disillusionment,” said Joel Inman (pictured, right), chief executive officer of Compute.AI. “We’re going to have a wave for the next 10 years of implementing and adopting enterprise AI in ways that improve our productivity. And I think the McKinsey study proves that we’re going to have $4 trillion a year in extra economic productivity due to AI as we implement it.”
Inman and Vikram Joshi (left), founder, president and chief technology officer of Compute.AI, spoke with theCUBE industry analyst John Furrier at Supercloud 4, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed generative AI, particularly pertaining to the evolving role of compute and the fusion of AI technologies.
The liberation of compute from data
At the heart of the discussion is the need to liberate compute from databases and data warehouses, especially given the rising costs of running AI workloads on today’s graphics processing units.
“First of all, when we think about AI and its adoption in the enterprise, it is going to drive a thousand times more demand for complex compute,” Inman explained. “And that complex compute is going to be in the form of machine-generated SQL. What that means is we need to get our data story together. We need to come together and figure out how to shore up our infrastructure and drive a lot more efficiency out of our compute.”
The solution, therefore, is to detach compute resources from data warehouses, in the same way storage was detached. That way, computing power can be spread across a much broader swath of the infrastructure, Inman added.
Extending the idea further, the industry has to think beyond merely separating storage from compute. It’s more about pulling compute out of databases and data warehouses and making it available “like oxygen” — omnipresent, given that data is everywhere, according to Joshi.
“The ability to super recruit large numbers of cores and compute without having to think in terms of a database silo,” he said. “We are upon a new future that looks very different. Even if you look at what’s going on with the [business intelligence] applications today: Tableau, Power BI, Looker, they generate at least ten times more SQL than all human-generated SQL. What’s the future? The future is more autonomous sources of SQL generation, more AI/ML-driven dashboards.”
By freeing AI and compute from the constraints of traditional data warehouses, the industry can redefine and supercharge how AI operates at an application level.
Data lakes and the hybrid future
The idea of AI serving as the operating system for the future is steadily gaining popularity. It blends neural networks with large language models, envisioning a scenario where AI is deeply integrated into the computing infrastructure.
It will be crucial to optimize AI applications across different use cases and environments. The idea is to break down data silos and introduce a new era where compute is readily available for AI, irrespective of the application, according to Inman.
“We like to call it abundant compute — the concept is you should be able to breathe it in, your application should just have it available wherever it is needed,” he said. “When we break down data silos, we also have to think about breaking down compute silos. It does us no good if we move the data everywhere — we have a data-centric enterprise, but our compute is still stuck in silos here, there and the other dictated by the applications.”
Data lakes are an inevitable part of the hybrid cloud future, with adoption spanning Fortune 500 companies and beyond. The key is to have a solution that can efficiently operate across these diverse environments, optimizing compute and memory utilization. In the end, access to data and computing resources must be democratized, making it seamless, reliable and cost-effective, Joshi added.
“The final frontier, I think, for data warehouses and databases is concurrency. Data warehouses and concurrency don’t go together,” he noted. “When you start to look at the new kinds of workloads and applications that are going to be coming out and hitting these databases, we are talking about this machine-generated SQL out here, it’s going to be so much in quantity and complexity.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Supercloud 4:
Photo: SiliconANGLE