Embracing generative AI in software engineering: trends and innovations
Within the ever-evolving field of software engineering, generative artificial intelligence is swiftly making its mark, offering an unprecedented blend of creativity and automation.
The shift to a generative era in software and computing highlights both challenges and potential advancements in AI programming, with a mix of enterprise and consumer focus on AI, according to John Furrier (pictured, right), executive analyst at theCUBE. Companies that transition from technology providers to service providers often get lost in the shift, ending up competing with other service providers and muddying the waters.
“When you are providing the technology, you are giving it to every service provider so they can build stuff and give it to the builders of these applications, right? That’s the technology provider,” said Sarbjeet Johal (left), technology analyst and go-to-market strategist. “You are working with many service providers; you’re enabling people to do stuff. So if you try to be the service provider, now you are competing with other service providers and then it muddies the waters.”
Furrier spoke with Johal at the Nvidia GTC event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the shift to a generative era in software and computing, with a focus on AI programming, digital twin technology and the future of AI and custom silicon in infrastructure.
The impact of generative AI in software engineering and industry transformation
Nvidia is investing in digital twin technology to revolutionize industries, and next year’s GTC will focus on more practical use cases, Johal explained. Nvidia’s stack is essential for gen AI, but specialized chips that lower power consumption and standards that reduce the cost of doing AI are also crucial. Accelerated computing and a more distributed approach to computing will lead to more success in AI use cases. The standard for clustered systems has been set, with a complete reconstruction of what a system is, including lower power and cooling requirements.
“I think one of the things that is going to be different … you’re going to see a lot more competition. I think Nvidia has raised the bar … on the generative AI systems approach,” Furrier said. “My big takeaway from this year is two things. One … I think this idea of a monolithic system that’s not like a mainframe but more like distributed computing will pull forward a bunch of use cases for AI to be accelerated faster. The second thing that jumps out at me is that I think this sets the standard for what we’ve been calling clustered systems.”
Whether a large GPU is needed for simple tasks is being questioned, and shifting computing workloads between CPUs and GPUs will be a major focus in the next few years, according to Johal. The future of AI and custom silicon will reshape the infrastructure game, with open standards and purpose-built designs for all workloads.
“If you are separating the computer from the physical things, it has to be software-defined. If it’s not, then it’s more like embedded software into that hardware. That’s an old stack sort of paradigm,” Johal said. “The beauty of having software-defined physical assets is that we can get the most juice out of these things. Because most of these things will need power to run and upkeep, we want to shut these things down when we don’t need them.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the Nvidia GTC event: