UPDATED 12:30 EST / JULY 26 2024

TheCUBE panel discusses enterprise AI tools during its Seize the AI Moment event.

Seizing the AI moment: Juniper Networks’ vision for the future of enterprise infrastructure

The future of artificial intelligence infrastructure is here, and modern businesses are looking to capitalize, including with enterprise AI tools. It’s a fast-moving evolution that is only expected to grow in the years to come: Gartner Inc. forecasts that by 2026 more than 80% of enterprises will have used generative AI APIs or deployed generative AI-enabled applications, up significantly from less than 5% in 2023.

Given the trend involving enterprise AI tools, Juniper Networks Inc. is looking toward the future. The company has been adding a number of AI enhancements to its strategy, including its AI-Native Networking Platform and the integration of AI into edge routers.

John Furrier and Bob Laliberte of theCUBE talk with Juniper’s Praveen Jain.

All told, the strategy centers on transforming data centers with AI-native networking. A key part of that is offering solutions, according to Praveen Jain, senior vice president and general manager of AI clusters and cloud-ready data center at Juniper Networks.

“We want to broaden that conversation, engaging with our ecosystem partners, industry leaders and customers to discuss [the] biggest challenges and opportunities organizations are facing with deploying their AI data centers,” Jain said.

Examining how Juniper’s ecosystem is positioned to respond to today’s challenges involving enterprise AI tools was a key area of interest during this week’s Seize the AI Moment event. Here are just a few of the points of interest that have emerged from the Juniper ecosystem. (* Disclosure below.)

Implementation of enterprise AI tools requires strong balance

No matter the industry, it’s become clear that to implement AI successfully, one must balance infrastructure, data strategy and enterprise AI tools. Some companies, such as PayPal Holdings Inc., have already been using AI and machine learning for some time.

Saikrishna Kotha talks about PayPal’s focus on AI and security.

“We have 430 million consumers and merchants in the platform, and we process $1.4 trillion in a year,” said Saikrishna Kotha, head of infrastructure platforms at PayPal. “This requires global infrastructure presence to serve our customers and provide PayPal experiences to everybody. This year’s focus has been AI and security, as well as providing deployments of these applications across the board. That includes hybrid multicloud.”

There are examples of enterprise AI tools being used in infrastructure in other industries as well, including transportation. AI is being used for capacity and traffic management optimization, according to Alexander Heine, head of IT and data platforms at Deutsche Bahn AG, Germany’s national railway company.

“Recalculating of the best way that a train can go with all restrictions, for example, weather incidents or different kinds of requirements of a track,” Heine said. “Not every train can go to the special tracks. This has a very big data amount that’s needed to [make] an AI prediction.”
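
To make that concrete, here is a minimal, hypothetical sketch of the kind of recalculation Heine describes: a toy track graph, a set of restricted segments (say, closed by a weather incident or unusable by a given train), and a shortest-path search that routes around them. The network, names and travel times below are invented for illustration and are not Deutsche Bahn’s actual system.

```python
import heapq

# Toy track network: edges are (neighbor, minutes). All names and numbers
# are hypothetical; this is not Deutsche Bahn's routing system.
TRACKS = {
    "A": [("B", 30), ("C", 45)],
    "B": [("D", 25)],
    "C": [("D", 20)],
    "D": [],
}

# Restrictions gathered from live data, e.g. weather incidents or
# track requirements a given train cannot meet (all hypothetical).
RESTRICTED_EDGES = {("A", "B")}  # segment closed by a storm


def best_route(start: str, goal: str):
    """Dijkstra over the track graph, skipping restricted segments."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return path, minutes
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in TRACKS[node]:
            if (node, nxt) in RESTRICTED_EDGES:
                continue  # this train cannot use the restricted segment
            heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None  # no feasible route under current restrictions


if __name__ == "__main__":
    print(best_route("A", "D"))  # (['A', 'C', 'D'], 65) with the restriction above
```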

When it comes to choosing a way forward, companies can either build their own data centers, consume GPU as a service from the cloud or take a hybrid approach. For PayPal, the strategy is to follow a hybrid multicloud deployment, according to Kotha.

“That is primarily based on our data strategy. For example, the data which is required for research and development, that is currently hosted on-premise,” he said. “We have workloads that are related to this research and development, which are residing on-premise. When it comes to the inference-related, those are deployed in cloud. So, we’re building this infrastructure to support all different kinds of use cases today.”
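
As a rough illustration of that split, and only as an assumption about how such a policy might be expressed rather than PayPal’s actual code, a placement rule could look like the sketch below: research and training workloads stay on-premises with their data, while inference is deployed in the public cloud.

```python
from enum import Enum

# Toy placement policy mirroring the hybrid split Kotha describes.
# The categories and rule are illustrative assumptions, not PayPal's code.

class Workload(Enum):
    RESEARCH = "research"
    TRAINING = "training"
    INFERENCE = "inference"


def placement(workload: Workload) -> str:
    """Pick a deployment target under a hybrid multicloud policy."""
    if workload in (Workload.RESEARCH, Workload.TRAINING):
        return "on-prem"       # data gravity: R&D datasets live in the data center
    return "public-cloud"      # inference scales elastically closer to users


if __name__ == "__main__":
    for w in Workload:
        print(w.value, "->", placement(w))
```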

Operationalizing on-premises data centers isn’t easy, but it’s where Juniper believes it can help as companies build data centers for their customers.

“From a Juniper [point of view], I think in terms of not only building that Ethernet lossless fabric, as well as you have the whole PFC, ETS and the advanced features, forwarding features … within the fabric,” said Sudheesh Subhash, vice president of innovation and emerging technologies at ePlus Inc.

On top of that, there’s also Juniper Apstra. That’s an “awesome” tool, according to Subhash.

“This absolutely helps the customer, removes a lot of operations pain to automate,” he said.
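
For readers unfamiliar with the acronyms Subhash mentions, PFC (Priority Flow Control) keeps selected traffic classes lossless by pausing senders, while ETS (Enhanced Transmission Selection) guarantees each class a share of link bandwidth under congestion. The toy model below illustrates that division on a hypothetical 400G port; the class names and percentages are assumptions, not a Juniper or Apstra configuration.

```python
from dataclasses import dataclass

# Minimal model of DCB traffic classes on a lossless Ethernet fabric.
# Class names and percentages are hypothetical, not a vendor configuration.

@dataclass
class TrafficClass:
    name: str
    ets_weight_pct: int   # ETS: guaranteed share of link bandwidth
    pfc_enabled: bool     # PFC: pause frames keep this class lossless


CLASSES = [
    TrafficClass("roce-training", ets_weight_pct=60, pfc_enabled=True),
    TrafficClass("storage", ets_weight_pct=30, pfc_enabled=True),
    TrafficClass("best-effort", ets_weight_pct=10, pfc_enabled=False),
]


def guaranteed_gbps(link_gbps: int) -> dict:
    """Bandwidth each class is guaranteed under ETS when the link is congested."""
    assert sum(c.ets_weight_pct for c in CLASSES) == 100
    return {c.name: link_gbps * c.ets_weight_pct / 100 for c in CLASSES}


if __name__ == "__main__":
    # On a hypothetical 400G port: 240G for RoCE, 120G for storage, 40G best-effort.
    print(guaranteed_gbps(400))
    print("PFC (lossless) classes:", [c.name for c in CLASSES if c.pfc_enabled])
```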

Evolving AI infrastructure demands efficiency, interoperability

It’s becoming increasingly clear that the evolving AI ecosystem demands efficient, interoperable infrastructure solutions and enterprise AI tools. AI is driving a huge appetite for performance, but everyone is constrained by budgets, space and power, according to Steve Scott, corporate fellow of networking and system architecture at Advanced Micro Devices Inc.

“I’ve heard that over 80% of new data centers that are under construction are already leased,” Scott said. “One of the things that customers are asking us about is, or talking to us about, is just the need to consolidate their general purpose compute to make up space, free up space and power for AI.”
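
Scott’s consolidation point is ultimately arithmetic: fewer, denser general-purpose servers leave power headroom for GPU nodes. The back-of-the-envelope sketch below uses entirely hypothetical figures to show the shape of that calculation, not AMD’s or any customer’s data.

```python
# Back-of-the-envelope consolidation math. Every figure is hypothetical and
# only illustrates the space/power argument described above.

OLD_SERVERS = 1000          # existing general-purpose servers
CONSOLIDATION_RATIO = 5     # assume 5 old servers replaced by 1 newer one
OLD_SERVER_KW = 0.5         # average draw per old server (kW)
NEW_SERVER_KW = 1.0         # average draw per denser replacement server (kW)
GPU_NODE_KW = 10.0          # draw of one 8-GPU training node (kW)

new_servers = OLD_SERVERS // CONSOLIDATION_RATIO
freed_kw = OLD_SERVERS * OLD_SERVER_KW - new_servers * NEW_SERVER_KW

print(f"Power freed by consolidation: {freed_kw:.0f} kW")
print(f"GPU nodes that budget could host: {int(freed_kw // GPU_NODE_KW)}")
# -> 300 kW freed, room for roughly 30 GPU nodes under these assumptions.
```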

There’s a conception that generative AI requires a massive amount of data, which is definitely true, according to Shimon Ben-David, CEO of WekaIO Inc. But more than that, when one looks at the pipelines that exist in AI and generative AI, many of them are very storage-oriented.

“Definitely, accumulating massive amounts of data. You need a location, a cost-effective capacity environment that can accommodate the protocol of the ingestion of the data, but also the scale of the data ingested,” Ben-David said. “This could be from worldwide fleets of data. This could be data brokers. This could be HPC simulation data created. You need to be able to accommodate for it.”

The future of AI cluster technology

Meanwhile, Juniper believes that AI cluster technology can help to unlock the next stage of AI adoption through collaboration, which can democratize AI infrastructure by driving down costs and accelerating innovation. For MLCommons, the nonprofit entity that’s best known for its AI performance benchmarks, a big focus is working with companies on the cutting edge, including Microsoft Corp. and Meta Platforms Inc., according to David Kanter, a co-founder and executive director of the organization.

MLCommons’ David Kanter talks about the evolution of AI.

“One of the things that I’ve certainly seen is, of course, there’s the explosive rise of large language models, image generators, other forms of even more computationally demanding machine learning, where we need larger and larger systems to satisfy them,” Kanter said. “For the first time ever, we’re starting to see people talk about inference using the network.”

For a long time, it was just a single node, but now the models are so large that a single node no longer makes sense. From an organizational standpoint, there’s been a focus on things such as end-to-end inference, according to Kanter.

“How do you incorporate things like RAG, or other augmentation techniques with LLMs and other things? We’ve seen this rise in vector databases; how do we think about that?” he asked.
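
The retrieval step Kanter alludes to is straightforward to sketch: embed a query, find the nearest stored chunks in a vector index, and assemble them into a prompt for the model. The example below is a toy illustration with a stand-in hash-based embedding and an in-memory index; it is not MLCommons code or a production vector database.

```python
import numpy as np

# Minimal retrieval-augmented generation (RAG) retrieval step.
# Documents and the embedding function are toy placeholders.

DOCS = [
    "Ethernet fabrics carry RoCE traffic for GPU clusters.",
    "PFC and ETS keep the fabric lossless under congestion.",
    "Vector databases store embeddings for retrieval.",
]


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hash-seeded embedding (stand-in for a real embedding model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


INDEX = np.stack([embed(d) for d in DOCS])  # in-memory "vector database"


def retrieve(query: str, k: int = 2) -> list:
    """Return the k chunks whose embeddings are closest to the query (cosine)."""
    scores = INDEX @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in top]


if __name__ == "__main__":
    question = "How do you keep an AI fabric lossless?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this prompt would then be sent to an LLM for generation
```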

Networks play a very critical role here, according to Rakesh Kumar, senior distinguished engineer with Juniper. That involves building large-scale infrastructure for large language models, as well as for inference and training.

“This is very important for Juniper, too, because Juniper is also in the forefront of building networking products,” he said. “You want to make sure our products work really well for these kinds of use cases.”

The case for Ethernet

Picture yourself two years ago, talking about building a GPU cluster and AI machine learning. You would be told that nothing other than InfiniBand would work, according to Ram Velaga, senior vice president and general manager of the Core Switching Group at Broadcom Inc.

Broadcom’s Ram Velaga discusses Ethernet and InfiniBand during the Seize the AI Moment event.

“Everywhere you went, it’s like, ‘Oh, if it’s not InfiniBand, this is not going to work.’ I was sitting there scratching my head saying, ‘That’s not true,’” Velaga said. “Today, when you look at it, seven of the top eight largest clusters in the world are built based on Ethernet. There is one last remaining one that’s built on InfiniBand, but my take is in a year and a half from now that will also be based on Ethernet.”

Recent tests of Ethernet and InfiniBand have determined that Ethernet is, in fact, solid, delivering performance in many cases very comparable to InfiniBand, according to Velaga.

“But with the operational ease and reliability that you expect out of Ethernet,” he said. “There’s more and more benchmarks that have been done across the industry, and that’s why the industry has moved on.”

Watch all of the “Seize the AI Moment” event content on demand on theCUBE’s exclusive event site. And stay tuned for ongoing comprehensive coverage from SiliconANGLE and theCUBE.

(* Disclosure: TheCUBE is a paid media partner for the Seize the AI Moment event. Neither Juniper Networks Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
