AI networking: Juniper Networks seizes the opportunity to broaden the conversation around AIOps
A decision by Juniper Networks Inc. to build its product offerings around artificial intelligence-native networking has positioned it to have a say in the future of AI infrastructure.
The company’s expansion of AI across its Wi-Fi, switching and data center portfolio has allowed it to focus on customer deployments that are redefining network operations in the AI era.
“With AI-native networking, Juniper introduced two concepts: AI for networking, networking for AI, delivering AIOps and AI data center networking to assure the best end-to-end operator and end-user experiences,” said Praveen Jain, senior vice president and general manager of AI clusters and cloud-ready data center at Juniper. “We want to broaden that conversation, engaging with our ecosystem partners, industry leaders and customers to discuss the biggest challenges and opportunities organizations are facing with deploying their AI data centers.”
Jain spoke with theCUBE Research’s John Furrier and Bob Laliberte during the Seize the AI Moment event, an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. His comments were part of a series of interviews and discussions with key executives and partners on how Juniper’s AI-native networking is transforming data centers by managing energy demands, optimizing infrastructure and enhancing network management through AI integration. (* Disclosure below.)
AI networking as core element
Juniper’s AI for networking approach has been fueled by its acquisition of Mist Systems Inc. for $405 million in 2019. Mist uses AI to automate a client’s Wi-Fi infrastructure, and its cloud-based platform helps manage networks.
“AI-native networking means when the networking products are built from the ground up with AI as a core component — it’s not an afterthought,” Jain said. “With our Mist technology, where AIOps was the key focus, we applied the same technology to our data center so that we can simplify operations.”
In networking for AI, Juniper’s partnership with Broadcom Inc. has facilitated an Ethernet-based network infrastructure to power back-end generative AI clusters. Ethernet remains the standard for networking, and Juniper has designed its networking solutions around the technology to drive efficiency.
“What we call networking for AI … is an area where we’ve been investing heavily and seeing some real momentum,” said Juniper Chief Executive Officer Rami Rahim. “We’re the first branded networking vendor to launch an 800-gig Ethernet switch based on the Broadcom Tomahawk 5 chip. Our new PTX10002 leverages our own Express 5 silicon to achieve industry-leading performance and power efficiency for the AI data center spine and interconnect.”
Juniper’s embrace of Ethernet highlights a simmering debate around the most suitable connectivity standard for AI workloads. Analysts have noted that adoption of the Ethernet standard stands in contrast with the use of InfiniBand in other parts of the technology world.
“Through our own testing … we’ve demonstrated that Ethernet performance is on par with InfiniBand for tough AI training use cases, and we’ve shown that Ethernet is far more economical,” Rahim said. “We’ve built innovative new congestion management features for AI into the industry’s most advanced data center fabric management and automation solution.”
Ethernet initiatives with Broadcom
Juniper’s networking for AI approach has been shaped by its ongoing partnership with Broadcom. In an appearance on theCUBE, Broadcom’s Charlie Kawwas (pictured, right), president of the Semiconductor Solutions Group, described a collaboration with Juniper that has spanned nearly 20 years.
“I think our journey in AI has started in the last decade, collaborating with some of the hyperscalers who are the bulk of the market today,” Kawwas said. “Because of them, we see this growth. And it is becoming a huge inflection point, not just for us in tech. This is going to be used in so many other applications, medical, public sector, all kinds of applications we’ll see beyond even video generation and imaging.”
During the event, Kawwas displayed one of Broadcom’s chips, the Tomahawk 5 processor, which uses Remote Direct Memory Access over Converged Ethernet to offload data movement from the CPU, freeing those resources for the application.
“This is actually the fastest Ethernet chip in the world,” Kawwas said. “This is the only chip in the world that actually can deliver 51 terabits per second with a single die. The first advantage is five nanometer, low power. The second advantage is single die versus multiple dies, low power. And there’s the ability to do all of this in a single system with the power efficiency that Juniper brings to the table.”
Building for the data center
Juniper is also building new tools for data center interconnectivity based on its PTX portfolio. The PTX1000 router family supports WAN and data center use cases including edge, core, interconnect and AI data center networking.
“Now your GPU clusters are getting bigger and bigger, and you have to interconnect them because of power, thermal or space capacity,” said AE Natarajan, executive vice president and chief development officer at Juniper. “We were the first to release 200 gig, 400 gig, and right now we’re the first to release 800 gig. This is an innovation where bandwidth-hungry, data-flow-hungry AI clusters are going to actually use that most effectively.”
Juniper’s move to 800 gigabits per second for interconnecting data center clusters highlights how IT organizations are increasingly motivated to run AI operations on-premises instead of in cloud-hosted models. One of the reasons for this is cost, according to Manoj Leelanivas (pictured, left), Juniper’s chief operating officer.
“If you look at getting this up and running quickly, go to the hyperscalers, get started with cloud as soon as possible,” Leelanivas said. “Startup cost is very minimal, so it kind of makes sense, but quickly when you scale, the cost can grow exponentially. Cost is a big driver.”
Supporting best-of-breed and Ops4AI
Early AI and machine learning adopters have demonstrated a preference for best-of-breed solutions, which has led them to gravitate toward open architectures and ecosystems, a pattern the industry has seen before.
“People were very skeptical when Linux was introduced around 1993 because of how it started creating a horizontal platform,” explained Raj Yavatkar, chief technology officer of Juniper. “Chips coming from Intel, platforms coming from someone’s operating systems software, but that led to multiple players being able to build and deliver best-of-breed solutions. A similar thing is likely happening now with the AI and machine learning systems. With the introduction of an open ecosystem where you start providing components from different players … it leads to a better industry outcome.”
Juniper’s customer base relies on the company to help design and provide a network that supports AI implementation. However, AI isn’t simple, and missteps in implementation can cost significant time and money. To limit the chance of a costly deployment failure, Juniper announced in July the launch of a multi-vendor lab for validating end-to-end automated AI data center solutions and operations.
“We are coining a term called Ops4AI, which consists of intent-based networking, AI-optimized Ethernet as well as AIOps,” Jain said. “This is AIOps now applied to networking for AI. We have built a lab, and we are calling it Ops4AI Lab, where customers are coming in, training their own model and figuring out that this is the performance. This is what they want to build on-premises when they go back.”
Juniper’s embrace of AI for networking and networking for AI shows how far-reaching AI deployment has become. It has transformed the very infrastructure that runs modern data center and cloud computing environments, yet no one is certain how large this market will become.
“The pace of advancement and growth in the critical ingredients of AI are simply breathtaking,” said Juniper’s Rahim. “We’re seeing massive scale in AI models, in the data that’s being used to train AI, and in the computing infrastructure that makes it all possible. AI has the potential of being as big, if not much bigger, than the internet. The market opportunity is immense.”
Watch all of the “Seize the AI Moment” event content on demand at theCUBE’s exclusive event site. And stay tuned for ongoing comprehensive coverage from SiliconANGLE and theCUBE.
(* Disclosure: TheCUBE is a paid media partner for the Seize the AI Moment event. Neither Juniper Networks Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE