UPDATED 13:06 EST / DECEMBER 03 2025

Aruba meets Juniper Mist: At Discover, HPE unveils its unified AI-native network brain

Given that Hewlett Packard Enterprise Co.’s Juniper acquisition has now had some time to percolate, I expected this week’s Discover event in Barcelona, the European edition of HPE’s user conference, to show how the company is using the combined assets to reshape itself for the artificial intelligence era.

The event offered the first real glimpse into how the Juniper Networks acquisition is taking shape, just five months after the deal closed. In that short period, HPE has begun merging the Aruba and Juniper platforms, with the combined portfolio rolling out soon to help enterprises prepare their infrastructure for AI.

Here’s a closer look at the key announcements HPE made at Discover Barcelona and how they fit into the company’s broader strategy:

Bringing Aruba and Mist under one AI-native platform

HPE’s immediate focus is on bringing together Aruba Central and Juniper Mist, creating a single AI-native platform for managing enterprise networks. Mist is known for its AI troubleshooting features, whereas Aruba Central provides visibility into the types of devices connecting to the network and how they behave. Those capabilities are now being cross-pollinated using microservices.

For example, Juniper’s Large Experience Model, which analyzes billions of data points from apps such as Zoom and Microsoft Teams, is being added to Aruba Central. Meanwhile, Aruba’s Agentic Mesh technology is coming to Mist, enhancing its ability to detect issues, pinpoint their root cause and take action. For customers, the shift means the two platforms will begin to feel more aligned. HPE described this approach as “build once, deploy twice,” with these shared capabilities rolling out in the first quarter of 2026.

HPE is also beginning to bring the underlying hardware together, starting with new access points that can run on either Aruba or Juniper. The first of these will be Wi-Fi 7 models that work across both platforms, which will make it easier to mix and match hardware without worrying about compatibility. For HPE, it’s a step toward giving customers a more consistent experience.

“Experience is what matters today — the experience of users, the experience of operators,” Rami Rahim, executive vice president and general manager of HPE Networking and former Juniper CEO, said during a prebriefing. “Now with agentic AI, the sky’s the limit. We’re getting into a realm of self-driving capabilities where the network can practically do everything on its own.”

This sets the stage for HPE’s push into agentic AI for IT operations. Unlike traditional AIOps tools, agentic AIOps can reason through network behavior and decide what to do in response. In practice, it means a network that can diagnose an issue, determine what’s causing it, and take steps to correct it on its own. As Rahim put it, the goal is to have self-driving networks that continuously improve the user experience without waiting for manual intervention.

The roadmap Rahim outlined should alleviate much of the angst customers have felt since the acquisition was announced. Both Aruba and Juniper have loyal customer bases, and each has been concerned that its preferred products might go away in favor of the other company’s. My conversations with HPE and Juniper executives, coupled with Rahim’s comments on the analyst call, indicate that HPE will eventually bring the products together but will do so in a way that isn’t disruptive. Aruba and Juniper customers can continue to use the products they prefer and, as they refresh, will arrive at the same point sometime in the future.

New hardware for AI-ready data centers

HPE is pushing deeper into the data center, specifically into the parts of the infrastructure that power AI systems. AI data centers have very different requirements from traditional ones, relying on massive, ultra-efficient Ethernet fabrics to move data between graphics processing units. An underperforming network leads to underutilized GPUs, which wastes money and time. To address those unique requirements, HPE introduced two new pieces of hardware.

The first, the MX301, is a compact 1.6 terabit-per-second multiservice edge router designed for AI inference as it moves closer to where data is generated. AI models are being deployed on factory floors, in hospitals, at retail sites and in remote locations, so organizations need a more efficient way to connect those environments back to larger AI clusters.

“As inference moves closer to where data is, a smaller, more power- and space-efficient MX becomes extremely desirable,” said Rahim. “It packs all of the performance and all of the flexibility that our customers have come to love about the MX in a one-rack unit, power-optimized package, and makes it absolutely ideal as an on-ramp for the distributed inference cluster.”

The second product, the QFX5250, is a switch built on Broadcom’s Tomahawk 6 silicon. It offers more than 100 Tbps of bandwidth and supports next-generation 1.6 Tbps interfaces, making it suited to the high-speed networks that connect GPU racks inside AI data centers. The QFX5250 is the “world’s highest-performance, 100% liquid-cooled Ultra Ethernet Transport (UET)-ready switch,” said Rahim, positioning it squarely against offerings from Nvidia and Arista.
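
The headline numbers imply the switch’s port count. As a rough sketch, using the publicly stated Tomahawk 6 figure of 102.4 Tbps (the article says only “more than 100 Tbps,” and actual QFX5250 configurations may differ):

```python
# Back-of-the-envelope port math for a Tomahawk 6-class switch.
# These figures are assumptions based on public ASIC specs, not
# confirmed QFX5250 configurations.
ASIC_BANDWIDTH_TBPS = 102.4   # Tomahawk 6 aggregate bandwidth
PORT_SPEED_TBPS = 1.6         # next-generation interface speed

ports = round(ASIC_BANDWIDTH_TBPS / PORT_SPEED_TBPS)
print(ports)  # 64 ports of 1.6 Tbps each
```

That radix of 64 high-speed ports is what lets a single switch tier connect many GPU racks without adding network hops.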

The MX301 will be available in December, and the QFX5250 will follow in the first quarter of 2026.

Expanding into AI factories with Nvidia and AMD

HPE is adding Juniper MX and PTX routing platforms to Nvidia’s AI factory reference architecture. This gives HPE a way to provide the secure on-ramp for connecting users and devices into an AI factory. It also allows HPE to deliver the long-haul, multicloud connectivity for linking AI clusters across different locations. The addition also brings the optical capabilities required to connect private data centers across long distances or stitch together workloads that run across multiple clouds.

“These joint solutions will give our customers the assurance that they need to deploy our routing technology in conjunction with Nvidia’s cutting-edge products with full confidence,” said Rahim.

Additionally, HPE introduced an Ethernet-based scale-up switch for AMD’s new Helios rack, an alternative solution in a space that has traditionally relied on proprietary GPU interconnects like Nvidia’s NVLink. Helios is AMD’s new Open Compute Project ORv3 AI rack design, built with modular trays and liquid cooling for dense, power-constrained environments.

Tackling data readiness for AI workflows

Although networking dominated the announcements, HPE tackled a less-discussed but equally important challenge: data readiness. Enterprises often assume their bottleneck will be GPU capacity, but in practice they struggle with preparing their data for GPUs, according to Fidelma Russo, HPE’s CTO and executive vice president/general manager of Hybrid Cloud.

HPE launched the X10k Data Intelligence Node, which automatically enriches and structures data. It handles tasks like metadata tagging and vector generation, and it formats everything for retrieval augmented generation, a technique for enhancing the accuracy of generative AI models. The result is less dependence on external data-prep tools and better GPU utilization. HPE expects the X10k Data Intelligence Node to be available in January 2026.
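
To make the RAG piece concrete, here is a deliberately tiny sketch of the retrieval step: documents are turned into vector-like representations and a query pulls back the closest match to ground the model’s answer. The “embedding” below is just word overlap for illustration; real pipelines (and, per HPE, the X10k node) use learned embedding models and vector indexes.

```python
# Toy retrieval-augmented generation (RAG) retrieval step.
# embed() is a stand-in for a real embedding model.
def embed(text: str) -> set:
    return set(text.lower().split())

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: len(q & embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "GPU clusters need fast east-west networking",
    "StoreOnce handles backup and recovery",
]
print(retrieve("which product handles backup", docs))
```

The retrieved text is then prepended to the model’s prompt, which is why pre-structuring data (metadata tagging, vector generation) directly improves answer accuracy.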

“But storage itself isn’t enough,” said Russo. “AI pipelines don’t just need fast storage, they need fast recovery. So, we are bringing enterprise-grade performance and durability to secondary data, which has traditionally been an afterthought.”

Russo was referring to HPE’s major overhaul of the StoreOnce platform. HPE unveiled the StoreOnce 7700, its first all-flash model designed for fast recovery, cyber forensics and AI-based anomaly analysis. The second launch is the StoreOnce 5720, a hybrid system with more than half a petabyte of usable capacity. With both slated to be available in January, HPE is removing the bottlenecks that slow down AI adoption before model training even begins.

Updates across AI cloud, virtualization and operations

It’s important to note several other key developments HPE shared at Discover Barcelona. HPE expanded its Private Cloud AI platform with support for Nvidia RTX 6000 GPUs and new Nvidia Inference Microservices models for customers who run AI systems in offline, highly regulated environments. The platform now supports GPU fractionalization, essentially dividing a single GPU so multiple users can run workloads at once.
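
Fractionalization is easiest to picture as a capacity-slicing problem. The sketch below is purely illustrative: the class, slice sizes and workload names are hypothetical, and real platforms (Nvidia’s MIG partitioning, or HPE’s Private Cloud AI scheduler) enforce this at the driver and orchestrator level rather than in application code.

```python
# Illustrative sketch of GPU fractionalization: carving one physical
# GPU's memory into fixed slices so several workloads can share it.
class FractionalGPU:
    def __init__(self, total_mem_gb: int, slice_gb: int):
        self.slice_gb = slice_gb
        self.free_slices = total_mem_gb // slice_gb
        self.allocations = {}

    def allocate(self, workload: str, slices: int) -> bool:
        if slices > self.free_slices:
            return False  # not enough capacity left on this GPU
        self.free_slices -= slices
        self.allocations[workload] = slices
        return True

# Hypothetical 96 GB card divided into 12 GB slices (8 slices total).
gpu = FractionalGPU(total_mem_gb=96, slice_gb=12)
print(gpu.allocate("rag-inference", 4))  # True: 48 GB reserved
print(gpu.allocate("fine-tune", 5))      # False: only 4 slices remain
```

The scheduling logic is trivial; the hard part, which the platform handles, is isolating the slices so one tenant’s workload can’t starve another’s.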

Meanwhile, HPE continues to build out its Morpheus platform, which is being positioned for customers who are evaluating alternatives to their current virtualization setups. HPE is integrating Juniper’s networking and Apstra automation tools into Morpheus. This makes Morpheus easier to operate by automating the network settings that follow workloads as they move across environments.

On the operations side, HPE is updating OpsRamp and GreenLake Intelligence to give IT teams broader visibility across compute, storage, networking, and applications. The additions include new two-way integrations with Apstra, a natural-language Copilot for server troubleshooting, as well as support for the Model Context Protocol (MCP), which allows OpsRamp data to feed third-party AI agents. All of this ties into HPE’s vision of agentic AI for IT.
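
The MCP integration is the most interesting of these for agentic AI. Conceptually, a server exposes named “tools” that any agent can discover and invoke over a standard protocol. The toy registry below mimics that idea only: the real Model Context Protocol uses JSON-RPC and an official SDK, and the OpsRamp lookup here is a made-up stand-in.

```python
# Conceptual sketch of MCP-style tool exposure: register functions by
# name so an external AI agent can discover and call them. Not the real
# MCP SDK; the OpsRamp query is hypothetical.
import json

TOOLS = {}

def tool(fn):
    """Register a function as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_server_health(hostname: str) -> dict:
    # Stand-in for a real OpsRamp metrics query.
    return {"host": hostname, "cpu_pct": 42, "status": "healthy"}

def handle_request(raw: str) -> str:
    """Dispatch a JSON request {"tool": ..., "args": {...}} to a tool."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps(result)

print(handle_request('{"tool": "get_server_health", "args": {"hostname": "web-01"}}'))
```

The value of the standard is that the agent side needs no OpsRamp-specific code; it discovers the tool catalog and calls whatever is advertised.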

Taken together, the announcements underscore HPE’s view that building for the AI era requires a new, unified way of thinking. The Juniper acquisition gives HPE a broader portfolio to support that approach.

Final thoughts

HPE’s track record around acquisitions has been spotty, to say the least. The Aruba acquisition worked well because HPE let it run as an independent unit within the broader company. Though this minimized customer disruption, it didn’t create an “HPE” value proposition.

When the Juniper deal was announced, I expected it to go down its own path and Aruba to continue on its journey. I was pleasantly surprised by how much progress HPE has made in bringing Juniper and Aruba together. Going into 2026, I’ll be watching for more points of integration, particularly between Mist and Central. So far, so good.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Photo: HPE
