Next-gen PCIe should roll out in 2022, says Broadcom
As tech continues to evolve and workplaces expand, system performance becomes even more important. The Peripheral Component Interconnect Express standard, known as PCIe, aims to help.
PCIe is a high-speed standard PC slot for connecting components such as solid-state storage cards. The technology is also increasingly being used to connect graphics processors, and the overall package fits particularly well into modern-day digitization themes as enterprises and others require progressively larger amounts of data for business analytics and other tasks.
“Being able to access lots and lots and lots of data locally is going to be a really, really big deal,” said Kimberly Leyenaar (pictured), principal performance architect at Broadcom Inc., which is a leader in PCI Express switching.
Leyenaar spoke with Dave Vellante, host of theCUBE, SiliconANGLE Media’s livestreaming studio, in a CUBE Conversation. They discussed the history of storage interconnects and how the technology is enabling rapid gains in storage access speed and capacity.
Gigatransfer rates doubling every few years
Currently, the interconnect standard is at PCIe Gen4, which offers a unidirectional transfer rate of 16 gigatransfers per second (GT/s). That works out to x1 lane bandwidth of 1,970 MB/s and x4 lane bandwidth of 7,880 MB/s. Those figures are theoretical; practical performance is typically about 87% of them.
The upcoming Gen5 standard, which Leyenaar says will come out in 2022, followed soon after by Gen6, doubles that rate to 32 GT/s; Gen6 will run at a whopping 64 GT/s. Lane bandwidth escalates equally impressively, with x4 lane bandwidth for Gen6, for example, at 31,500 MB/s, according to Broadcom numbers.
In other words, the standard doubles its gigatransfer rate with each generation, which arrives every few years. What Gen4 has already enabled hints at where those performance increases will shine.
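The conversion from transfer rate to lane bandwidth can be sketched in a few lines of Python. This is a minimal illustration, assuming the 128b/130b line encoding used from Gen3 through Gen5 (Gen6 actually moves to PAM4 signaling with FLIT-based encoding, though the article's Gen6 figures happen to match the same ratio); the function name and structure are ours, not Broadcom's:

```python
def pcie_bandwidth_mbps(gt_per_sec: float, lanes: int,
                        encoding_efficiency: float = 128 / 130) -> float:
    """Approximate unidirectional PCIe bandwidth in MB/s.

    Each transfer moves one bit per lane, so:
      MB/s = GT/s * 1000 * encoding_efficiency / 8 bits-per-byte * lanes
    """
    return gt_per_sec * 1000 * encoding_efficiency / 8 * lanes

# Gen4 at 16 GT/s: roughly 1,970 MB/s for x1 and 7,880 MB/s for x4
print(round(pcie_bandwidth_mbps(16, 1)))   # ~1969
print(round(pcie_bandwidth_mbps(16, 4)))   # ~7877
# Gen6 at 64 GT/s, x4: roughly 31,500 MB/s
print(round(pcie_bandwidth_mbps(64, 4)))   # ~31508
```

Note that each generation's doubled GT/s rate feeds straight through this formula, which is why the bandwidth figures double as well.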
“The reality is the growth of real-time applications that require local processing are going to drive this technology forward over the coming years,” Leyenaar said.
What she is talking about is edge computing: locally processed, data-driven workloads that are seeing increasing use. That includes real-time analytics, as well as telemetry and sensor processing. "All the computing outside of the cloud," is how Leyenaar explained it. Add to that other data-intensive workloads that aren't edge applications, such as virtualization, machine learning, backups, video streaming and medical imaging, among others.
“To be useful, you have to actually be able to access [the data],” she pointed out — hence the excitement over the storage access gains seen in Gen4 and expected from the generations in the pipeline for next year.
Why? What are they going to do with it?
The trick is having a balanced infrastructure, according to Leyenaar. For instance, you can't have performance mismatches between servers and storage: productivity suffers if one part of the system has higher latency than the other.
Leyenaar recounted that when the first Gen3 products were released some years ago, delivering a big jump in performance over PCIe Gen2, the marketers at Broadcom asked her: “Hey, how can we show users how they can consume this?”
The joke, of course, is that in this data-driven day and age, no one needs to be shown how to gobble up storage. The question has turned out to be moot.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations.
Photo: SiliconANGLE