UPDATED 14:09 EST / APRIL 25 2022

INFRA

Does hardware (still) matter?

The ascendancy of cloud and software as a service has shone new light on how organizations think about, pay for and value hardware.

Demand for once-sought-after practitioner skills – hardware troubleshooting, configuring ports, tuning storage arrays and maximizing server utilization – has been superseded by demand for cloud architects, DevOps pros and developers with expertise in microservices, container app development and similar skills. Even a company such as Dell Technologies Inc., the largest hardware company in enterprise tech, touts that it employs more software engineers than hardware engineers.

It raises the question: Is hardware going the way of COBOL? Well, not likely – software has to run on something. But the labor and skills needed to deploy, troubleshoot and manage hardware infrastructure are shifting quite dramatically.

At the same time, the value flow within hardware is also changing. Hardware was once a world dominated by Intel Corp.’s x86 processors; now value is flowing to alternatives such as Nvidia Corp. and Arm Ltd.-based designs. Moreover, other components such as network interface cards, accelerators and storage controllers are becoming more advanced, more integrated and increasingly important.

The question is: Does it matter? If so, why does it matter and to whom? What does it mean to customers, workloads, original equipment manufacturers and the broader society?

In this Breaking Analysis, we try to answer these questions. To do so, we’ve organized a special CUBE Power Panel of industry analysts and experts to address the question: Does hardware (still) matter?

The panelists

Joining the discussion: industry analysts and experts Bob O’Donnell, Zeus Kerravala, David Nicholson, Keith Townsend and Marc Staimer, with Dave Vellante moderating.

Hardware spending momentum in context

The chart below is from Enterprise Technology Research. Each quarter, ETR tracks time series data across its taxonomy of tech sectors to gauge spending velocity within each sector using its proprietary Net Score methodology.

The vertical axis shows Net Score, which represents the net percentage of customers citing increased spending momentum in the survey of more than 1,200 respondents. The horizontal axis shows the pervasiveness of each sector within the data set. We’ve filtered on and plotted a small subset of the taxonomy so that we could isolate the relevant hardware sectors, shown in the red box. Note the red dotted line: anything above that line is considered a highly elevated Net Score.

For the past several quarters the big four sectors have been artificial intelligence/machine learning, containers, robotic process automation and cloud. Cloud is the most impressive because of its elevation on the Y axis and its pervasiveness in the data set on the X axis. As you can see, the hardware sectors we show have respectable spending momentum in the 20% range but are nowhere near the top sectors.
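
For context on the methodology (ETR’s exact formula is proprietary; this is a simplified sketch), Net Score can be thought of as the percentage of respondents reporting increased spending minus the percentage reporting decreased spending. The survey counts below are hypothetical and purely illustrative:

```python
# Simplified illustration of a Net Score-style calculation. The survey
# counts are hypothetical; ETR's actual methodology is proprietary and
# more nuanced than this sketch.

def net_score(increasing: int, flat: int, decreasing: int) -> float:
    """Net percentage of respondents citing increased spending."""
    total = increasing + flat + decreasing
    return 100.0 * (increasing - decreasing) / total

# Example: a sector where 350 of 1,200 respondents report increased spend,
# 610 report flat spend and 240 report decreased spend.
print(f"Net Score: {net_score(350, 610, 240):.1f}%")  # ~9.2%
```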

What follows is a substantial portion of the verbatim transcript from the Power Panel.  

We posed the following question to each analyst/expert:

What’s the No. 1 trend each of you sees in hardware and why does it matter?

Bob O’Donnell: The diversification of chip architectures

So look, I mean hardware is incredibly important. And one comment I’ll make first on that slide: let’s not forget that even though hardware may not be growing, the amount of money spent on hardware continues to be very, very high. It’s just a little bit more stable and it’s not as subject to big jumps as we see, certainly, in other software areas. But look, the important thing that’s happening in hardware is the diversification of the types of chip architectures we’re seeing and how and where they’re being deployed. You referred to this in your opening. We’ve moved from a world of x86 CPUs from Intel and AMD to things like GPUs and DPUs. We’ve got computer vision processing, we’ve got dedicated accelerators. We’ve got all kinds of other network acceleration tools and AI-powered tools.

There’s an incredible diversification of these chip architectures and that’s been happening for a while, but now we’re seeing them more widely deployed, and it’s being done that way because workloads are evolving. The kinds of workloads that we’re seeing in some of these software areas require different types of compute engines than we’ve traditionally had. The other thing is the power requirements based on where geographically that compute happens are also evolving – this whole notion of the edge, which I’m sure we’ll get into in a little bit more detail later. It’s driven by the fact that the compute actually sits closer to, in theory, the edge and where edge devices are, depending on your definition. That changes the power requirements and it changes the kind of connectivity that connects the applications to those edge devices. So all of those things are being impacted by this growing diversity in chip architectures. And that’s a very long-term trend that I think we’re going to continue to see play out through this decade and well into the ’30s.

Zeus Kerravala: The combination of hardware, software and silicon creates the best experiences

I think the other thing to remember when you look at this chart is that through the pandemic and the work-from-home period, a lot of companies put their office modernization projects on hold, and you heard that echoed from all the network manufacturers. Companies that had projects underway, like network upgrades, put them on hold. Now that people are starting to come back to the office, they’re looking at that. So we might see some change there, but Bob’s right, the sizes of those markets are quite a bit different. I think the other big trend here is that for the hardware companies in the areas that I look at, like networking, it’s a combination of hardware and software and silicon that works together. That creates the optimum type of performance and experience.

Some things are best done in silicon, something like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware: you configured the hardware, you did all the data forwarding and did all the management, and that’s been decoupled. So more and more of the control elements have been placed in software. A lot of the high-performance things like encryption and, as I mentioned, data forwarding, packet analysis, stuff like that, are still done in hardware, but not everything is done in hardware. And so it’s a combination of the two, I think, for the people that work with the equipment as well. There’s been more of a shift to understanding how to work with software. I think a mistake the industry made for a while was we had everybody convinced they had to become a programmer. It’s really more about being a software power user: can you pull things out of software through API calls and things like that? But I think the big trend is it’s a combination of hardware and software working together that really makes a difference. And how much you invest in hardware or software kind of depends on the performance requirements you have – and I’ll talk about that later – but that’s really the big shift that’s happened here. It’s that the vendors have figured out how to optimize performance by leveraging the best of all of those.
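
To make the “software power user” point concrete – pulling operational data out of network gear through an API call rather than reprogramming the box – here is a minimal, hypothetical sketch. The controller URL, endpoint path and field names are invented for illustration and do not correspond to any specific vendor’s actual API:

```python
# Hypothetical example of pulling interface statistics from a network
# controller's REST API. The endpoint, token placeholder and field names
# are illustrative only, not tied to any vendor's real API.
import requests

CONTROLLER = "https://controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def interface_utilization(switch_id: str) -> dict:
    """Return per-interface receive utilization (percent) for one switch."""
    resp = requests.get(f"{CONTROLLER}/switches/{switch_id}/interfaces",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    stats = {}
    for iface in resp.json()["interfaces"]:
        speed_bps = iface["speed_mbps"] * 1_000_000
        stats[iface["name"]] = 100.0 * iface["rx_bps"] / speed_bps
    return stats

if __name__ == "__main__":
    for name, pct in interface_utilization("leaf-01").items():
        print(f"{name}: {pct:.1f}% utilized")
```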

David Nicholson: Moving from a CPU-centric hardware model to a connectivity-centric world

Just picking up where Bob started off: not only are we seeing the rise of a variety of CPU designs, but I think the connectivity that’s involved, from a hardware perspective, from a server or service design perspective, has become increasingly important. I think we’ll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we’re not in so much of a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it’s extremely significant when you start looking down at the component level.

It’s really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model.

Keith Townsend: Infrastructure as code

I’m going to dig deeper into that software-defined data center nature of what’s happening with hardware. Hardware meeting software – infrastructure as code – is a thing. What does that code look like? We’re still trying to figure that out, but serving up these capabilities that the previous analysts have brought up: how do I ensure that I can get the level of services needed for the applications that I need, whether they’re legacy traditional data center workloads, AI/ML workloads or workloads at the edge? How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, with the big push into GreenLake as a service. Dell now with Apex, taking what we need, these bare-bones components, moving it forward with DDR5, 6, CXL, etc., and surfacing that as code or as services. This is a very tough problem, as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift.
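
To illustrate the “hardware as code” idea Townsend describes, here is a minimal, vendor-neutral sketch of what codifying a capability request might look like: a declarative spec for the service level an application needs, which a provisioning service could consume. Nothing below reflects the actual GreenLake or Apex APIs; the structure and names are hypothetical:

```python
# Minimal, vendor-neutral sketch of "consuming hardware as code":
# declare the capability you need and let a provisioning service satisfy it.
# The request shape and field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class CapacityRequest:
    workload: str          # e.g. "ai-ml", "edge", "traditional-dc"
    vcpus: int
    memory_gb: int
    storage_tb: int
    latency_ms_max: float  # a service-level target, not a hardware SKU

def to_spec(req: CapacityRequest) -> dict:
    """Render the request as a declarative spec a service could consume."""
    return {
        "kind": "CapacityRequest",
        "workload": req.workload,
        "resources": {"vcpus": req.vcpus,
                      "memory_gb": req.memory_gb,
                      "storage_tb": req.storage_tb},
        "slo": {"latency_ms_max": req.latency_ms_max},
    }

print(to_spec(CapacityRequest("ai-ml", vcpus=64, memory_gb=512,
                              storage_tb=20, latency_ms_max=2.0)))
```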

Marc Staimer: Hardware absolutely matters; you can’t run software on the air

My peers raised really good points and I agree with most of them, but I’m going to disagree with the title of this session, which is: Does hardware matter? It absolutely matters. You can’t run software on the air. You can’t run it in an ephemeral cloud, although there’s the technical cloud and that’s a different issue. The cloud has kind of changed everything. And from a market perspective, in the 40-plus years I’ve been in this business, I’ve seen this perception that hardware has to go down in price every year. And part of that was driven by Moore’s law. And we’re coming to, let’s say, a lag or an end, depending on who you talk to, in Moore’s law. So we’re not doubling our transistors every 18 to 24 months in a chip, and as a result of that, there’s been a higher emphasis on software.

From a market perception standpoint, there’s no penalty. The market doesn’t put the same pressure on software to reduce its cost every year that it does on hardware, which is kind of “bass ackwards” when you think about it. Hardware costs are fixed. Software costs tend to be very low. It’s kind of a weird thing we do in the market. And what’s changing is we’re now starting to treat hardware like software, from an OpEx versus CapEx perspective. So yes, hardware matters. And we’ll talk about that more at length.

Dave Vellante: You know, I want to follow up on that, and I wonder if you guys have a thought on this. Bob O’Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore’s law could be waning. Pat Gelsinger recently said at Intel’s investor meeting that he promised Moore’s law is alive and well. And the point I made in a Breaking Analysis was: OK, great. Pat said doubling transistors every 18 to 24 months – let’s say that Intel can do that, even though we know it’s waning somewhat. Now look at the Arm-based M1 Ultra from Apple. In about 15 months, Apple increased transistor density on its package by 6X. So to your earlier point, Bob, we have these alternative processors, and to David Nicholson’s point, new packaging focused on connecting all the supporting hardware. This is really pointing the way to future designs. Do you have a comment on that, Bob?
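
As an editorial aside, a back-of-the-envelope check on that comparison (taking the 6X-in-15-months figure at face value, and noting that much of Apple’s gain came from packaging two dies together rather than from density alone) shows the implied doubling period:

```python
# Back-of-the-envelope: implied doubling period from a 6x transistor-count
# increase over 15 months, versus the classic Moore's Law cadence.
import math

growth_factor = 6.0
elapsed_months = 15.0

doublings = math.log2(growth_factor)            # ~2.58 doublings
implied_period = elapsed_months / doublings     # ~5.8 months per doubling

print(f"{doublings:.2f} doublings in {elapsed_months:.0f} months")
print(f"Implied doubling period: {implied_period:.1f} months "
      f"(vs. 18-24 months for classic Moore's Law)")
```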

Bob O’Donnell: Silicon and interconnect diversity

Yeah, I mean, it’s a great point, Dave. And one thing to bear in mind as well: not only are we seeing a diversity of these different chip architectures and different types of components, as a number of us have raised. The other big point – and I think it was Keith who mentioned it – is CXL and interconnect on the chip itself, which is dramatically changing things. And a lot of the more interesting advances that are going to continue to drive Moore’s Law forward, in terms of the way we think about performance if perhaps not the number of transistors per se, are the interconnects that become available. You’re seeing the development of chiplets or tiles – people use different names – but the idea is you can have different components being put together, eventually, in sort of a Lego-block style.

And that’s also going to give us interesting performance possibilities because of the faster interconnect. So you can have shared memory between things, which for big workloads like AI with huge data sets can make a huge difference compared with how you talk to memory over a network connection, for example. But not only that, you’re going to see more diversity in the types of solutions that can be built. So we’re going to see even more choices in hardware from a silicon perspective because you’ll be able to piece together different elements. And oh, by the way, the other benefit of that is we’ve reached a point in chip architectures where not everything benefits from being smaller. We’ve been so focused and so obsessed, when it comes to Moore’s Law, on the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that’s absolutely true. But we’ve already hit the point where things like RF for 5G and WiFi and other wireless technologies, and a whole bunch of other things, actually don’t get any better with a smaller transistor size.

They actually get worse. So the beauty of these chiplet architectures is you can actually combine different chip manufacturing sizes. You hear about four-nanometer and five-nanometer along with 14-nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. And so we’re just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located in geographically different places: at the edge, in the data center, in a private cloud versus a public cloud. All of those things are going to be impacted, and there’ll be a lot more options because of this silicon diversity and this interconnect diversity that we’re just starting to see.

A meaningful percentage of customers face hardware procurement challenges

Dave Vellante: I want to introduce some ETR data. I actually want to ask Keith to comment on this before we go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware.

And you can see the red is where they had significant issues, and it’s most pronounced in laptops and networking hardware on the far right-hand side. But virtually all categories – firewalls, peripherals, servers, storage – are having moderately difficult procurement issues, shown by the pinkish bars, or significant challenges, shown by the red bars. So Keith, what are you seeing with your customers in terms of hardware supply chains and bottlenecks?

Keith Townsend: The price of older switches has doubled, forcing many customers to the cloud

You know, I was just asked this question yesterday and I’m feeling the pain. We have a side project within the CTOAdvisor: we built a hybrid infrastructure, a traditional IT data center where we’re working like a traditional customer and modernizing that data center. So it was kind of a snapshot in time, 2016, 2017: 10-gigabit Arista switches, some older Dell 730xd servers, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we’d easily migrate from 10-gig networking to the 25-gig networking path that customers are going on.

The 10-gig network switches that I bought used are now double the price, because you can’t get legacy 10-gig network switches; all the manufacturers are focusing their capacity on the more profitable 25-gig switches, and even the 25-gig switches are hard to procure. And we’re focused on networking right now. We’re talking about nine to 12 months or more of lead time. So we’re seeing customers adjust by adopting cloud. But if you remember, early on in the pandemic Microsoft Azure kind of gated customers that didn’t have a capacity agreement, so customers are keeping an eye on that. There’s a desire to abstract away from the underlying vendor, to be able to control or provision your IT services the way we do with VMware or some other virtualization technology, where it doesn’t matter who can get me the hardware. This is critically impacting projects and timelines.

Has networking hardware become commoditized?

Dave Vellante: That’s a great setup for you, Zeus – what Keith mentioned earlier, the software-defined data center, with software-defined networking and cloud. Do you see a day where networking hardware is commoditized and it’s all about the software, or are we there already?

Zeus Kerravala: Customers must think about the tradeoff between agility and performance

No, we’re not there already, and I don’t see that really happening any time in the near future. I do think it’s changed, though. And just to be clear, I mean, when you look at that [ETR] data, this is saying customers have had problems procuring the equipment, right? And there’s not a network vendor out there that’s unaffected. I’ve talked to Norman Rice at Extreme, and I’ve talked to the folks at Cisco and Arista about this. They all said they could have had blowout quarters had they had the inventory to ship. So it’s not like customers aren’t buying this anymore, right? I do think, though, when it comes to networking, the network has certainly changed some, because there are a lot more controls, as I mentioned before, that you can do in software. And I think customers need to start thinking about the types of hardware they buy and where they’re going to use it and, you know, what its purpose is.

Because I’ve talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it’s bogged down, right? It just doesn’t have the horsepower to run it. And, you know, even when you do that, you have to start thinking about the components you use. The NICs you buy. And I’ve talked to customers that have simply gone through the process of replacing a NIC card in a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance, though, is more important, that’s when you need that kind of turnkey hardware system. And I’ve actually seen more and more customers reverting back to that model.

In fact, when you talk to even some startups today about when they come to market, they’re delivering things more on appliances, because that’s what customers want. And so there’s this kind of pivot point, this pendulum of agility and performance. And if performance absolutely matters, that’s when you do need to buy these kinds of turnkey, prebuilt hardware systems. If agility matters more, that’s when you can go more to software, but the underlying hardware still does matter. So I think, will we ever have a day where you can just run it on whatever hardware? Maybe, but I’ll long be retired by that point. So I don’t care.

Dave Vellante: Well, you bring up a good point, Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors… they don’t use EMC storage, they just run on commodity storage. And then of course, lo and behold, you know, AWS trots out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit.

Zeus Kerravala: The key is how much innovation you can drive into the systems

Well, [the industry has] been falling for this forever, right? I mean, all the way back to the turn of the century, we were calling for the commoditization of hardware. And it’s never really happened, because as long as you can drive innovation into it, customers will always lean toward the innovation cycles, because they get more features faster and things like that. And so the vendors have done a good job of keeping that cycle up, but it’ll be a long time before that happens.

The shift from processor-centric to connect-centric

Dave Vellante: Yeah, and that’s why you see companies like Pure Storage – a storage company with 69% gross margins. All right, I want to jump ahead. We’re going to bring up slide four. I want to go back to something that Bob O’Donnell was talking about, the sort of supporting act: the diversity of silicon. We’ve marched to the cadence of Moore’s Law for decades. We asked the question, is Moore’s Law dead? We say it’s moderating. Dave Nicholson, you want to talk about those supporting components, and you shared with us a slide that speaks to that. You call it a shift from a processor-centric world to a connect-centric world. What do you mean by that? Let’s bring up slide four and you can talk to that.

David Nicholson: A lot of work goes into balancing a system. It’s a tricky game of Whac-A-Mole

Yeah. So first, I want to echo the sentiment that the answer to the question “Does hardware matter?” is, of course it matters. Maybe the real question should be: should you care about it? And the answer to that is, it depends who you are. If you’re an end user running an application on your mobile device, maybe you don’t care how the architecture is put together. You just care that the service is delivered. But as you back away from that and get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you’re going to get for your buck. So it’s not only a quantitative question in terms of how much you can deliver.

But it also ends up being a qualitative change, as capabilities allow for things we couldn’t do before because we just didn’t have the aggregate horsepower to do them. So this chart actually comes out of some performance tests that were done. It happens to be Dell servers with Broadcom components. And the point here was to peel off the top of the server and look at what’s in that server, starting with the PCI interconnect – PCIe Gen3, Gen4, moving forward. What are the effects of the interconnect on application performance, translating into new orders per minute, orders processed per dollar, et cetera?

If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we’re at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt-hour of power, per dollar of input. So the key thing this is highlighting, as a very specific example: you take a card that’s designed as a Gen3 PCIe device and you plug it into a Gen4 slot.

Now the card is the bottleneck. You plug a Gen4 card into a Gen4 slot; now the Gen4 slot is the bottleneck. So we’re constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective; it’s critically important. So there’s no question that it matters, but of course various people in this food chain won’t care where it comes from. I guess a good analogy might be: where does our food come from? If I get a steak, it’s a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They’re critically important.
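
A rough way to quantify Nicholson’s Gen3-card-in-a-Gen4-slot example is to compare theoretical link bandwidth: the slower of the card and the slot sets the ceiling. The figures below are approximate x16 maximums with 128b/130b encoding; real-world throughput is lower:

```python
# Approximate theoretical x16 bandwidth per PCIe generation (GB/s),
# and the simple rule that the slower side of the link is the bottleneck.
PCIE_X16_GBPS = {3: 15.75, 4: 31.5, 5: 63.0}  # 128b/130b encoded maximums

def link_ceiling(card_gen: int, slot_gen: int) -> float:
    """Effective ceiling of a card/slot pairing: the slower generation wins."""
    return min(PCIE_X16_GBPS[card_gen], PCIE_X16_GBPS[slot_gen])

print(link_ceiling(3, 4))  # Gen3 card in a Gen4 slot -> ~15.75 GB/s (card-bound)
print(link_ceiling(4, 4))  # Gen4 card in a Gen4 slot -> ~31.5 GB/s (slot-bound)
```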

Dave Vellante: OK, I want to get to what this all means to customers. What I’m hearing from you is that balancing a system is becoming more complicated. I’ve been waiting for this day for a long time, because as we all know the bottleneck was always the spinning disk, the last mechanical device in systems. So people who wrote software knew that when they were doing a write, the disk had to go and do stuff, and programmers designed systems with that in mind. And now, with all these new high-performance interconnects and flash storage and the ability to bypass chatty interfaces and do things like atomic writes, that opens up new software possibilities. Combine that with alternative processors and modern supporting components.

What’s the “so what?” on this to the customer and the application impact? Can anybody address that?

Marc Staimer: You have this interconnect problem and that goes beyond the chip. Moving data is the biggest gate to performance

Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said and David said. So I’m a bit of a contrarian on some of this. For example, on the chip side: as the chips get smaller – 14-nanometer, 10-nanometer, five-nanometer, soon three-nanometer – we talk about more cores, but the biggest problem on the chip is the interconnect from the chip, because the wires get smaller. People don’t realize that in 2004 the latency on those wires in the chips was 80 picoseconds. Today it’s 1,300 picoseconds. That’s on the chip. This is why they’re not getting faster. So we may be seeing a bit of a slowdown in Moore’s Law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip.

It goes within the system, composable architectures. It goes to the point that Keith made: ultimately you need a hybrid, because what I’m seeing when I’m talking to customers is that the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center or between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let’s say, your transactional database to your machine learning, the bottleneck is moving the data. And so when you look at it from a distributed environment, now you’ve got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute – the software running on hardware – closer to the data.

Dave Vellante: So is this what Nicholson means when he talks about a shift from a processor-centric world to a connectivity-centric world – you’re talking about moving the bits across all the different components?

Marc Staimer: Speed-of-light latency remains a key bottleneck

Well, that’s one of them, and there are a lot of different bottlenecks, but it’s the data movement itself. It’s moving away from: wait, why do we need to move the data? Can we move the compute, the processing, closer to the data? Because if we keep them separate – and this has been a trend now where people are moving processing away from the data – it’s like the edge. I think it was Zeus or David who was talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it’s a sensor, how do you do AI at the edge when you don’t have enough power and you don’t have enough compute? People are inventing chips to do that – to do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing.

Because the latency is always limited by the speed of light. How fast can you move the electrons? All this interconnecting, all the processing, and all the improvement we’re seeing in the PCIe bus – from three, to four, to five, to CXL, to higher bandwidth on the network – that’s all great, but none of it deals with speed-of-light latency.
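
To put a number on Staimer’s speed-of-light point: signals in optical fiber propagate at roughly two-thirds the speed of light in a vacuum, so distance alone imposes a latency floor that no faster bus or NIC can remove. A minimal sketch, assuming ~200,000 km/s in fiber and ignoring switching, queuing and protocol overhead:

```python
# Rough lower bound on round-trip latency imposed by distance alone,
# assuming light in fiber travels at ~200,000 km/s (about 2/3 of c) and
# ignoring all switching, queuing and protocol overhead.
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s == 200 km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("metro data centers", 50),
                  ("coast-to-coast US", 4000),
                  ("transatlantic", 6000)]:
    print(f"{label:>20}: >= {min_rtt_ms(km):.1f} ms round trip")
```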

David Nicholson: Making data movement more efficient at the micro level is what’s happening

You know, Marc, you’re looking at this from a macro level, which I think is what you’re describing. You can also look at it at a more micro level, from a systems design perspective, right? I’m going to be the resident knuckle-dragging hardware guy on the panel today. But it’s exactly right. Moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? So again, in some of this testing that I’m referring to, we saw dramatic improvements when you basically took that work off the CPU instead of using CPU horsepower for things like the I/O. Now you have essentially offload engines in the form of storage controllers, RAID controllers and, of course for Ethernet, NICs and smart NICs.

And so you can have these sorts of offload engines, and we’ve gone through these waves over time. People think, well, wait a minute: a RAID controller and NVMe flash storage devices – does that make sense? It turns out it does. Why? Because you’re actually, at a micro level, doing exactly what you’re referring to. You’re bringing compute closer to the data – now, closer to the data meaning closer to the data storage subsystem. It doesn’t solve the macro issue that you’re referring to, but it is important. Again, going back to this idea of system design optimization: always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt-hour of power and every dollar.

Zeus Kerravala: New architectural designs often require partnering

Well, this whole drive for performance has created some really interesting architectural designs, right? Like Nicholson was saying, I think of the rise of the DPU, right? It brings more processing power into systems that already had a lot of processing power. There’s also been some really interesting innovation in the area of systems architecture, too. If you look at the way Nvidia goes to market, their DRIVE kit is a prebuilt piece of hardware optimized for self-driving cars, right? They partnered with Pure Storage and Arista to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure, about this when the three companies rolled that out. He said, “Look, if you’re going to do AI, you need fast storage, a fast processor and a fast network.” And so for customers to be able to put that together themselves was very, very difficult.

There’s a lot of software that needs tuning as well. So the three companies partnered to create a fully integrated, turnkey hardware system with a bunch of optimized software that runs on it. In that case, in some ways the hardware was leading the software innovation. And so the variety of different architectures we have today around hardware has really exploded. I think part of it is what Bob brought up at the beginning about the different chip designs.

Dave Vellante: Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that’s done in the cloud and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that’s a radically different architecture, Bob, isn’t it?

Bob O’Donnell: This all boils down to systems architecture at every level… and hardware matters more than ever

It is, it’s a completely different architecture. And just to follow up on a couple of points – excellent conversation, guys. Dave [Nicholson] talked about system architecture, and really that’s what this boils down to, right? But it’s looking at architecture at every level. I was talking about the individual different components and the new interconnect methods. There’s this new thing called UCIe – universal connection something, I forget what it stands for exactly – but it’s a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, because it’s all fine and good if you have this SOC [system on chip] that’s tuned and optimized, but it has to talk to the rest of the system. And that’s where you see other issues. And you’ve seen things like CXL and other interconnect standards, and nobody likes to talk about interconnect because it’s really wonky and really technical and not that sexy, but at the end of the day it’s incredibly important.

To the other points that were being raised – like Marc raised, for example, about getting that compute closer to where the data is – that’s where, again, a diversity of chip architectures helps. And exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we’re seeing in semiconductor design: the ability to, for example – maybe it’s an FPGA, maybe it’s a dedicated AI chip – use another kind of chip architecture that’s being created to do that inferencing on the edge. Because again, the cost and the challenges of moving lots of data, whether it be from, say, a smartphone to a cloud-based application, or from a private network to a cloud, or any other permutation we can think of, really matter. And the other thing is we’re tackling bigger problems.

So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of the east-west data center movement conversation that we hear Nvidia and others talk to, it’s about combining multiple sets of these systems to function together more efficiently again with even bigger sets of data. So really it’s about tackling where the processing is needed, having the interconnect and the ability to get the data where you need it, to the right place at the right time. And because those needs are diversifying, we’re just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential, I would argue, than it is today. So I think what we’re going to see not only does hardware matter, it’s going to matter even more in the future than it does now.

How has the demand for hardware management skills changed in IT?

Dave Vellante: Great discussion, guys. I want to bring Keith back into the conversation here. Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So clearly hardware matters, but with software-defined everything, do people with hardware expertise matter outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset, VMware, so it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined and hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT?

Keith Townsend: Skills are shifting to where value lies in making data available and recoverable

I’ll take a different view of it. If you’re a data analyst and your primary value-add is that you do ETL transformation – I talked to a CDO, a chief data officer, at a midsize bank a little while ago. He said 80% of his data scientists’ time is spent on ETL. Super-not-value-add. He wants his data scientists to do data science work. Chances are, if your only value is that you do LUN provisioning, then you probably don’t have a job now. The technologies have gotten much more intelligent. We want to give infrastructure pros the opportunity to shine, and I think the software-defined nature and the automation that we’re seeing vendors undertake – whether it’s Dell, HPE, Lenovo, take your pick, or Pure Storage and NetApp – doing the automation and the ML needed means these practitioners don’t spend 80% of their time doing LUN provisioning and can focus on their true expertise, which is ensuring that data is stored and recoverable.

Data is retrievable, data’s protected, et cetera. I think the shift is to focus on that part of the job: ensuring it no matter where the data is, because my data is spread across the enterprise, across hybrid environments of different types. You know, Dave, you talk about the supercloud a lot. If my data is in the supercloud, protecting that data and securing that data becomes much more complicated than when it was just me procuring or provisioning LUNs. So when you ask where the shift is, it’s focusing on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels they need, within the price point they need, and get those datasets to where they need them.

One last point about this interconnecting: I have this vision, and I think we all do, of composable infrastructure – this idea that scale-out does not solve every problem. The cloud can give me infinite scale-out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances. That single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that simply don’t scale out.

Dave Vellante: You know, wow, so many interesting points there. I just interviewed Zhamak Dehghani, the founder of data mesh, last week, and she made a really interesting point. She said, “Think about it: we have separate stacks. We have an application stack and we have a data pipeline stack, and the transaction systems, the transaction database – we extract data from that,” to your point, “we ETL it in, you know, it takes forever. And then we have this separate sort of data stack. If we’re going to inject more intelligence and data and AI into applications, those two stacks,” her contention is, “have to come together.” And when you think about supercloud bringing compute to data, that was what Hadoop was supposed to be. It ended up all sort of going into a central location, the cloud. But it’s almost a rhetorical question.

I mean, it seems that that necessitates new thinking around hardware architectures as more data moves to the edge. And to your point, Keith, it’s really hard to secure that data. So then you think about offloads. You’ve heard the stats – Nvidia talks about it, Broadcom talks about it – that 25% to 30% of CPU cycles are wasted doing things like storage, networking or security processing that could be offloaded. Zeus, you may have a comment on this. It seems like new architectures need to come along to support what Keith and I just spewed.
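
Taking the quoted 25% to 30% figure at face value, a quick calculation shows why offload is attractive: if 30% of a server’s CPU cycles go to storage, networking and security processing, moving that work to a DPU or smart NIC turns the remaining 70% into effectively 100% of the host – roughly a 1.4X gain in application capacity per server, before any performance benefit from the offload hardware itself:

```python
# Back-of-the-envelope: application capacity reclaimed by offloading
# infrastructure tasks (storage, networking, security) from host CPUs,
# using the 25%-30% range cited in the discussion.
def capacity_gain(offloadable_fraction: float) -> float:
    """Relative gain in CPU cycles available to applications after offload."""
    return 1.0 / (1.0 - offloadable_fraction)

for frac in (0.25, 0.30):
    print(f"{frac:.0%} offloaded -> {capacity_gain(frac):.2f}x "
          f"application capacity per server")
```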

Zeus Kerravala: The edge brings in distributed compute models and that’s a major change in architecture

Yeah, and by the way, Keith, it’s the point I made at the beginning too, about engineers needing to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year: when they surveyed their engineer base, only about a third of them had ever made an API call, which kind of shows the big skillset change that has to come. But on the point of architectures, I think the big change here is the edge, because it brings in distributed compute models. Historically, when you think about compute, even with multicloud, we never really had multicloud. We’d use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what we create is the rise of distributed computing, where we’ll have an application that actually accesses different resources at different edge locations.

And I think, Marc, you were talking about this: the edge could be in your IoT device. It could be your campus edge. It could be the cellular edge, it could be your car, right? And so we need to start thinking about how our applications interact with all those different parts of that edge ecosystem to create a single experience. A lot of consumer apps largely work that way. If you think of Uber, right? It pulls in information from all kinds of different edge applications, edge services, and it creates a pretty cool experience. We’re just starting to get to that point in the business world now. There are a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more toward turnkey systems, at least initially, in order to ensure performance and security.

The integration of hardware and software

Dave Vellante: Marc, I want to go to you. You had indicated to me that you wanted to chat about this. You’ve written quite a bit about the integration of hardware and software. We’ve watched Oracle’s move: buying Sun and then basically using that in a highly differentiated approach – engineered systems. What’s your take on all that? I know you also have some thoughts on the shift from CapEx to OpEx, so chime in on that.

Marc Staimer: If you own the full stack, there are things you can do that drive competitive advantage

Sure. When you look at it, there are advantages to having one vendor who has the software and hardware. They can synergistically make them work together in ways you can’t on a commodity basis. If you own the software and the hardware – an example would be Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane; they basically make their database servers work with Optane memory in their storage systems – not NVMe, SSD, etc. I’m talking about the cards themselves. So there are advantages you can take, leveraging both the software and the hardware, if you own the stack, as you were pointing out earlier, Dave.

OK, that’s great. But the other side of that is that it tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less, but you get less performance. As Zeus said earlier, it depends where you’re running your application. How much performance do you need? What kind of performance do you need? One of the issues about moving to the edge is: what kind of processing do you need? If you’re running in a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have so that you can run this? And more importantly, do you have to take the data you’re getting, move it somewhere else, get it processed and then send the information back? I mean, there are companies out there like Brain that have developed AI chips that can run on the sensor without a CPU, without any additional memory.

So, I mean, there’s innovation going on to deal with this question of data movement. There are companies out there like Tachyon that are combining GPUs, CPUs and DPUs in a single chip. Think of it as a super-composable architecture. They’re looking at being able to do more in less.

A Dell VMware thought exercise

Dave Vellante: I just wanted to pick up on something you said about integrated hardware and software. I mean, other than the fact that, you know, Michael Dell unlocked, whatever, $40 billion for himself and Silver Lake, I was always a fan of a spin-in, where VMware basically becomes the Oracle of hardware. Now I know it would’ve been a nightmare for the ecosystem, and culturally they probably would’ve had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper.

Keith Townsend: “I never liked Dell owning VMware… but customers loved it” 

I got to eat a little crow. I did not like the Dell VMware acquisition, and I think it hurt the industry in general. HPE and Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I got to be honest: they absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin-in – when we talk about the ability to innovate and the ability to create solutions that you simply can’t create because you don’t have the full stack – Dell was well-positioned to do that with a potential spin-in of VMware.

Zeus Kerravala: VMware gave Dell certain advantages they could have leveraged even more

Yeah, in fact, I think you’re right, Keith: it was terrible for the industry, great for Dell. And I remember talking to Chad Sakac when he was running, you know, VCE, which became VxRack and VxRail. Their ability to stay in lockstep with what VMware was doing – what was the No. 1 workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyperconverged market and just started taking share because of that relationship. So, you know, from a Dell perspective I thought it gave them a pretty big advantage that they didn’t really exploit across their other properties, right?

Networking and servers and things like that they could have leveraged, given the dominance that VMware had. From an industry perspective, though, I do think it’s better to have them decoupled.

Dave Vellante: I agree. I mean, I think they could have dominated in supercloud and maybe they would become the next Oracle [of hardware] where everybody hates ’em, but they kick ass.

We’ve got to wrap up here. So what I’m going to ask you is – and I’m going to reverse the order this time – what are your big takeaways from this conversation today, any final thoughts, any research that you’re working on that you want to highlight, or what you’re looking for in the future? Try to keep it brief. We’ll go in reverse order.

Key takeaways from the power panel

Marc Staimer: 1) OpEx versus CapEx: The cloud changed the market’s perception of hardware; 2) Data movement remains the key bottleneck

Sure. On the research front, I’m working on a total cost of ownership comparison of an integrated database, analytics and machine learning system versus separate services. The other aspect I wanted to chat about quickly is OpEx versus CapEx. The cloud changed the market’s perception of hardware in the sense that you can use or buy hardware the way you do software: as you use it, you pay for what you use, in arrears. The good thing about that is you’re only paying for what you use, period. You’re not paying for what you don’t use – I mean compute time, everything else. The bad side is that you have no predictability in your bill. It’s elastic, but every user I’ve talked to says every month it’s different. And from a budgeting perspective, it’s very hard to set up your budget year to year, and it’s causing a lot of nightmares.

So it’s just something to be aware of. From a CapEx perspective, you have no more CapEx if you’re using that kind of base system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest takeaway from this is that the biggest issue right now, for everybody I talk to, in some shape or form comes down to data movement – whether it be the ETL that you talked about, Keith, or other aspects: moving it between hybrid locations, moving it within a system, moving it within a chip. All of those are key issues.

Keith Townsend: Building the real-world experience of a hybrid data center

Again, I’m going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of an all-primary data center to a hybrid, about which I have this hard-earned philosophy that enterprise IT is additive: when we add a service, we rarely subtract a service. So the landscape and service area that we support have to grow. So our research focuses on taking that walk. We are taking a monolithic application, decomposing it into containers, putting that in a public cloud, connecting it back to a private data center, and telling that story and walking that walk with our customers.

David Nicholson: Under the hood of the car, looking closely at the various components that contributed to a balanced system

You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I’m sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion’s share of spend will still be in coming years, which is on-prem – and then, of course, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. We all know Sapphire Rapids has been pushed into the future. When’s the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by, from a practitioner’s standpoint, asking: well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation when we know the next generation is coming?

And so I’ve been very, very focused on these connectivity components like RAID controllers and NICs – I know it’s not as sexy as talking about cloud – and just how these components completely change the game and can actually justify movement from, say, a 14th-generation architecture to a 15th-generation architecture today, even though Gen16 is coming, let’s say, 12 months from now. So that’s where I am. Keep my phone number in the Rolodex. I literally reference the Rolodex intentionally because, like I said, I’m in there under the hood and it’s not as sexy.

Dave Vellante: Well, you know, to paraphrase – maybe a derivative paraphrase of – Larry Ellison’s rant on “what is cloud?”: it’s operating systems and servers and storage and databases, et cetera. RAID controllers, NICs and all those supporting components live inside of clouds. All right. You know, one of the reasons I love working with you guys is because you all have such a wide observation space, and Zeus Kerravala, you of all people have your fingers in a lot of pies. So give us your final thoughts.

Zeus Kerravala: Consumer experiences are coming to the enterprise

I’m not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of the research I’m doing now is on the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and the experiences they deliver to their customers are really differentiating how they go to market. And so they’re looking at these different ways of serving up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We’re putting more data, more compute, in more places, all the way down to little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn’t have a lot of choice in things, right?

We had a server that was rack-mount or standup, right? And there wasn’t a whole lot of difference in choice. But today we can deploy these really high-performance compute systems on little blades inside servers, or inside autonomous vehicles and things. I think, from here, just the choice of what we have and the way hardware and software work together is really going to change the way we do things. We’re already seeing that, like I said, in the consumer world, right? There are so many things you can do from a smartphone perspective – natural language processing, stuff like that – and it’s starting to hit businesses now. So just wait and watch the next five years.

Bob O’Donnell: 1) The combination of hardware and software coming together is critically important; 2) The diversity of silicon architectures and how software adapts to this new world

Look, it’s been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is this: it’s the combination of the hardware and the software coming together, and the manner in which that needs to happen, that I think is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it’s going to be about how software tools evolve to adapt to that new world. So I look at things like what Intel’s trying to do with oneAPI, what Nvidia has done with CUDA, and what other platform companies are trying to do to create tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there.

And so, as those software development environments and tools evolve to take advantage of these new capabilities, that’s going to open up a lot of interesting opportunities. Figuring out ways to leverage all these new chip architectures, all these new interconnects and all these new system architectures, and to make that all happen, is going to be critically important. And then finally, I’ll mention the research I’m currently working on: private 5G, how companies are thinking about deploying private 5G and the potential edge applications for it. So I’m doing a survey of several hundred U.S. companies as we speak, and I’m really looking forward to getting that done in the next couple of weeks.

Keep in touch

Thanks to all our panelists today. And thanks to Stephanie Chan, who researches topics for Breaking Analysis. Alex Myerson is on production, the podcasts and media workflows. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com, DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.


All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Image: Funtap/Adobe Stock
