UPDATED 11:56 EDT / APRIL 10 2021

A new era of innovation: Moore’s Law is not dead and AI is ready to explode

Moore’s Law is dead, right? Think again.

Although the historical annual improvement of about 40% in central processing unit performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented and massive improvements in processing power combined with data and artificial intelligence will completely change the way we think about designing hardware, writing software and applying technology to businesses.

Every industry will be disrupted. You hear that all the time. Well, it’s absolutely true and we’re going to explain why and what it all means.

In this Breaking Analysis, we’re going to unveil some data that suggests we’re entering a new era of innovation where inexpensive processing capabilities will power an explosion of machine intelligence applications. We’ll also tell you what new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade.

Is Moore’s Law really dead?

We’ve heard it hundreds of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, SiliconANGLE and even industry associations that marched to the cadence of Moore’s Law. But our friend and colleague Patrick Moorhead got it right when he said:

Moore’s Law, by the strictest definition of doubling chip densities every two years, isn’t happening anymore.

And that’s true. He’s absolutely correct. However, he couched that statement saying “by the strictest definition” for a reason… because he’s smart enough to know that the chip industry is masterful at figuring out workarounds.

Historical performance curves are being shattered

The graphic below is proof that the death of Moore’s Law by its strictest definition is irrelevant.

The fact is that the historical outcome of Moore’s Law is actually accelerating, quite dramatically. This graphic digs into the progression of Apple Inc.’s system-on-chip developments, from the A9 through the A14, the five-nanometer Bionic system on a chip.

The vertical axis shows operations per second and the horizontal axis shows time for three processor types: the CPU, measured in terahertz (the blue line, which you can hardly see); the graphics processing unit or GPU, measured in trillions of floating-point operations per second (orange); and the neural processing unit or NPU, measured in trillions of operations per second (the exploding gray area).

Many folks will remember that historically, we rushed out to buy the latest and greatest personal computer because the newer models had faster cycle times, that is, more gigahertz. The outcome of Moore’s Law was that performance would double every 24 months or about 40% annually. CPU performance improvements have now slowed to roughly 30% annually, so technically speaking, Moore’s Law is dead.

Apple’s SoC performance shatters the norm

Combined, the improvements in Apple’s SoC since 2015 have been on a pace of roughly 118% annual improvement for the three processor types shown above. And that figure understates the gains, because it doesn’t count the impact of the digital signal processors and accelerator components of the system, which would push it even higher.
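To make the math concrete, here’s a quick back-of-the-envelope sketch of these growth rates in Python. The doubling-period conversion is standard arithmetic; the start and end performance figures are purely hypothetical stand-ins, not the actual values behind the chart, chosen only to show how a compound annual growth rate north of 100% falls out of a large multi-year gain.

# Back-of-the-envelope growth-rate math. The performance figures are hypothetical
# stand-ins, not the actual data behind the chart above.

# Moore's Law framing: performance doubles every 24 months.
doubling_period_years = 2
annual_growth = 2 ** (1 / doubling_period_years) - 1
print(f"Doubling every {doubling_period_years} years ~= {annual_growth:.0%} per year")  # ~41%

# Compound annual growth rate (CAGR) for a combined SoC throughput figure.
# Hypothetical: combined ops/sec rising 50x between 2015 and 2020.
start_perf, end_perf, years = 1.0, 50.0, 5
cagr = (end_perf / start_perf) ** (1 / years) - 1
print(f"A hypothetical 50x gain over {years} years ~= {cagr:.0%} per year")  # ~119%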

Apple’s A14 shown above on the right is quite amazing with its 64-bit architecture, multiple cores and alternative processor types. But the important thing is what you can do with all this processing power – in an iPhone! The types of AI continue to evolve from facial recognition to speech and natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand.

Quite incredible.

Processing goes to the edge – networks and storage become the bottlenecks

We recently reported Microsoft Corp. Chief Executive Satya Nadella’s epic quote that we’ve reached peak centralization. The graphic below paints a picture that is telling. We just shared above that processing power is accelerating at unprecedented rates. And costs are dropping like a rock. Apple’s A14 costs the company $50 per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators that will optimize energy use and save 10% annually on power consumption. They said that chip will cost $1 — a buck to shave 10% off your electricity bill from the fridge.
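As a rough sanity check on that refrigerator claim, here’s an illustrative payback calculation. The annual consumption and electricity price below are our own assumptions for the sake of the arithmetic, not figures from Arm.

# Illustrative payback math for a $1 chip that trims fridge energy use by 10%.
# The consumption and price figures below are assumptions, not Arm's numbers.
fridge_kwh_per_year = 500      # assumed annual consumption of a typical refrigerator
price_per_kwh = 0.13           # assumed electricity price in dollars
chip_cost = 1.00               # per Arm's v9 comments
savings_rate = 0.10            # 10% energy savings, per the claim

annual_savings = fridge_kwh_per_year * price_per_kwh * savings_rate
payback_months = chip_cost / annual_savings * 12
print(f"~${annual_savings:.2f} saved per year; payback in ~{payback_months:.1f} months")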

Processing is plentiful and cheap. But look at where the expensive bottlenecks are: networks and storage. So what does this mean?

It means that processing is going to get pushed to the edge – wherever the data is born. Storage and networking will become increasingly distributed and decentralized, with custom silicon and processing power placed throughout the system and AI embedded to optimize workloads for latency, performance, bandwidth, security and other dimensions of value.

And remember, most of the data – 99% – will stay at the edge. We like to use Tesla Inc. as an example. The vast majority of data a Tesla car creates will never go back to the cloud. It doesn’t even get persisted. Tesla saves perhaps five minutes of data. But some data will connect occasionally back to the cloud to train AI models – we’ll come back to that.

But this picture above says if you’re a hardware company, you’d better start thinking about how to take advantage of that blue line, the explosion of processing power. Dell Technologies Inc., Hewlett Packard Enterprise Co., Pure Storage Inc., NetApp Inc. and the like are either going to start designing custom silicon or they’re going to be disrupted, in our view. Amazon Web Services Inc., Google LLC and Microsoft are all doing it for a reason, as are Cisco Systems Inc. and IBM Corp. As cloud consultant Sarbjeet Johal has said, “this is not your grandfather’s semiconductor business.”

And if you’re a software engineer, you’re going to be writing applications that take advantage of all the data being collected and bringing to bear this immense processing power to create new capabilities like we’ve never seen before.

AI everywhere

Massive increases in processing power and cheap silicon will power the next wave of AI, machine intelligence, machine learning and deep learning.

We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this:

There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor.

It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time.

Deep learning is a more advanced type of machine learning that uses more complex math.
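To make that “models improve as they get more data” idea a bit more concrete, here’s a minimal sketch. It uses a plain least-squares fit on synthetic data, which is about as simple as machine learning gets; it’s purely illustrative and not any of the production models discussed in this post.

# Minimal sketch: a model's error on held-out data tends to drop as training data grows.
# Purely illustrative; this is a plain least-squares fit, not a production ML pipeline.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, size=(n, 1))
    y = 3.0 * x[:, 0] + 0.5 + rng.normal(scale=0.3, size=n)  # noisy linear relationship
    return x, y

x_test, y_test = make_data(1000)

for n_train in (10, 100, 1000, 10000):
    x_tr, y_tr = make_data(n_train)
    X = np.hstack([x_tr, np.ones((n_train, 1))])              # add a bias column
    w, *_ = np.linalg.lstsq(X, y_tr, rcond=None)               # fit the weights
    preds = np.hstack([x_test, np.ones((len(x_test), 1))]) @ w
    mse = np.mean((preds - y_test) ** 2)
    print(f"{n_train:>6} training points -> test MSE {mse:.4f}")

Generally the held-out error shrinks toward the noise floor as the training set grows, which is the “gets smarter with more data” dynamic in miniature.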

The right side of the chart above shows the two broad elements of AI. The point we want to make here is that much of the activity in AI today is focused on building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years.

AI inference unlocks huge value

Inference is the deployment of the model, taking real-time data from sensors, processing data locally, applying the training that has been developed in the cloud and making micro-adjustments in real time.

Let’s take an example. We love car examples, and observing Tesla is instructive as a good model for how the edge may evolve. So think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes as inputs data on friction, road conditions, angles of the tires, tire wear, tire pressure and the like. And the model builders keep testing, adding data and iterating the model until it’s ready to be deployed.

Then the intelligence from this model goes into an inference engine, which is a chip running software, that goes into a car and gets data from sensors and makes micro adjustments in real time on steering and braking and the like. Now as we said before, Tesla persists the data for a very short period of time because there’s so much data. But it can choose to store certain data selectively if needed to send back to the cloud and further train the model. For example, if an animal runs into the road during slick conditions, maybe Tesla persists that data snapshot, sends it back to the cloud, combines it with other data and further perfects the model to improve safety.
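Here’s a hypothetical sketch of that loop in Python. Every name, threshold and stand-in function below is invented for illustration – this is not Tesla’s software or any vendor’s API – but it captures the pattern: local inference on each sensor frame, a short rolling window of raw data, and selective persistence of surprising events for retraining in the cloud.

# Hypothetical sketch of the edge inference loop described above: a locally deployed
# model scores each sensor frame, the controller applies a micro-adjustment, only a
# short rolling window of raw data is kept, and "surprising" frames are queued for the
# cloud to retrain the model. All names and thresholds are invented for illustration.
import collections
import random
import time

ROLLING_WINDOW_SECONDS = 300          # keep roughly five minutes of raw frames locally
recent_frames = collections.deque()   # (timestamp, frame) pairs
cloud_upload_queue = []               # snapshots selected for model retraining

def read_sensors():
    # Stand-in for real sensor input: friction, tire angle, tire pressure, etc.
    return {"friction": random.uniform(0.1, 1.0), "tire_angle": random.uniform(-5, 5)}

def local_inference(frame):
    # Stand-in for the cloud-trained model deployed on the inference chip.
    steering_adjust = -0.1 * frame["tire_angle"] * (1.0 / max(frame["friction"], 0.1))
    return {"steering_adjust": steering_adjust}

def is_surprising(frame):
    # Stand-in for an anomaly test, e.g. very low friction (slick road).
    return frame["friction"] < 0.2

for _ in range(1000):                 # control loop; in a vehicle this runs continuously
    now = time.time()
    frame = read_sensors()
    action = local_inference(frame)   # no round trip to the cloud
    # apply_controls(action) would actuate steering and braking here

    recent_frames.append((now, frame))
    while recent_frames and now - recent_frames[0][0] > ROLLING_WINDOW_SECONDS:
        recent_frames.popleft()       # raw data is not persisted beyond the window

    if is_surprising(frame):
        cloud_upload_queue.append(frame)   # selectively persist and send back for retraining

print(f"{len(cloud_upload_queue)} surprising frames queued for cloud retraining")

The design point is the one in the Tesla example: the latency-critical decision never leaves the device, while only the rare, high-value snapshots make the trip back to the cloud.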

This is just one example of thousands of AI inference use cases that will further develop in the coming decade.

AI value shifts from modeling to inferencing

This conceptual chart below shows percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these apps will mature over time as inference becomes more mainstream. The opportunities for AI inference at the edge and in the “internet of things” are enormous.

 

Modeling will continue to be important. Today’s prevalent modeling workloads in fraud, adtech, weather, pricing, recommendation engines and more will just keep getting better and better. But inference, we think, is where the rubber meets the road, as shown in the previous example.

And in the middle of the graphic we show the industries, which will all be transformed by these trends.

One other point on that: Moschella in his book explains why historically, vertical industries remained pretty stovepiped from each other. They each had their own “stack” of production, supply, logistics, sales, marketing, service, fulfillment and the like. And expertise tended to reside and stay within that industry and companies, for the most part, stuck to their respective swim lanes.

But today we see so many examples of tech giants entering other industries: Amazon entering grocery, media and healthcare, Apple in finance and electric vehicles, Tesla eyeing insurance. There are many examples of tech giants crossing traditional industry boundaries, and the enabler is data. Auto manufacturers over time will have better data than insurance companies, for example. Decentralized finance, or DeFi, platforms built on the blockchain will continue to improve with AI and disrupt traditional payment systems, and on and on.

Hence we believe the oft-repeated bromide that no industry is safe from disruption.

Snapshot of AI in the enterprise

Last week we showed you the chart below from Enterprise Technology Research.

This data shows Net Score, or spending momentum, on the vertical axis. The horizontal axis is Market Share, or pervasiveness in the ETR data set. The red line at 40% is our subjective anchor; anything above 40% is really good in our view.
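For readers who want the mechanics, here’s our rough sketch of how a Net Score-style metric works: take the share of survey respondents who say they’re adding or increasing spend on a platform and subtract the share who say they’re cutting or replacing it. The response buckets and counts below are illustrative only; the ETR tutorial linked at the end of this post covers the official methodology.

# Rough sketch of a Net Score-style calculation: the share of customers in a survey who
# are increasing spend on a vendor minus the share who are cutting it. The response
# buckets and counts are illustrative; see the ETR tutorial for the official methodology.
def net_score(responses):
    total = sum(responses.values())
    positive = responses.get("adopting", 0) + responses.get("increasing", 0)
    negative = responses.get("decreasing", 0) + responses.get("replacing", 0)
    return (positive - negative) / total * 100

sample = {"adopting": 120, "increasing": 300, "flat": 350, "decreasing": 60, "replacing": 20}
print(f"Net Score: {net_score(sample):.1f}%")   # (420 - 80) / 850 -> 40.0%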

Machine learning and AI are the No. 1 area of spending velocity and have been for a while, hence the four stars. Robotic process automation is increasingly an adjacency to AI, and you could argue cloud is where all the machine learning action is taking place today and is another adjacency, although we think AI continues to move out of the cloud for the reasons we just described.

Enterprise AI specialists carve out positions

The chart below shows some of the vendors in the space that are gaining traction. These are the companies chief information officers and information technology buyers associate with their AI/ML spend.

This graph above uses the same Y/X coordinates – Spending Velocity on the vertical axis by Market Share on the horizontal, with the same 40% red line.

The big cloud players, Microsoft, AWS and Google, dominate AI and ML with the most presence. They have the tooling and the data. As we said, lots of modeling is going on in the cloud, but this will be pushed into remote AI inference engines that will have massive processing capabilities collectively. We are moving away from peak centralization and this presents great opportunities to create value and apply AI to industry.

Databricks Inc. is seen as an AI leader and stands out with a strong Net Score and a prominent Market Share. SparkCognition Inc. is off the charts in the upper left with an extremely high Net Score albeit from a small sample. The company applies machine learning to massive data sets. DataRobot Inc. does automated AI – they’re super high on the Y axis. Dataiku Inc. helps create machine learning-based apps. C3.ai Inc. is an enterprise AI company founded and run by Tom Siebel. You see SAP SE, Salesforce.com Inc. and IBM Watson just at the 40% line. Oracle is also in the mix with its autonomous database capabilities and Adobe Inc. shows as well.

The point is that these software companies are all embedding AI into their offerings. And incumbent companies that are trying not to get disrupted can buy AI from software companies. They don’t have to build it themselves. The hard part is how and where to apply AI. And the simple answer is: Follow the data.

Key takeaways

There’s so much more to this story, but let’s leave it there for now and summarize.

We’ve been pounding the table about the post-x86 era and the importance of volume in lowering the costs of semiconductor production, and today we’ve quantified something we haven’t really seen much of elsewhere: the actual performance improvements we’re seeing in processing. Forget Moore’s Law being dead – that’s irrelevant. The original premise is being blown away this decade by SoC designs and the coming system-on-package designs. And who knows what the future holds for performance increases with quantum computing.

These trends are a fundamental enabler of AI applications, and as is most often the case, the innovation is coming from consumer use cases; Apple continues to lead the way. Apple’s integrated hardware-and-software approach will increasingly make its way into the enterprise. Clearly the cloud vendors are moving in that direction, and you see it with Oracle Corp. too. It just makes sense that optimizing hardware and software together will gain momentum because there’s so much opportunity for customization in chips, as we discussed last week with Arm Ltd.’s announcement – and it’s the direction new CEO Pat Gelsinger is taking Intel Corp.

One aside – Gelsinger may face massive challenges with Intel, but he’s right that semiconductor demand is increasing and there’s no end in sight.

If you’re an enterprise, you should not stress about inventing AI. Rather, your focus should be on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You’ll buy, not build AI.

Data, as John Furrier has said many times, is becoming the new development kit. He said that 10 years ago and it’s more true now than ever before:

Data is the new development kit.

If you’re an enterprise hardware player, you will be designing your own chips and writing more software to exploit AI. You’ll be embedding custom silicon and AI throughout your product portfolio and you’ll be increasingly bringing compute to data. Data will mostly stay where it’s created. Systems, storage and networking stacks are all being disrupted.

If you develop software, you now have incredible processing capabilities in the palm of your hand, and you’re going to write new applications to take advantage of this and use AI to change the world. You’ll have to figure out how to get access to the most relevant data, secure your platforms and innovate.

And finally, if you’re a services company, you have many opportunities to help companies that are trying not to be disrupted. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive.

Privacy? AI for good? Those are whole topics on their own, extensively covered by journalists. We think for now it’s prudent to gain a better understanding of how far AI can go before we determine how far it should go and how it should be regulated. Protecting our personal data and privacy is something we should most definitely care about – but generally we’d rather not stifle innovation at this point.

Keep in touch

Remember these episodes are all available as podcasts wherever you listen. Email david.vellante@siliconangle.com, DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon/SiliconANGLE.  If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.

