Harry Potter computing can’t make hardware go ‘poof’ yet
Just because lots of businesses have chucked their data center hardware and moved to cloud infrastructure as a service doesn’t mean hardware has vanished into a black hole. Cloud providers hyper-optimize hardware that can make or break the software and applications further up the stack. So how far should customers poke their noses into cloud providers’ hardware business? And what does this mean for the steady tumbling of workloads to “internet of things” devices connected at the edge of the network?
“I, for one, continue to hope that we’re going to see the Harry Potter computing model show up at some point in time, but until then, magic is not going to run software,” said Peter Burris (@plburris, pictured), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and chief research officer and general manager of Wikibon research.
“It’s going to have to run on hardware, and that has physical and other realities,” Burris said in a recent discussion with the Wikibon research team at theCUBE’s studio in Palo Alto, California.
Just as cloud and as-a-service models are abstracting away hardware complexities from end users, a number of trends are prompting manufacturers to make hardware itself more complex and specialized. Although customers may rent cloud infrastructure like a commodity or utility, its versatility owes much to hardware that is anything but.
Public cloud’s hidden hardware
“When we first saw cloud computing roll out, many people thought that this was just undifferentiated commodity equipment,” said Stu Miniman (@stu), researcher and analyst at Wikibon. In reality, hyperscale cloud providers such as Amazon Web Services Inc. tweak hardware configurations for various application types and scale them to tens of thousands of nodes. In times past, enterprise information technology teams built their own stacks by hand; now they can simply pick and pay for infrastructure as a service, freeing them to focus on the applications and services higher up the stack. Nevertheless, “hardware absolutely matters,” Miniman said. Enterprises will have to be cognizant of how much infrastructure lies behind their providers’ curtain and whether their ignorance is bliss or bane.
Systems integration is important, Miniman added. “The enterprise should not be worried about taking all of the pieces and putting them together; they should be able to buy solutions, leverage platforms that take care of that environment,” he said. When something breaks, enterprises will need to know who is managing the hardware and make sure it actually gets fixed.
The world received an abrupt awakening from the Harry Potter computing dream early this month. The Meltdown and Spectre security vulnerabilities discovered in Intel, ARM and AMD processors could not be waved away with a software wand. The patches provided by operating system developers, including Microsoft, Apple and the Linux kernel community, can slow processing by anywhere from five to 30 percent, according to analysts. They may also cause unexpected reboots and other problems. So far, the kinks have yet to be fully worked out.
Linux creator Linus Torvalds didn’t mince words about Intel’s patches for the Linux kernel: “All of this is pure garbage,” he wrote on a public mailing list. “The patches are COMPLETE AND UTTER GARBAGE. … They do things that do not make sense.” (Editor’s note: Capitals in original text.)
Clearly, faulty hardware can still cause headaches incurable by software alone.
Living, dying and inferring on the edge
Another growing market demanding tailored hardware is that of IoT edge devices. IoT edge platform revenue will shoot up 81 percent in 2018, according to MachNation research. Interestingly, the report emphasizes the importance of software at the edge. “Our research shows that roughly 90 percent of edge complexity is software related,” said Dima Tokar, co-founder and chief technology officer of MachNation.
Close examination of artificial intelligence inferencing at the edge suggests, however, that software may not go far without specialized hardware. “Those devices need to be programmed very, very intently for what is happening there,” said Wikibon researcher David Floyer (@dfloyer). Level-one edge devices will perform crucial preliminary data reduction and decision-making before sending the remainder of data up to level two of an IoT system.
Floyer has extensively researched data life cycles at the edge and concluded in a Wikibon report that 95 percent of IoT data will live and die at the edge, and this will grow to 99 percent over the next decade. Both latency and sheer cost are determining factors.
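To make the level-one reduction Floyer describes concrete, here is a minimal, hypothetical Python sketch of an edge gateway that summarizes a batch of sensor readings locally and forwards only aggregates and outliers upstream. The function names, thresholds and simulated sensor are illustrative assumptions, not anything specified in Wikibon’s research.

```python
# Hypothetical sketch (not from the article): level-one edge data reduction.
# Raw readings are summarized locally; only aggregates and outliers are
# forwarded to the next IoT tier, so most raw data "lives and dies" at the edge.
import random
import statistics

ANOMALY_THRESHOLD = 3.0  # standard deviations from the batch mean (assumed)
BATCH_SIZE = 100         # raw readings collapsed into one upstream message (assumed)

def read_sensor_batch(size):
    """Stand-in for a real sensor driver: simulated temperature readings."""
    return [random.gauss(20.0, 0.5) for _ in range(size)]

def reduce_batch(readings):
    """Keep a summary plus any outliers; drop the rest of the raw data."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1e-9
    outliers = [r for r in readings if abs(r - mean) / stdev > ANOMALY_THRESHOLD]
    return {"count": len(readings), "mean": mean, "stdev": stdev, "outliers": outliers}

def send_upstream(summary):
    """Stand-in for forwarding to a level-two hub or gateway (e.g., over MQTT)."""
    print("forwarding summary:", summary)

if __name__ == "__main__":
    send_upstream(reduce_batch(read_sensor_batch(BATCH_SIZE)))
```

In this toy example, 100 raw readings collapse into a single small message; that kind of local decision-making and data reduction is what keeps the bulk of IoT data from ever traversing the network.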
“Edge computing is nothing without hardware on the edge — devices as well as hubs and gateways and so forth to offload and to handle much of the processing needed,” said Wikibon researcher James Kobielus (@jameskobielus).
IoT the hard way
Improved chipsets for hardware at the edge are hitting the market. Nvidia Corp.’s graphics processing units, for example, were not typically optimized for AI in the past; now they incorporate densely packed Tensor Core processing components to handle deep learning neural networks rapidly for AI inference and training. Google LLC’s Tensor Processing Unit is another tensor-processing architecture showing big promise in ecosystems for AI edge computing. Field-programmable gate arrays and application-specific integrated circuits are two other chip types in the running for specialized IoT use cases.
FPGAs in particular have come a long way in recent years. “I always laughed when people said FPGA, because it should have been called FGA, because there was no end user computing of an FPGA,” said Wikibon analyst Neil Raden (@NeilRaden). It’s quite a different story today.
“It gives you that future-proofing, that capability to sustain different topologies, different architectures, different precisions to kind of keep people going with the same piece of hardware without having to say, ‘Spin up a new ASIC,’” Bill Jenkins, product line manager of AI for field programmable gate arrays at Intel, told theCUBE last December during the Supercomputing event in Denver, Colorado.
Vendors combining FPGAs, GPUs and similar hardware in appliances for edge computing might offer specialization options on-premises. “… It’s a system on a chip that’s got transistor real estate for specialized functions, and because it’s not running the same scalable clustered software that you find in the cloud, you have small-footprint software that’s highly verticalized or specialized,” said Wikibon analyst George Gilbert (@ggilbert41).
They will, however, have to balance the complexity with high-touch service, Floyer explained. “It will be their responsibility as the deliverer of a solution to put that together and to make sure it works and that it can be serviced,” he concluded.