A team at Hewlett Packard Labs aims to build the future of computing by developing a new architecture that bridges photonics, memory, compute, hardware and software. They call it The Machine, and with it they hope to do nothing less than define the computing architecture of the future.
The driving force behind The Machine is data. By 2020 there will be 100 billion connected devices, according to Kirk Bresniker, fellow and chief architect at Hewlett Packard Labs. Hewlett Packard Enterprise Co. also stated that the world generated 4.5 zettabytes of data in 2013, a figure the Labs predicted will expand to 180 zettabytes by 2025. Based on those numbers, The Machine is being designed for a roughly 4,000 percent increase in data.
Contrary to those numbers, a 2015 Gartner report predicted only 21 billion Internet of Things devices by 2020. And IHS Technology projected that the IoT market will grow from an installed base of 15.4 billion devices in 2015 to 30.7 billion in 2020 and 75.4 billion in 2025. Whatever the eventual number of connected devices turns out to be, the reality is that today's infrastructure will not have the capacity to keep up with the accelerating pace and growth of data.
Bresniker spoke with Dave Vellante (@dvellante) and Paul Gillin (@pgillin), co-hosts of theCUBE, SiliconANGLE Media’s mobile live streaming studio, during the HPE Discover EU event about the evolution of the project and the dynamics involved in creating the architecture of the future. (*Disclosure below)
Bresniker is theCUBE’s Guest of the Week.
Laying the ground rules
HPE says the team has made significant progress in moving from idea to prototype. “It’s all working. We have taken that original concept and turned it into our first working prototype,” Bresniker said.
The Machine combines system-on-chip microprocessors with memory fabrics that connect compute devices to high-performance memory devices. The team is also developing photonic interconnects so that hundreds of terabytes of fabric-attached memory can handle large-scale workloads.
While the project contains some incredibly sophisticated technology, the team's ground rules were to learn as much as possible, as fast as possible, with an aim of demonstrating the memory fabric, according to Bresniker. Once that was tested, the next step was to use Linux as the operating system and generate new categories of applications, using real customer workloads to show off the system's capabilities.
Building a bridge and an ecosystem
The Machine uses a microprocessor created by a partner, with the team focused on building a bridge around it. “We took that microprocessor off the shelf, brought a brand-new width format for it and put it on the memory fabric … That adds the first element — computational scale — onto a fabric,” explained Bresniker.
The Machine is an open project that gives developers familiar environments: Linux and Portable Operating System Interface (POSIX) APIs, with programming languages such as C/C++ and Java, enabling rapid productivity in the memory-fabric space.
Hewlett Packard Enterprise is also part of the Gen-Z Consortium, a new open industry group of like-minded companies developing an open interconnect standard. Bresniker said the company is working with these companies on open fabric interfaces to ignite innovation. The group includes system vendors such as IBM Corp., Dell Technologies Inc. and Huawei, as well as component vendors such as Advanced Micro Devices Inc., Western Digital Corp. and WD-owned SanDisk.
Characteristics of memory
“What really enables memory is having a set of graphical processing units doing high-performance floating-point computation, [and] general processors shepherding and managing data resources, all pointing at the same piece of memory so they can all update things simultaneously. And that’s where that team gets a breakthrough,” Bresniker illustrated. The result, he said, is dramatic performance increases on existing applications.
As “big data” becomes “huge data,” existing systems will need to change. So, what are the advantages of persistent memory? Bresniker pointed out that with persistent memory, customers no longer have to worry about certain classes of failure, and the traditional stack can be redesigned by removing layers of software, saving money and increasing performance.
Moreover, energy is another big money saver, because persistent memory retains data without using power except when the memory is being read or written.
Bresniker further described new categories of systems of record that generate huge volumes of data. Due to the expense, much of that data today is either not recorded or not treated as mission critical.
“The kind of systems [we will build in the future] are going to generate enormous amounts of data, but they are also mission critical because they are going to run autonomous vehicle fleets, they are going to run intelligent power grids. We need them to be reliable, available and be able to analyze incredible volumes of data in near real time,” he said.
Therefore, many projects are dedicating resources to building the infrastructure of the future now.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of HPE Discover EU. (*Disclosure: HPE and other companies sponsor some HPE Discover EU segments on SiliconANGLE Media’s theCUBE. Neither HPE nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)