UPDATED 08:00 EST / JULY 21 2016

Wave Computing rolls out plans for fast deep-learning computers

Deep learning neural networks, software programs that attempt to emulate the way the brain’s neocortex works, have recently led to striking improvements in speech recognition, image recognition and even game playing. But progress in this branch of machine learning has come in spite of the fact that today’s computers aren’t really built for the kind of massively parallel processing that deep learning requires.

Today, a company called Wave Computing is announcing plans to sell just such a computer starting next year. After six years in stealth development, Wave said it is building a line of deep learning computers based on a new processing architecture it calls the Wave Dataflow Processing Unit (DPU).

Wave Computing CEO Derek Meyer (Photo: Wave Computing)

Derek Meyer, chief executive of the Campbell, Calif.-based company, said its computers will be able to train deep learning models and run them 10 to 100 times faster than conventional architectures based either on central processing unit (CPU) chips from Intel Corp. and other companies or on graphics processing unit (GPU) chips from Nvidia Corp. The latter have recently become the standard for deep learning networks, though other companies are also embedding deep learning into their existing chips.

Wave’s computers are one sign that deep learning has hit the mainstream, decades after it was first conceived. “The deep learning market is just getting to the point to justify the investment in silicon,” said Linley Gwennap, principal analyst at the market research firm The Linley Group Inc. “What Wave is doing is going to be a catalyst.”

Indeed, Wave Computing isn’t alone in its quest to create new kinds of computers aimed specifically at speeding up deep learning. In May, Google Inc. announced that it has been using a custom chip it calls the Tensor Processing Unit, whose basic idea appears similar to Wave Computing’s, in some of its data centers for more than a year. The startup Nervana Systems Inc. announced a cloud deep learning service in February and has been working on a deep learning chip and appliance due out next year as well. Minds.ai is also believed to be working on a deep learning chip.

They’re all examples of a nascent departure from traditional computers using the so-called von Neumann architecture, which requires data to be shuttled back and forth between memory and the processor for each computing operation. That can create a bottleneck, especially for the Big Data applications that are finding their way into every nook and cranny of businesses with the proliferation of data from social networks, search, smartphone sensors and more. GPUs lessen the problem by doing a lot of operations in parallel, but they require a lot of power.
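
To get a feel for why that shuttling matters, consider a rough back-of-envelope sketch of a memory-bound operation such as adding two large vectors. The bandwidth and throughput figures below are illustrative assumptions chosen for the arithmetic, not measurements of Wave’s, Intel’s or Nvidia’s hardware.

```python
# Back-of-envelope sketch of the von Neumann bottleneck for a memory-bound
# operation: adding two large vectors element by element. All hardware
# figures here are illustrative assumptions, not measurements of any
# particular chip.

N = 10**8                    # 100 million single-precision elements
BYTES_PER_ELEMENT = 4

# Each output element requires reading two inputs and writing one result.
bytes_moved = 3 * N * BYTES_PER_ELEMENT
flops = N                    # one addition per element

MEMORY_BANDWIDTH = 500e9     # assumed 500 GB/s to external memory
PEAK_FLOPS = 10e12           # assumed 10 trillion operations per second

time_moving_data = bytes_moved / MEMORY_BANDWIDTH   # seconds
time_computing = flops / PEAK_FLOPS                 # seconds

print(f"Moving the data: {time_moving_data * 1e3:.2f} ms")
print(f"Doing the arithmetic: {time_computing * 1e3:.3f} ms")
# The data movement dominates by a wide margin, which is why keeping data
# close to the processing elements matters more than raw arithmetic speed
# for workloads like this.
```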

Wave says its “dataflow” approach overcomes that issue, making its computers up to 20 times faster than an Nvidia DGX-1 deep learning system. It does this with a chip that has 16,000 parallel processing elements; four of these chips can in turn be connected into a single module. They all share very large amounts of nearby memory as well as wide data pipelines to external memory, so data can flow in and out of the processors much faster.

Architecture of a Dataflow Processing Unit module (Image: Wave Computing)

As Wave explains it, dataflow is akin to a modern auto assembly line, where each station installs one component of the car, one after another in quick succession — in contrast to traditional “control flow” processors, which are more like the old way of building each entire car individually.
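
A minimal sketch in Python can make the analogy concrete. The generator-based “stations” below stream items through a pipeline the way an assembly line would; this is purely a conceptual illustration of the dataflow idea, not a representation of Wave’s actual programming model or hardware.

```python
# Conceptual contrast between "control flow" and an assembly-line style
# pipeline. This is only an illustration of the idea; it does not reflect
# Wave's programming model.

records = range(10)

# Control-flow style: carry each item through every step before starting
# the next one, like building each car individually.
control_flow_results = []
for x in records:
    doubled = x * 2          # station 1
    shifted = doubled + 1    # station 2
    control_flow_results.append(shifted)

# Assembly-line style: each stage is its own station, and items stream
# from one station to the next as soon as they are ready.
def station_double(items):
    for x in items:
        yield x * 2

def station_add_one(items):
    for x in items:
        yield x + 1

pipeline_results = list(station_add_one(station_double(records)))

assert control_flow_results == pipeline_results  # same answers, different structure
```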

“Dataflow is a concept that has been around for a long time,” said Gwennap. “But Big Data problems are very well-suited for this approach.”

Wave’s computers run Google’s TensorFlow machine learning software, which the search giant recently released as open source. They can also run Microsoft’s open-source Computational Network Toolkit (CNTK) software. That means programmers already using these software libraries should be able to use the new computers without a big learning curve. “They are dataflow algorithms at their heart,” Meyer said.
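
That claim is easy to see in the code programmers actually write. The snippet below is a minimal sketch using the graph-and-session interface TensorFlow exposed at the time of writing (the 1.x style): the program first describes a graph of operations, and data then flows through that graph when it is run, which is what makes it a natural fit for a dataflow processor.

```python
# Minimal TensorFlow sketch (1.x graph-and-session style) showing that the
# programmer describes a dataflow graph, which is only later executed as
# data flows through it.
import tensorflow as tf

# Describe the graph: two matrix inputs feeding a matrix-multiply node.
a = tf.placeholder(tf.float32, shape=(2, 3), name="a")
b = tf.placeholder(tf.float32, shape=(3, 2), name="b")
product = tf.matmul(a, b)

# Execute the graph: the input matrices flow through the matmul node.
with tf.Session() as sess:
    result = sess.run(product, feed_dict={
        a: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
        b: [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    })

print(result)  # the 2x2 product of the two inputs
```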

Wave said it expects to provide early access to the computers to hardware platform developers and some customers later this year, with general availability next year. The machines will cost from tens to hundreds of thousands of dollars.

However, delivering new computer and semiconductor architectures is challenging, and the industry is littered with companies such as Trilogy Systems and HAL Computer Systems that failed to meet their revolutionary promises. More recently, Hewlett-Packard Enterprise last year scaled back its ambitions on The Machine, which had been anticipated to use a novel memory component called a memristor, though it expects to show off a prototype using conventional dynamic random-access memory (DRAM) later this year.

Wave, which employs 120 people in the U.S., India, Sri Lanka and Armenia, has raised a total of $53 million from Tallwood Venture Capital, Southern Cross Venture Partners and an engineering contract.

Photo from Pixabay

