UPDATED 22:49 EDT / OCTOBER 31 2016

NEWS

AI chip startup Graphcore lands $30M in new round

A U.K.-based startup that’s looking to build processors focused on artificial intelligence and machine learning workloads has just emerged from stealth with $30 million in funding.

Graphcore, based in Bristol, England, has spent the last two years quietly building a soon-to-be-released "intelligent processing unit." Revealing its plans on Monday, the company announced the Series A round from Samsung Catalyst Fund, Robert Bosch Venture Capital, Amadeus Capital Partners and C4 Ventures.

Graphcore's aim is to build processors that can accelerate machine learning workloads by 10 to 100 times compared with current processors from companies such as Intel Corp. and Nvidia Corp. Beyond raw speed, the company also wants to make AI "more accessible" to businesses, developers and devices alike.

The mission “is to make machine learning faster, easier and more intelligent,” cofounder and Chief Executive Officer Nigel Toon wrote in a blog post. He explained that Graphcore’s technology will bring AI to low-power devices by reducing the cost of accelerating AI apps in the cloud. In addition, the company’s processors will help deep learning applications to evolve into what Toon calls “useful, general artificial intelligence.”

Other, far bigger names in the tech world are also trying to build chips specifically for AI and machine learning tasks. Microsoft Corp., for example, is looking at using existing programmable chips, Google Inc. has designed its own chip, and Intel has bought several startups making chips and systems for AI applications. But specialized chips can be expensive and difficult to program.

“These are stopgaps not long term solutions,” Toon wrote. “Machine intelligence has a very different compute workload from anything that has come before and needs a new approach.”

Graphcore is developing what it calls "highly parallel processors," paired with software and libraries that it says are more flexible, faster and easier to use than Microsoft's field-programmable gate arrays and other offerings. The company hopes to bring its first batch of chips to market next year to power an "IPU Appliance" designed to speed up cloud-based AI apps while reducing costs. Toon claims the new system will be able to accelerate machine learning training tasks by up to 100 times current speeds.

Toon believes that there will be massive demand for the company’s chips in the next few years. He cites a recent report from Bank of America Merrill Lynch which says the AI industry will be worth over $70 billion a year by 2020, and a second study from Tractica LLC that shows spending on deep learning will reach $41.5 billion by 2024.

“We’re on the cusp of big breakthroughs in artificial intelligence,” Toon wrote. “The value to society of intelligent computing will be far greater than that of all computing so far. Our IPU system provides a less restrictive, more efficient and more powerful solution—making it easier and faster to produce applications, devices and machines that are much more intelligent and which can become more and more useful over time.”

