UPDATED 09:00 EDT / AUGUST 13 2019

AI

Are humans and machines on a collision course? A futurist examines our uneasy dance with automation

The next time a McDonald’s Corp. customer is asked if they want fries with their order, the questioner may not be a real person.

The fast-food giant has been testing voice-recognition software at one of its locations in Chicago. And those fries may soon be robot-cooked as well: McDonald’s is piloting the use of robots to drop menu items into deep-fat fryers and serve them up.

Automation is more than a fringe trend in the restaurant industry. Domino’s Pizza Inc. has been testing voice recognition software for phone orders; there’s now a mini-café in Austin, Texas, that makes coffee to order entirely using robots; and Silicon Valley-based Chowbotics Inc. has created a vending machine that can make salads and other bowl-style meals.

As rapid advances in artificial intelligence and machine learning have propelled new technologies into the mainstream, a number of academic researchers are beginning to examine the societal and economic impacts of growing automation. That has led one professor to call for enterprises at the forefront of much of this work to ensure humans aren’t just dropped from the equation.

“It’s time for companies to start stepping up and developing a better combination of humans and machines,” said Tom Davenport (pictured), author of a recently published book “The AI Advantage” and distinguished professor at Babson College. “My argument has always been you can handle innovation better, you can avoid a race to the bottom that automation sometimes leads to if you think creatively about humans and machines working as colleagues.”

Davenport spoke with Dave Vellante and Paul Gillin, co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the MIT CDOIQ Symposium in Cambridge, Massachusetts. They discussed the potential to improve AI development through synthetic data, the impact of smart machines in fields like medicine and retail, concerns around blockchain deployment in the financial world and the need to strive for advances that have real impact (see the full interview with transcript here).

This week, theCUBE features Tom Davenport as its Guest of the Week.

Need for synthetic images

In many of the current use cases, robots are being deployed because they can identify what a French fry or a salad is supposed to look like. Progress in image recognition has enabled the use of machines to handle tasks previously performed by humans.

The problem has been that a robot needs a lot of labeled data to become fully cognizant of the world around it, and without enough of it, learning is stymied. There are more than enough cat pictures on the internet to train machines to recognize them. But what about snow or rain caught in the headlights during autonomous driving at night on a curvy mountain road?

Davenport believes that the solution for moving machines dramatically forward will involve the use of synthetic images.

“That’s one of the reasons we can’t use autonomous vehicles yet because images differ in the rain and snow,” Davenport said. “We’re going to have to have synthetic snow and synthetic rain to identify those images so the GPU chip still realizes that’s a pedestrian walking across the street.”
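The augmentation Davenport describes can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual pipeline: it overlays synthetic “rain” streaks on a labeled training image so a vision model also sees weather-degraded versions of the same scene. The function name and parameters are invented for the example.

```python
# Toy sketch of synthetic weather augmentation: the labeled scene keeps
# its label ("pedestrian"), only the pixels change.
import numpy as np

rng = np.random.default_rng(0)

def add_synthetic_rain(image, num_streaks=200, length=8, brightness=0.8):
    """Overlay short vertical bright streaks on an HxW grayscale image in [0, 1]."""
    h, w = image.shape
    out = image.copy()
    xs = rng.integers(0, w, num_streaks)
    ys = rng.integers(0, h - length, num_streaks)
    for x, y in zip(xs, ys):
        out[y:y + length, x] = np.clip(out[y:y + length, x] + brightness, 0, 1)
    return out

clean = rng.random((64, 64))   # stand-in for a labeled night-driving frame
rainy = add_synthetic_rain(clean)
```

Real systems use far more sophisticated rendering, but the principle is the same: one labeled example becomes many, each degraded in ways the deployed camera will actually encounter.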

Use case for breast cancer

There is already plenty of research into the creation of synthetic images. Earlier this month, researchers at MIT announced an approach that uses synthetic images as the source of a finely focused visual learning model to train robots to knit garments.

Medical researchers have begun using synthetic images to train machine learning algorithms in the detection of various forms of breast cancer. And synthetically generated images are being used to develop software tools that can recognize chemical-weapons canisters used against civilians.

“Right now, just a little bit of variation in the image can throw off the recognition altogether,” Davenport said. “The ability to start generating images via synthetic-labeled data could really make a big difference in how rapidly image recognition works.”

Amazon’s cashierless model

One of the companies that has been pioneering the use of synthetic data in its operations is Amazon.com Inc. Last year, the firm launched Amazon Go, a cashierless convenience store in Seattle, and the concept has since expanded to San Francisco, New York and Chicago. Shoppers enter, select what they want, and walk right out the front door.

It’s all facilitated by computer vision, sensor fusion, and deep learning that handles payment and tracks inventory. Amazon’s heavy dependence on computer vision in a cashierless environment required it to use synthetic data to train the system to handle things like sunlight streaming into the store on a busy day.

In research released just weeks ago, Amazon found that synthetic datasets allowed it to train machine-learning models on errors or negative examples that made its systems dramatically smarter.
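The idea of training on synthetic negative examples can be sketched simply. The snippet below is a hypothetical illustration, not Amazon’s actual method: real labeled positives are padded with synthetically generated error cases so a classifier also learns what a target event does *not* look like. All names and data here are invented.

```python
# Hypothetical sketch: mixing real positives with synthetic negatives
# to give a classifier examples of failure cases it would rarely see
# in collected data.
import numpy as np

rng = np.random.default_rng(1)

real_positives = rng.normal(loc=1.0, scale=0.2, size=(100, 16))       # e.g. genuine "item taken" events
synthetic_negatives = rng.normal(loc=0.0, scale=0.5, size=(300, 16))  # generated error/edge cases

X = np.vstack([real_positives, synthetic_negatives])
y = np.concatenate([np.ones(100), np.zeros(300)])

# Shuffle so training batches interleave real and synthetic examples.
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
```

The payoff is coverage: the model is graded against mistakes it might make, not just successes it has already seen.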

“Amazon Go is a really interesting experiment,” Davenport noted. “What people are saying will disappear next is the human at the point of sale.”

Blockchain not ready

While retail represents an area where rapid advances are being made in the use of AI to transform the industry, blockchain is another matter. Some use cases are just starting to emerge, but deploying AI in the decentralized transactions space poses significant challenges.

In June, the public blockchain Cortex announced that it had introduced AI into a crypto network for the first time at scale. The purpose was to generate credit reports and facilitate anti-fraud reporting, much as the traditional banking world has used AI in the past.

However, in the notoriously risk-averse financial industry, there is also concern over whether the blockchain is fully safe. Earlier this year the crypto exchange Coinbase Inc. fought off an attack in which hackers rewrote transaction histories, and a second exchange — Gate Technology Inc.’s Gate.io — suffered a $200,000 theft, which, in a remarkable turn of events, the cyberthief returned.

Users on a Japanese cryptocurrency exchange weren’t so lucky. They lost $32 million in virtual money last month.

“In principle, blockchain is more secure because it’s spread across a lot of different ledgers, but people keep hacking into bitcoin, so it makes you wonder,” Davenport said. “I think blockchain is going to take longer than we thought as well.”

Last year, Davenport was named one of the “Top Ten Voices in Technology” by LinkedIn. His perspective has been shaped by years of research and lectures on analytics and big-data movements over a lengthy career.

What he has seen evolve in the AI landscape shows promise in areas like flipping hamburgers, getting a quart of milk at the local store or processing back-office invoices. What’s still missing is the really big score.

“All of those things are quite feasible; they’re just not that exciting,” Davenport said. “What we’re not seeing are curing cancer, creating fully autonomous vehicles, the really aggressive moonshots.”

Here’s the complete video interview below, part of SiliconANGLE’s and theCUBE’s coverage of the MIT CDOIQ Symposium:

Photo: SiliconANGLE
