UPDATED 15:05 EST / NOVEMBER 07 2019

AI

It’s complicated: Examining our relationship with intelligent machines

Despite the growing use of artificial intelligence tools around the world, there is no universal code of ethics to govern the technology's use. Should there be one?

That’s a key question the technology industry is beginning to wrestle with as the use of AI generates results that aren’t always positive.

The technology has been used for positive outcomes in a number of areas, including improving Australia’s beaches, delivering reliable weather forecasts, and detecting human disease more accurately. But AI has also come under fire for injecting racial bias into criminal sentencing decisions and reinforcing gender discrimination. AI-powered facial recognition tools have been subjected to especially harsh criticism by privacy and human rights organizations.

In the black-and-white world of computer reasoning, there is an awful lot of gray.

“What is good use of AI, and what is bad use of AI?” asked Rajen Sheth (pictured, third from right), vice president of product management for Google Cloud AI. “There’s probably 10% of things that are really bad, and 10% of things that are really good, and 80% of things that are in that gray area where it’s up to your personal view. That’s the really tough part about all of this.”

Sheth spoke with John Furrier, host of SiliconANGLE Media’s mobile livestreaming studio theCUBE in Palo Alto, California, as part of an “Around theCUBE: Unpacking AI” panel discussion. He was joined by Kate Darling (second from right), a research specialist at the MIT Media Lab, and Barry O’Sullivan (far right), professor and director of the Science Foundation Ireland Centre for Research Training in Artificial Intelligence at University College Cork in Ireland.

They discussed the need to educate the public at large about AI, its potential impact on human jobs, the world’s evolving relationship with robots, and attempts to create principles around the technology’s future use (see the full interview with transcript here).

Need for understanding

Change through technology has been a part of life for decades, even centuries. The invention of electric light, development of the telephone, and rise of television are just a few examples of the profound changes created by tech innovation over time.

“Technology has always changed how we work and play and interact with each other; just look at the smartphone,” O’Sullivan noted. “One of the big challenges we have is how to educate the ordinary person on the street to understand what AI is and what it’s capable of, when it can be trusted, and when it cannot be trusted.”

That issue of trust and public perception has centered on whether the rise of intelligent machines will mean the loss of jobs. Darling, a leading expert in robot ethics, has devoted much of her research to anticipating the challenging questions raised by human-robot relationships.

“Will AI disrupt labor markets and change infrastructure’s inefficiencies?” Darling asked. “The answer to that is yes. But, will it be a one-to-one replacement of people? No.”

Robots as pets

While replacement of humans may not be in the immediate future, that does not preclude a growing relationship between humans and the machines that support them. Are we approaching a time when humans will manage robots, including household machines such as the Roomba, as pets?

“Yes, we are,” Darling said. “People will treat these technologies like they’re alive, even though they know that they’re just machines. People will name their Roomba vacuum cleaner and feel bad for it when it gets stuck.”

Concerns about the future impact of AI in society have led some companies to adopt guidelines or principles around the technology’s use. Google released its own AI principles in 2018, and these include accountability to people and the incorporation of privacy design features.

The company also stated it would not deploy technologies that cause overall harm, support weapons that cause harm, or gather information for surveillance in violation of international norms.

“We created a set of AI principles, and we codified what we think AI should do. And we codified areas that we would not go into as a company,” Sheth said. “What we now have is a process around how to take things that are coming in and figure out how to evaluate them.”

The panelists echoed a common theme about the use of AI and technology in general: there is a real need to manage the innovative tide engulfing the world, and nothing wrong with erring on the side of caution.

“We can’t crowdsource our sense of dignity,” O’Sullivan said. “We can’t have social media as the currency for how we value our lives or compare ourselves with others. We do have to be careful here.”

Here’s the complete discussion, one of many CUBE Conversations from SiliconANGLE and theCUBE. (* Note: Juniper Networks Inc. sponsored this segment of theCUBE. Neither Juniper nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Image: Pixabay
