It’s complicated: Examining our relationship with intelligent machines
Despite the growing use of artificial-intelligence tools worldwide, there is no universal code of ethics governing their use. Should there be one?
That’s a key question the technology industry is beginning to wrestle with as the use of AI generates results that aren’t always positive.
The technology has been used for positive outcomes in a number of areas, including improving Australia’s beaches, delivering reliable weather forecasts, and detecting human disease more accurately. But AI has also come under fire for injecting racial bias into criminal sentencing decisions and reinforcing gender discrimination. AI-powered facial recognition tools have been subjected to especially harsh criticism by privacy and human rights organizations.
In the black-and-white world of computer reasoning, there is an awful lot of gray.
“What is good use of AI, and what is bad use of AI?” asked Rajen Sheth (pictured, third from right), vice president of product management for Google Cloud AI. “There’s probably 10% of things that are really bad, and 10% of things that are really good, and 80% of things that are in that gray area where it’s up to your personal view. That’s the really tough part about all of this.”
Sheth spoke with John Furrier, host of SiliconANGLE Media’s mobile livestreaming studio theCUBE in Palo Alto, California, as part of an “Around theCUBE: Unpacking AI” panel discussion. He was joined by Kate Darling (second from right), a research specialist at MIT Media Lab, and Barry O’Sullivan (far right), professor and director of the Science Foundation Ireland Centre for Research Training in Artificial Intelligence at the University College Cork in Ireland.
They discussed the need for educating the public at large about AI, its potential impact on human jobs, the world’s evolving relationship with robots and attempts to create principles around the technology’s future use (see the full interview with transcript here).
Need for understanding
Change through technology has been a part of life for decades, even centuries. The invention of electric light, development of the telephone, and rise of television are just a few examples of the profound changes created by tech innovation over time.
“Technology has always changed how we work and play and interact with each other; just look at the smartphone,” O’Sullivan noted. “One of the big challenges we have is how to educate the ordinary person on the street to understand what AI is and what it’s capable of, when it can be trusted, and when it cannot be trusted.”
That issue of trust and public perception has centered on whether the rise of intelligent machines will mean a loss of jobs. Darling, a leading expert in robot ethics, has devoted much of her research to anticipating the challenging questions raised by human-robot relationships.
“Will AI disrupt labor markets and change infrastructure’s inefficiencies?” Darling asked. “The answer to that is yes. But, will it be a one-to-one replacement of people? No.”
Robots as pets
While replacement of humans may not be in the immediate future, that does not preclude a growing relationship between humans and the machines that support them. Are we approaching a time when humans will manage robots, including household machines such as the Roomba, as pets?
“Yes, we are,” Darling said. “People will treat these technologies like they’re alive, even though they know that they’re just machines. People will name their Roomba vacuum cleaner and feel bad for it when it gets stuck.”
Concerns about the future impact of AI on society have led some companies to adopt guidelines or principles around the technology’s use. Google released its own AI principles in 2018, which include accountability to people and the incorporation of privacy design features.
The company also stated it would not deploy technologies that cause overall harm, support weapons that cause harm, or gather information for surveillance in violation of international norms.
“We created a set of AI principles, and we codified what we think AI should do. And we codified areas that we would not go into as a company,” Sheth said. “What we now have is a process around how to take things that are coming in and figure out how to evaluate them.”
The panel participants all echoed a similar theme around the use of AI and technology in general. There is a real need to properly manage the innovative tide that is engulfing the world, and there’s nothing wrong with erring on the side of caution.
“We can’t crowdsource our sense of dignity,” O’Sullivan said. “We can’t have social media as the currency for how we value our lives or compare ourselves with others. We do have to be careful here.”
Here’s the complete discussion, one of many CUBE Conversations from SiliconANGLE and theCUBE. (* Note: Juniper Networks Inc. sponsored this segment of theCUBE. Neither Juniper nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)