UPDATED 18:30 EDT / FEBRUARY 12 2019

AI

Q&A: The ethical AI questions companies should be asking

As artificial intelligence becomes one of the defining technological advances of the 21st century, companies investing heavily in it face troubling ethical questions about its proper use. Dr. Rumman Chowdhury, global lead for responsible AI at Accenture Applied Intelligence, has made it her life’s work to talk to companies about the ethical and responsible use of AI.

Chowdhury (pictured) spoke with Jeff Frick, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the recent Accenture Technology Vision event in San Francisco. They discussed the ethical questions surrounding AI and how companies can think about the technology more deeply with safety in mind. Answers have been condensed for clarity.

I love that you introduce a lot of your talks with the fact that you’re not a technologist. You come at this from a very different point of view.

I do. I am a social scientist by background. I’ve been working as a data scientist in artificial intelligence for some years, but I’m not a computer scientist by trade. I come more from a stats background, which gives me a different perspective. So when I think of AI or data science, I literally think of it as information about people meant to understand trends in human behavior.

One issue is AI simply being a codification of existing biases, unless you really take a very proactive stance to make sure you’re not just codifying biases in software. What are you seeing?

Absolutely. We really have to think about two kinds of bias. There’s one that comes from our data, from our models. This can mean incomplete data, poorly trained models. But the second one to think about is you can have great data and a perfect model, but we come from an imperfect world. We know that the world is not a fair place. Some people just get a poor lot in life. We don’t want to codify that into our systems and processes, so as we think about ethics and AI, it’s not just about improving the technology; it’s about improving the society behind the technology.

Regarding the complaints about big tech’s use of AI: everyone is doing their own little piece, and over time those pieces get rolled into something bigger that wasn’t necessarily what anyone set out to build in the first place.

Absolutely. It’s something I call “moral outsourcing,” because we feel like a cog in a machine. As technologists, we sometimes feel that we don’t have to take responsibility for our actions, even though we should. If we build something that is fundamentally unethical, we need to stop and ask ourselves, “Just because we can doesn’t mean we should.”

And think about the implications on society. Right now, there’s often not enough accountability, because everybody feels like they’re contributing to this larger machine — “Who am I to question it?” and “The system will crush me anyway.” We need to empower people to be able to speak their minds and have an ethical conscience.

How receptive are companies to your message? Do they get it?

I’ll give you a phrase that everybody understands, and then they get the point of ethics in AI. “Brakes help a car go faster.” If we have the right kinds of guard rails, warning mechanisms, systems to tell us if something is going to derail or get out of control, we feel more comfortable taking risks. So think about driving on the freeway. Because you know you can stop your car if the car in front of you stops abruptly, you feel comfortable driving 90 miles an hour. If you could not stop your car, nobody would go faster than 15.

I actually think of ethics in AI, the ethical implementation of technology, as a way of helping companies be more innovative. It sounds contradictory, but it actually works very well. If I know where my safe space is, I’m more capable of making true innovations.

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Accenture Technology Vision 2019:

Photo: SiliconANGLE
