UPDATED 16:20 EDT / AUGUST 01 2019

AI

If your self-driving car had to choose whom to hit: Ethics questions stump AI

The public is caught in a messy love/hate relationship with big data and artificial intelligence. Exciting new technologies and data breaches in the news weigh heavily on both sides of the scale. As hard questions about ethics and the autonomy of software become more pressing, consensus remains elusive.

“The trouble with ethics issues is they don’t tend to have a nice clean answer,” said Stuart Madnick (pictured), professor of engineering systems at Massachusetts Institute of Technology. That’s because in AI there are few roses without thorns, and few payoffs without potential pitfalls. We all love the smart algorithms in applications that anticipate our next question, but those algorithms require massive data sets to train. And that’s just fine, the average consumer reckons, as long as it’s not my data.

Madnick spoke with Dave Vellante and Paul Gillin, co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the MIT CDOIQ Symposium in Cambridge. They discussed the hard ethical questions facing makers and consumers of AI (see the full interview with transcript here).

The worst AI except for all the others

The fact is that there is no big data or AI without a possible compromise in data privacy or sovereignty, according to Madnick. We’re still inexperienced in judging an acceptable ratio of cost to benefit.

“Almost every study we’ve done that has these kinds of [ethics] issues on it, and we had people vote, almost always it’s spread across the board, because any one of these is a bad decision,” Madnick said. “So which bad decision is the least bad?”

Besides privacy issues, questions about autonomous AI are getting hairier with the approach of self-driving cars, Madnick pointed out. At MIT, Madnick teaches students about technology ethics, and the subject of autonomous driving raises some of the hardest questions.

For example, utility theory holds that if a car must hit a person or people, it should hit the fewest people possible. Take a scenario in which an autonomous vehicle faces an unavoidable crash with three options: swerve into a wall, possibly killing the driver; hit a woman crossing the street with a baby carriage; or hit three men walking in a group. By the utilitarian count, the first choice would be to sacrifice the driver (one life), the second the woman and baby (two), and the last the three men (three).
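To make the utilitarian rule concrete, here is a minimal sketch in Python of the kind of ranking Madnick describes. It is purely illustrative, not code from any real vehicle: the `Outcome` record and `choose_outcome` function are hypothetical names invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A hypothetical crash outcome the planner could steer toward."""
    description: str
    casualties: int  # number of people likely to be killed or injured


def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """Pure utility theory: pick the outcome that harms the fewest people."""
    return min(outcomes, key=lambda o: o.casualties)


# The scenario from Madnick's class:
options = [
    Outcome("swerve into a wall, killing the driver", casualties=1),
    Outcome("hit the woman with the baby carriage", casualties=2),
    Outcome("hit the three men walking in a group", casualties=3),
]

print(choose_outcome(options).description)
# -> swerve into a wall, killing the driver
```

Note what the sketch leaves out: it scores outcomes on a single number, and every question the class actually wrestles with, such as whether the person who bought the car should be the first one sacrificed, never enters the computation.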

The class is typically not 100% happy with this arrangement, Madnick explained. Clearly, there are still a lot of ethical kinks to work out in AI.

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the MIT CDOIQ Symposium:

Photo: SiliconANGLE
