UPDATED 17:30 EDT / DECEMBER 28 2017

When AI goes rogue: Moral debates could kill the hype

Venture capitalists lavished $10.8 billion on artificial intelligence and machine learning technology companies in 2017, according to PitchBook Data Inc. They’ve placed major bets that AI innovation can’t advance far or fast enough to meet demand. But controversial use cases, such as algorithms that help decide the fate of criminal defendants, and the danger of coded-in bias suggest the technology has already outrun regulatory oversight.

“This technology’s coming at us so fast, we don’t have all the policies figured out,” said Beena Ammanath (pictured), global vice president of big data, artificial intelligence and new tech innovation at Hewlett Packard Enterprise Co.

While consumer, business and government users embrace AI software that makes their jobs and lives simpler, they are simultaneously tasked with building the guardrails around the tech. “And that is kind of causing that fear of friction,” Ammanath said in an interview during the HPE Discover EU event in Madrid, Spain. She spoke with Dave Vellante (@dvellante) and Peter Burris (@plburris), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. (* Disclosure below.)

This week, theCUBE spotlights Beena Ammanath in our Women in Tech feature.

“The fear exists because there is so much unknown,” Ammanath said. Recent advances in readily available compute, cheap storage and massive quantities of data have shot artificial intelligence to the top tier of tech trends. Not long ago, however, few believed AI would ever make it out of science fiction, at least not in their lifetimes. Ammanath recalls that during her own undergraduate and graduate years, no one wanted to take the artificial intelligence courses on offer. “It was considered this very futuristic thing, never going to happen. Self-driving cars, personalized ads — even that was considered so hypothetical,” she said.

Of course, those things are all around us today, but in truth, the rules and regulations around them are still not fully formed. Legislators are still hashing out the ethics of the consumer data parsing that feeds personalized ad algorithms. The General Data Protection Regulation in Europe, for instance, is drawing dissent over its enforceability and other details. And the safety of autonomous vehicles that use AI to navigate roads is still hotly debated.

“At the end of the day, we are building AI; we have the power to shape it the way we want,” Ammanath said. Hands-on human involvement is crucial to refining AI technology, she said. People can mindfully choose who tunes AI algorithms and how, as well as erect judicious policies around their use or misuse.

Machine intelligence on trial

Bringing domain experts into the fold when developing industry-specific AI tools should become standard practice, Ammanath believes. For example, an AI product geared toward legal use cases should not be finished and shipped without expert input from a lawyer. “The domain experts have to be involved — and today, that’s not happening,” she said.

The 2016 case of Eric Loomis sparked controversy over the use of algorithms in the justice system. A Wisconsin court sentenced Loomis to six years in prison for his role in a drive-by shooting. The sentence was informed in part by Northpointe Inc.’s COMPAS software. COMPAS uses algorithms to predict defendants’ likelihood of re-offending; judges use the scores to make sentencing and parole decisions.
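COMPAS itself is proprietary, but the general shape of such a risk-scoring tool is well understood: a statistical model trained on historical case features that outputs a probability of re-offending. The sketch below is a minimal, purely illustrative version in Python. The features, weights and data are all synthetic and hypothetical, and bear no relation to Northpointe’s actual model.

```python
# Illustrative only: a toy recidivism risk score in the general style
# of such tools, NOT Northpointe's proprietary COMPAS model. All
# features, weights and data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-defendant features
X = np.column_stack([
    rng.integers(0, 10, n),    # prior offenses
    rng.integers(18, 70, n),   # age
    rng.integers(0, 2, n),     # employed (0/1)
]).astype(float)

# Synthetic "re-offended within 2 years" labels, loosely tied to the features
logits = 0.4 * X[:, 0] - 0.03 * X[:, 1] - 0.5 * X[:, 2] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new defendant: probability of re-offending, bucketed into a 1-10 decile
defendant = np.array([[3, 27, 1]], dtype=float)
p = model.predict_proba(defendant)[0, 1]
print(f"risk probability: {p:.2f}, decile score: {min(10, int(p * 10) + 1)}")
```

A judge in this scenario would see only the final decile score, which is exactly the transparency question Loomis raised.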

Loomis challenged the sentence on the grounds that he could not examine the proprietary algorithm, which is owned by a private company. The Wisconsin Supreme Court ruled against him, reasoning that access to the algorithm’s output provided adequate transparency.
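The limits of output-only transparency can be made concrete. The following hypothetical sketch shows two scoring rules that happen to produce the identical score for one defendant while weighting the inputs in completely different ways; with access only to the output, there is no way to tell which logic drove the result.

```python
# Hypothetical illustration: two scoring rules that return the same
# score for this defendant while weighting the inputs very differently.
# Seeing only the output cannot distinguish them.
defendant = {"priors": 4, "age": 30}

def score_a(d):
    # Leans entirely on criminal history
    return 0.5 * d["priors"]

def score_b(d):
    # Leans entirely on age
    return (60 - d["age"]) / 15

print(score_a(defendant), score_b(defendant))  # both print 2.0
```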

U.S. Chief Justice John Roberts later said that algorithm-assisted decision-making such as that in the Loomis case was “putting a significant strain on how the judiciary goes about doing things.”

Some believe we should welcome the use of AI in courts and other realms where human emotion and bias can swing decisions to disastrous effect. “While humans rely on inherently biased personal experience to guide their judgments, empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI,” wrote Caleb Watney, technology policy associate at the R Street Institute, in a Brookings Institution post.

A policy simulation from the National Bureau of Economic Research showed that using machine learning in legal cases could cut crime by 24.8 percent with no increase in incarceration.

Whose AI is it anyway?

AI and machine learning algorithms are not inherently free of bias, however, Ammanath warned. There is a legitimate fear that the tech elite shaping AI will build their own biases into the models, which will in turn affect end users, she explained.

“It’s important that we be transparent about the training data that we are using and are looking for hidden biases in it; otherwise we are building biased systems,” said Google’s AI chief John Giannandrea recently, as quoted by MIT Technology Review. “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”
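What looking for hidden biases in training data can mean in practice: the snippet below is a minimal sketch, assuming a hypothetical labeled dataset with a demographic column, that compares positive-label rates across groups before any model is trained. A large gap does not prove bias on its own, but it is the kind of signal worth investigating before the data is used.

```python
# Minimal sketch of a training-data bias check. The dataset, column
# names and threshold are all hypothetical, for illustration only.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1,   1,   1,   0,   0,   0,   1,   0],
})

# Positive-label rate per demographic group in the training data
rates = train.groupby("group")["label"].mean()
print(rates)  # A: 0.75, B: 0.25

# Crude demographic-parity gap check before any model is trained
gap = rates.max() - rates.min()
if gap > 0.2:  # the threshold is an arbitrary, illustrative choice
    print(f"warning: label-rate gap of {gap:.2f} across groups; "
          "inspect the data before training on it")
```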

Rooting out bias throughout development should be a priority for technologists, according to Ammanath. “How do we truly democratize AI so that we get different viewpoints?” she asked.

Fei-Fei Li is chief scientist of artificial intelligence and machine learning for Google Cloud and director of the Stanford Artificial Intelligence Lab. She recently launched an educational nonprofit with Melinda Gates called AI4All.

AI4All reaches out to groups typically underrepresented in tech and introduces them to machine learning and AI development. The goal is to build a diverse pipeline into these fields through early training.

“AI is a technology that gets so close to everything we care about. It’s going to carry the values that matter to our lives, be it the ethics, the bias, the justice or the access,” Li told Wired earlier this year. “If we don’t have the representative technologists of humanity sitting at the table, the technology is inevitably not going to represent all of us.”

Initiatives like this, along with a necessary measure of government oversight, can keep AI from going rogue, according to Ammanath. “At the end of the day, AI is something that we own, and we should be able to build it with the right guardrails in place,” she concluded.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the HPE Discover EU event. (* Disclosure: TheCUBE is a paid media partner for the HPE Discover EU event. Neither Hewlett Packard Enterprise Co., the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
