UPDATED 14:03 EDT / FEBRUARY 21 2018

WOMEN IN TECH

Danger: High-voltage AI tech needs ethics and safety regulations, expert warns

Artificial intelligence is so loudly hyped by software vendors and the tech media nowadays that one would think its powers could make human brain cells obsolete. Venture capitalists poured $10.8 billion into artificial intelligence and machine learning technology companies in 2017, according to PitchBook Data Inc. But many thought leaders are saying that without human oversight and ethical constraints, AI could do serious damage to the individuals and businesses it’s supposed to help.

“What I like to say is, AI is not ready for solo flight,” said Dr. Shannon Vallor (pictured), professor of philosophy at Santa Clara University. That’s bad news for businesses eager to replace human labor with automation, but their haste could wind up costing them in the end, she stated.

“We’ve seen — over and over again — that that leads again and again to disaster and to huge reputational losses to companies, often huge legal liabilities,” Vallor said. For this reason, companies ought to move quickly to invest in human-AI collaboration and get on board with emerging ethical standards, she explained.

Vallor spoke with Jeff Frick, co-host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during an interview at the Accenture Technology Vision event in San Francisco. This week, theCUBE spotlights Dr. Shannon Vallor in our Women in Tech feature.

Data and AI drift into the danger zone

The questions surrounding ethical consumer data usage and artificial intelligence are heating up as these technologies seep deeper into our lives. In years past, technology played an ancillary role for most people. “Increasingly, they are the medium through which we live our lives,” Vallor said. “They’re the ways that we find the people we want to marry. They’re the ways that we access resources, capital, healthcare, knowledge. They’re the ways that we participate as citizens in a democracy.”

The manner in which companies are wielding AI and data-related technologies does not always respect this reality. “Data is like jet fuel,” Vallor said. “No one who handles jet fuel treats it the way that some companies treat data. But today data can cause disasters on a scale similar to a chemical explosion.”

Echoing this sentiment, a new report from experts at Oxford University, the Centre for the Study of Existential Risk and elsewhere warns that AI has become a “clear and present danger.” The report, “The Malicious Use of Artificial Intelligence,” details the ways in which AI could compromise digital, physical and political security. Part of the danger stems from the recent rapid progress in AI technologies’ efficiency, scalability and ease of diffusion.

“For many familiar attacks, we expect progress in AI to expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets,” the report stated. “In particular, the diffusion of efficient AI systems can increase the number of actors who can afford to carry out particular attacks.”

Consumers’ crisis of faith in technology

When a company blunders big time on something like data privacy or information handling, it hits consumers where it hurts. The Equifax breach and the fallout from Facebook’s political ads come to mind. The pocks left on companies’ reputations aren’t worth whatever benefits they may gain from fast-and-loose data or AI practices. “Those things don’t get forgotten in one news cycle,” Vallor said.

Cases like these have contributed to today’s low-trust atmosphere. In the past, people may have been wary of small, unknown businesses, but they believed in the Fortune 500 or blue-chip corporations. Now consumers have little faith in any of them, Vallor said.

“The good news is that there are a lot more conversations happening about technology and ethics within industry circles,” Vallor stated. Various organizations are getting together to think through ethics quandaries around technology, design and development. For instance, in 2016 the “big five” leaders in AI — including Google LLC and Amazon.com Inc. — formed the Partnership on AI to “benefit people and society”; it has since picked up dozens of other members. The AI Now Institute is also doing commendable work on the social implications of artificial intelligence, Vallor pointed out.

“This is a really groundbreaking movement that could potentially lead other industry participants to say, ‘Hey, we kind of have to get on board with this,'” she said.

These leaders must teach companies that AI is not a replacement for humans; the two do their best work together. Getting them to work together requires an investment in human skill-building. For instance, the handling of consumer data that goes into AI models can be loose at present, according to Vallor. “You have to make sure that the people handling it are properly trained, that they know what can go wrong, that they’ve got safety regimes in place,” she said.

There is no one-size-fits-all fix. Different industries and companies will require nuanced approaches. It’s more than worth the hassle, because irresponsibly developed or deployed AI can come back to bite companies and their customers. “People can die; lives can be ruined,” Vallor said. “And people can lose their life savings over a breach or a misuse of data that causes someone to be unjustly accused of fraud or a crime.”

Vallor is not glossing over the difficulty of the task, given the nascence and complexity of advanced big data and AI technology. “We’re never going to get a handle on all of it,” she said, adding that while it’s impossible to address every possible risk or forestall every possible disaster, the tech industry can do better than it’s doing now.

“I think there’s so much progress to be made that we don’t have to worry too much about the progress that we might never get around to making,” Vallor concluded.

Here’s the complete video interview, and there’s more SiliconANGLE and theCUBE coverage of the Accenture Technology Vision event.

Photo: SiliconANGLE
