UPDATED 17:33 EDT / MARCH 26 2019

AI

AI’s big challenge: how to engineer in social responsibility

The current discussion around artificial intelligence is beginning to resemble the process of buying a shiny new car. There is a great deal of time and energy devoted to haggling over the cost and options before anyone actually gets to drive the vehicle and finds out what it can do.

At the center of the debate is whether AI has the proper guardrails to ensure responsible future use, as concerns continue around issues such as gender or racial bias in AI-driven facial recognition and hiring algorithms.
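To make that concern concrete, one simple check practitioners use is demographic parity: comparing a model’s rate of favorable outcomes across groups. The Python sketch below is purely illustrative; the data, group labels and threshold are invented for the example and do not describe any system discussed at the conference.

```python
# Hypothetical sketch: quantifying hiring-algorithm bias with a
# demographic-parity check. All data below is made up for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, hired) tuples; returns hire rate per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

# Toy output of a hypothetical hiring model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# The "four-fifths rule" used in U.S. employment practice flags disparity
# when one group's selection rate falls below 80% of the highest group's.
worst, best = min(rates.values()), max(rates.values())
print("disparate impact ratio:", round(worst / best, 2))  # 0.33 -> flagged
```

Checks like this only surface a disparity; deciding whether it reflects unlawful or unethical bias is exactly the kind of judgment the conference speakers argued cannot be left to the code alone.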

These concerns were further highlighted over the course of two days this week at EmTech Digital 2019, MIT Technology Review’s annual conference held in San Francisco. Interspersed between sessions on AI chipsets and robotics were weighty topics such as “Ethics in AI,” “The Human Impact of AI,” and “AI for Good.”

“AI has so much potential and many risks,” Harry Shum, executive vice president of Microsoft Corp.’s Artificial Intelligence and Research Group, said during a presentation on Monday. “We need to engineer social responsibility into the very fabric of the technology.”

Avoiding a machine-fueled war

Engineering responsibility into AI has proved to be a thorny problem. Microsoft recently confronted it in its own work for the U.S. government.

In October, Microsoft President Brad Smith published a blog post that outlined the company’s philosophy toward supplying AI for use by the military. While Smith recognized that “no military in the world wants to wake up to discover that machines have started a war,” he also warned that people with the most knowledge about the technology should not abandon the conversation.

More recently, a group of Microsoft employees petitioned company executives to terminate a separate contract to supply augmented reality technology to the military. Microsoft Chief Executive Satya Nadella rebuffed the request.

“We absolutely will offer our technology to the U.S. government and the military,” Shum said when pressed on the firm’s position during a question-and-answer session at the conference Monday.

Pentagon locks up Google research

Supplying AI products to the military has been an issue for Google LLC as well. When it was revealed last year that the company was supplying its AI technology to the Pentagon for analyzing drone footage, Google responded to employee pressure by not renewing its military contract.

Stanford’s Fei-Fei Li and Technology Review’s Will Knight (Photo: Mark Albertson/SiliconANGLE)

But the controversy has not gone away. On Monday, news came to light that the Pentagon has withheld 5,000 pages of documents related to Google’s AI work, known as Project Maven, from disclosure under the Freedom of Information Act.

Speaking at the EmTech gathering on Tuesday, Kent Walker, senior vice president of global affairs at Google, emphasized that his company was engaged in significant internal discussion around the use of AI and its potential impact on society.

In December, Google released details of a formal review structure to make decisions around what it deemed to be the “appropriate” use of AI. “We thought it was important to establish rigorous internal reviews,” said Walker, who cited a decision to roll out technology for lip reading but not release facial recognition tools as illustrative of how the company was working through the AI debate. “It’s an example of the kinds of discussions we are having every day.”

New AI advisory group

Google is also interested in broadening the AI dialogue beyond its walls. Walker announced on Tuesday that the company is forming an external advisory council to aid in the future deployment of AI. “The next chapter is to build on the work we’re doing with key stakeholders around the world,” Walker said.

One of the key executives involved in Google’s AI efforts, Fei-Fei Li, departed in 2018 after the controversy over Project Maven. A veteran AI researcher, Li went on to become co-director of the Stanford Institute for Human-Centered Artificial Intelligence.

Despite leaving Google, Li has been unable to escape the controversy surrounding her profession. When the newly formed institute was launched on March 18, media reports noted that the 121 initial faculty members were predominantly white and male.

“This keeps me awake,” Li said Monday. “We do not have enough diversity and inclusion in this field.”

Progress in cars and construction

While ethics and diversity were prominent topics of discussion at the conference this week, there were also signs of progress on a few important AI fronts.

In the area of autonomous driving, Dmitri Dolgov, chief technology officer and vice president of engineering for Alphabet Inc.’s Waymo, reported that the company’s deployment of autonomous cars has been advancing rapidly. Waymo cars have logged 10 million miles on public roads in 25 cities, including Phoenix, where it has introduced a small-scale, autonomous ride-hailing service.

“Our cars don’t get tired, they don’t get distracted, they don’t text while driving, they don’t get drunk,” said Dolgov. “We’ve been adapting our system to use the most advanced algorithms. It’s not a matter of when, it’s not a matter of if, it’s a matter of how fast we can grow.”

AI is also making inroads in the construction industry. Autodesk Inc. CEO Andrew Anagnost described how his company uses AI to improve building design and construction by drawing on data from RFID tags, drone-based site monitoring and inspection checklists.

“Construction is a sloppy, poorly managed, low-precision process,” Anagnost said. “We collect mountains of data on construction sites now. If we can take that information and layer on actionable insights, we can make a major change in how people do things.”
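As a rough illustration of what “layering on actionable insights” over that data might look like, the Python sketch below normalizes records from hypothetical RFID, drone and checklist feeds and surfaces the ones that merit follow-up. Every field name and threshold here is an assumption made for the example, not a description of Autodesk’s actual pipeline.

```python
# Minimal sketch of the kind of pipeline Anagnost describes: pulling
# heterogeneous site data (RFID scans, drone observations, inspection
# checklists) into one view and flagging items that need attention.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SiteRecord:
    source: str       # "rfid", "drone" or "checklist"
    location: str     # e.g. a grid cell or room id
    status: str       # normalized status string
    confidence: float # upstream system's confidence in the reading

def actionable(records, min_confidence=0.8):
    """Return records worth a supervisor's attention: anything not 'ok'
    that the upstream system reported with high confidence."""
    return [r for r in records
            if r.status != "ok" and r.confidence >= min_confidence]

records = [
    SiteRecord("rfid", "B2-14", "material_missing", 0.95),
    SiteRecord("drone", "roof-east", "ok", 0.99),
    SiteRecord("checklist", "floor-3", "rework_needed", 0.60),  # low confidence
]

for r in actionable(records):
    print(f"[{r.source}] {r.location}: {r.status}")
# -> [rfid] B2-14: material_missing
```

The real value in such a system lies less in the filtering logic than in the normalization step: getting tag scans, drone imagery analysis and human checklists into a shared schema is what turns “mountains of data” into something a model can act on.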

Despite the concerns around responsible use of AI that dominated much of the conference, researchers seemed optimistic about the field’s potential.

“There’s a built-in tension to that notion of responsible innovation, responsible AI,” Google’s Walker acknowledged. “Yet AI has the potential to address some of the biggest challenges that we face.”

Image: geralt/Pixabay
