UPDATED 23:25 EDT / APRIL 17 2019

AI

Microsoft won’t share facial recognition technology with police, citing human rights concerns

While China doubles down on its efforts to track its citizens and concerns fester in the U.S. that a similar brand of Orwellian surveillance could take hold at home, Microsoft Corp. may have just alleviated some of those concerns.

Reuters reported Tuesday that Microsoft had been asked by law enforcement in California to share its facial recognition technology. The company declined, saying it feared the artificial intelligence could breach human rights.

It appears that police wanted the AI installed in cars and on officers' body cams, but according to the report, Microsoft said there was a good chance the software would be biased. The company added that because the technology had been trained mostly on images of white men, it could very well lead to a disproportionate number of women and minorities being taken in for questioning.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said at an event at Stanford University. “We said this technology is not your answer.” The conference’s topic was “human-centered artificial intelligence.”

Notwithstanding some of the populace fearing a future of omnipresent surveillance cameras, there have been many reports of facial recognition technology simply getting it wrong. The technology has been shown to produce false positives for people of color in particular, prompting some to accuse tech firms of writing "racist code."

If any firm has taken a beating for its willingness to export its sometimes-faulty facial recognition technology to authorities, it's Amazon.com Inc. It was revealed last year that the company was selling its Rekognition technology to law enforcement, prompting the American Civil Liberties Union and various civil-rights advocacy groups to accuse Amazon of helping to create a surveillance state in the U.S. Next month, company shareholders will have a chance to vote on banning facial recognition development and embracing government regulation, though the vote is likely to be largely symbolic.

Like many others in the tech industry, Smith has been outspoken in saying that there should be full transparency in the development of AI, that its shortcomings should be well documented and that ethics should always take precedence over the bottom line.

Nonetheless, Smith did admit that Microsoft had supplied the prison system with the technology, but he said this was only because the environment was limited and the company believed it could help reduce violence in prisons.

Smith said that developing facial recognition and similar AI without regulation and without human rights in mind was a frivolous game, adding that winning such a race would only mean winning a "race to the bottom."

Image: Justin Pickard/Flickr
