UPDATED 19:52 EDT / JUNE 21 2022

Microsoft restricts access to controversial AI facial recognition technology

Microsoft Corp. says it will phase out access to a number of its artificial intelligence-powered facial recognition tools, including a service designed to identify people’s emotions from videos and images.

The company announced the decision today as it published a 27-page “Responsible AI Standard” that explains its goals for equitable and trustworthy AI. To meet those standards, Microsoft is limiting access to the facial recognition tools available through its Azure Face API, Computer Vision and Video Indexer services.

New users will no longer have access to those features, while existing customers will have to stop using them by June 30, 2023, Microsoft said.

Facial recognition technology has become a major concern for civil rights and privacy groups. Studies have repeatedly shown that the technology is far from perfect, misidentifying women and people with darker skin at disproportionate rates. Those errors can carry serious consequences when AI is used to identify criminal suspects and in other surveillance situations.

In particular, the use of AI tools that can detect a person’s emotions has become especially controversial. Earlier this year, when Zoom Video Communications Inc. announced it was considering adding “emotion AI” features, the privacy group Fight for the Future responded by launching a campaign urging it not to do so, over concerns the tech could be misused.

The controversy around facial recognition has been taken seriously by tech firms, with both Amazon Web Services Inc. and Facebook’s parent company Meta Platforms Inc. scaling back their use of such tools.

In a blog post, Microsoft’s chief responsible AI officer Natasha Crampton said the company has recognized that for AI systems to be trustworthy, they must be appropriate solutions for the problems they’re designed to solve. Emotion and attribute recognition have been deemed to fail that test, and Microsoft will retire Azure services that infer “emotional states and identity attributes such as gender, age, smiles, facial hair, hair and makeup,” Crampton said.

“The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems,” she continued. “[Our laws] have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act.”
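For context, the retired classifiers correspond to the optional attributes a Face API detection request could ask for. Below is a minimal sketch, assuming the Python azure-cognitiveservices-vision-face SDK; the endpoint, key and image URL are placeholders, not values from the announcement:

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Hypothetical endpoint and key -- substitute your own Azure resource values.
client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Request the classifiers Microsoft is retiring: emotion plus identity
# attributes such as gender, age, smile, facial hair, hair and makeup.
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # hypothetical image URL
    return_face_attributes=[
        "emotion", "gender", "age", "smile", "facialHair", "hair", "makeup",
    ],
)

for face in faces:
    attrs = face.face_attributes
    # Each attribute is a typed object, e.g. attrs.emotion.happiness in [0, 1].
    print(attrs.age, attrs.gender, attrs.emotion.happiness)
```

Requests of this kind are what will stop working as Microsoft withdraws access under the new standard.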

Analysts were divided on whether Microsoft’s decision is a good one. Charles King of Pund-IT Inc. told SiliconANGLE that in addition to the controversy, AI profiling tools often don’t work as well as intended and seldom deliver the results claimed by their creators. “It’s also important to note that with people of color, including refugees seeking better lives, coming under attack in so many places, the possibility of profiling tools being misused is very high,” King added. “So I believe Microsoft’s decision to restrict their use makes eminent sense.”

However, Rob Enderle of the Enderle Group said it was disappointing to see Microsoft back away from facial recognition, given that such tools have come a long way from the early days when many mistakes were made. He said the negative publicity around facial recognition has forced big companies to stay away from the space.

“[AI-based facial recognition tools are] too valuable for catching criminals, terrorists and spies, so it’s not like government agencies will stop using them,” Enderle said. “However, with Microsoft stepping back, it means they’ll end up using tools from specialist defense companies or foreign providers that likely won’t work as well and lack the same kinds of controls. The genie is out of the bottle on this one; efforts to kill facial recognition will only make it less likely that society benefits from it.”

Microsoft said its responsible AI standards don’t stop at facial recognition. It will also apply them to Azure AI’s Custom Neural Voice, a text-to-speech service that can generate synthetic voices, and to the speech-to-text technology that powers its transcription tools. The company explained that it took steps to improve the speech-to-text software in light of a March 2020 study that found higher error rates when it was used by African American and Black communities.
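For reference, transcription against the speech-to-text service, the software whose error rates the study measured, comes down to a call like the one below. This is a minimal sketch, assuming the Python azure-cognitiveservices-speech SDK; the key, region and audio file name are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Hypothetical credentials -- substitute your own Azure Speech resource values.
speech_config = speechsdk.SpeechConfig(
    subscription="<your-key>", region="<your-region>"
)
audio_config = speechsdk.audio.AudioConfig(filename="recording.wav")  # placeholder file

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)

# Transcribe a single utterance and print the recognized text.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```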

Image: Macrovector/Freepik
