UPDATED 21:28 EDT / MAY 19 2021

APPS

Following an investigation, Twitter admits its image cropping algorithm is biased

Twitter Inc. today revealed the results of an internal investigation into its image cropping algorithm, following accusations that it’s biased.

In April this year, the company announced that it would analyze its machine learning algorithms to determine whether they may cause unintentional harm, an effort it called the “Responsible Machine Learning Initiative.”

Part of the initiative was to address concerns that its image cropping algorithm chose white faces over darker-skinned faces. The algorithm crops images so that only the most important parts remain in the preview, thereby freeing up space on the platform. It did just that, but reports began appearing that it seemed to choose white faces over black ones.
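For readers curious how this style of cropping works in principle, here is a minimal sketch, assuming a generic per-pixel saliency map. The predict_saliency placeholder is hypothetical and stands in for Twitter’s actual trained model; only the crop-around-the-most-salient-point idea is taken from the article.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned saliency model.

    Returns a per-pixel 'importance' map with the same height and width
    as the input image. Twitter's real model is a trained neural network;
    a brightness map is used here only so the cropping logic is runnable.
    """
    gray = image.mean(axis=2)
    return gray / (gray.max() + 1e-8)

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a (crop_h, crop_w) window centered on the most salient pixel,
    clamped so the window stays inside the image bounds."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a random 600x800 RGB image down to a 300x300 preview.
img = np.random.randint(0, 256, size=(600, 800, 3), dtype=np.uint8)
print(saliency_crop(img, 300, 300).shape)  # (300, 300, 3)
```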

Twitter’s tests indeed found this to be the case. In comparisons of black and white women, the algorithm favored white women 7% more of the time; in comparisons of black and white men, it favored white men 2% more. Overall, there was a 4% difference in favor of white people over black people. The algorithm also favored women in general, cropping out men 8% more often.
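As a purely illustrative aside, a pairwise comparison of this kind can be tallied in a few lines. The sketch below assumes a simple list recording which group’s face each crop centered on; it is not Twitter’s published methodology.

```python
from collections import Counter

def parity_gap(crop_choices: list[str], group_a: str, group_b: str) -> float:
    """Return the percentage-point gap in favor of group_a, given which
    group's face the crop centered on in each paired trial.

    Hypothetical illustration of a pairwise demographic comparison,
    not Twitter's actual evaluation code.
    """
    counts = Counter(crop_choices)
    total = counts[group_a] + counts[group_b]
    return 100.0 * (counts[group_a] - counts[group_b]) / total

# Example: 535 of 1,000 paired trials favored group A -> 7.0 point gap.
choices = ["A"] * 535 + ["B"] * 465
print(parity_gap(choices, "A", "B"))  # 7.0
```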

Twitter’s technology was also accused of keeping in the frame parts of women’s bodies that might titillate certain viewers, such as their legs and breasts. Twitter said, “We didn’t find evidence of objectification bias — in other words, our algorithm did not crop images of men or women on areas other than their faces at a significant rate.” It added that the perceived bias may have come about because the algorithm sometimes cropped out a woman’s head in preference for things such as a number on a sports jersey.

“Even if the saliency algorithm were adjusted to reflect perfect equality across race and gender subgroups, we’re concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform,” said Twitter.

The upshot is that Twitter has decided that in some instances algorithms are best not used and that people should make certain decisions themselves. From now on, when users post images on Android and iOS, they will see a true preview of the image, that is, exactly what others will see. If the algorithm has chosen to crop the image, the user will see how it will be cropped and can make adjustments if necessary.

Photo: Alexander Shatov/Unsplash
