UPDATED 22:58 EDT / APRIL 14 2021

POLICY

Twitter to study its algorithms for unintentional harms

Twitter Inc. announced today that it will study its machine learning algorithms in an effort to ascertain whether they cause unintentional harm.

The company said that over the next few months, as part of its “Responsible Machine Learning Initiative,” it will examine those algorithms for racial and gender bias.

Last year the company was criticized for racial bias after its image-cropping algorithm seemed to choose white faces over Black faces. Twitter said the algorithm had been tested for bias before it shipped and no problems were found at the time, but admitted that further testing needed to be done.

The algorithm is designed to crop images down to their most salient parts, removing unnecessary elements and saving space on the platform. Users’ own experiments with the algorithm later appeared to show darker-skinned people being cropped out more often.
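For readers unfamiliar with saliency cropping, the sketch below shows the general idea using the classical spectral-residual method from OpenCV’s contrib module. Twitter’s production system was a learned neural saliency model, so this is an illustrative approximation rather than the company’s actual code, and the file name and output dimensions are arbitrary assumptions.

```python
import cv2  # requires opencv-contrib-python for the saliency module

def saliency_crop(image, out_w=600, out_h=335):
    """Crop an out_w x out_h window centered on the most salient point."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # Coordinates of the single most salient pixel (x, y).
    _, _, _, (x, y) = cv2.minMaxLoc(sal_map)
    h, w = image.shape[:2]
    # Center the crop on that point, clamped to the image bounds.
    left = min(max(x - out_w // 2, 0), max(w - out_w, 0))
    top = min(max(y - out_h // 2, 0), max(h - out_h, 0))
    return image[top:top + out_h, left:left + out_w]

image = cv2.imread("photo.jpg")  # hypothetical input file
if image is not None:
    cv2.imwrite("cropped.jpg", saliency_crop(image))
```

Bias concerns arise exactly here: whatever the saliency model scores highest determines who stays in the frame, so systematic differences in those scores across skin tones translate directly into who gets cropped out.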

“It’s clear from these examples that we’ve got more analysis to do,” the company said in response. Twitter’s algorithm has also been accused in the past of favoring men’s posts over women’s.

“Responsible technological use includes studying the effects it can have over time,” Twitter said in a blog post today. “When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product.”

To ensure fairness, Twitter said, it will analyze the algorithms internally, employing engineers, researchers and data scientists across the company dedicated to machine learning and “ethics, transparency and accountability.” Those people will study three main areas:

  • A gender and racial bias analysis of our image cropping (saliency) algorithm.
  • A fairness assessment of Twitter’s home timeline recommendations across racial subgroups (one simple form such an assessment can take is sketched after this list).
  • An analysis of content recommendations for different political ideologies across seven countries.
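By way of illustration, one common form a subgroup fairness assessment can take is a demographic-parity check: measure how often each subgroup’s timeline is filled with algorithmic recommendations and compute the gap between groups. Twitter has not published its methodology, so the metric, data layout, and names below are assumptions, not the company’s actual approach.

```python
from collections import defaultdict

def exposure_rates(impressions):
    """impressions: iterable of (subgroup, was_algorithmic) pairs.
    Returns the share of each subgroup's impressions that came
    from algorithmic recommendations."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for subgroup, was_algorithmic in impressions:
        total[subgroup] += 1
        shown[subgroup] += int(was_algorithmic)
    return {g: shown[g] / total[g] for g in total}

def parity_gap(rates):
    """Largest pairwise difference in exposure rates;
    0.0 means all subgroups see recommendations at the same rate."""
    return max(rates.values()) - min(rates.values())

# Toy data: each tuple is one hypothetical timeline impression.
sample = [("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", True)]
rates = exposure_rates(sample)
print(rates, "gap:", round(parity_gap(rates), 2))
```

A real audit would use richer metrics than a single rate gap, but the structure is the same: slice the system’s outputs by subgroup and compare.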

Twitter said that once the analyses are finished, if bias is found, the algorithms in question could be removed. Another possible outcome is what Twitter called “algorithmic choice,” which would give people the opportunity to shape what they see on the platform. “We’re currently in the early stages of exploring this and will share more soon,” the company said.

The move comes at a time when social media companies are under intense pressure over algorithms that maximize engagement but may also deepen divisions in society. In response to that criticism, Facebook Inc. last month revealed that it, too, would give people more control over what they see in their news feed.

Photo: Esther Vargas/Flickr
