UPDATED 00:00 EDT / DECEMBER 22 2016

EMERGING TECH

Google researchers develop a test for machine learning bias

A team of researchers at Google Inc. has developed a method for testing whether machine learning algorithms introduce bias, such as gender or racial bias, into their decision-making processes.

For some time, concerns have been raised about the possibility that machine learning algorithms are injecting bias into applications such as advertising, credit, education, employment and justice. Recent examples include a crime prediction algorithm that targeted black neighborhoods and an online advertising platform that was found to show highly paid executive jobs to men more often than women.

“Decisions based on machine learning can be both incredibly useful and have a profound impact on our lives,” said Moritz Hardt, a senior research scientist at Google, who co-authored the paper, “Equality of Opportunity in Supervised Learning.” “Despite the demand, a vetted methodology for avoiding discrimination against protected attributes in machine learning is lacking.”

Hardt said a “naive approach” might be to require the algorithm to ignore all protected attributes such as race, color, religion, gender, disability or family status. “However, this idea of ‘fairness through unawareness’ is ineffective due to the existence of redundant encodings, ways of predicting protected attributes from other features,” he wrote.
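To make the idea of redundant encodings concrete, here is a minimal sketch, not taken from the paper and using an entirely hypothetical toy dataset: even after the protected attribute is dropped, a simple classifier can often recover it from a correlated proxy feature, such as a neighborhood code.

```python
# Minimal sketch (not from the paper) of a "redundant encoding":
# the protected attribute is excluded from the features, yet it can
# still be predicted from a correlated proxy, so "fairness through
# unawareness" does not actually hide it from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: a binary protected attribute and a proxy feature
# (e.g., a neighborhood code) strongly correlated with it.
protected = rng.integers(0, 2, size=n)
proxy = protected + rng.normal(0, 0.3, size=n)   # correlated proxy
unrelated = rng.normal(0, 1, size=n)             # independent feature

X = np.column_stack([proxy, unrelated])          # protected attribute excluded
X_train, X_test, a_train, a_test = train_test_split(X, protected, random_state=0)

clf = LogisticRegression().fit(X_train, a_train)
print("Accuracy recovering the protected attribute:",
      clf.score(X_test, a_test))                 # well above chance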

The test devised by Hardt and his colleagues is aimed at predictive machine learning programs that forecast outcomes by ingesting massive quantities of data. The difficulty with such systems is that the decision rules are learned from the data rather than written by humans, so the logic behind a given decision isn’t always clear, even to the people who built the algorithm.

Hardt and his team created a simple test that analyzes the data going into the algorithm and the decisions it makes based on that data. The core idea, which the paper calls equality of opportunity, is that among people who actually qualify for a favorable outcome, the algorithm’s decision should not depend on, or reveal anything about, the person’s gender or race.

One example the researchers offered: If men were found to be twice as likely to default on a bank loan as women, and an algorithm calculated that the best approach was to reject all applications from men and accept all applications from women, this would be considered inappropriate discrimination, because men who would have repaid the loan are turned away solely on the basis of gender.
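The check itself can be expressed in a few lines of code. The sketch below is a rough illustration rather than Google’s implementation, using made-up loan data: it measures, for each group, how often applicants who would actually repay are approved, and shows that the blanket rule described above fails the test.

```python
# Rough sketch (not Google's code) of an equality-of-opportunity check:
# among applicants who actually repay (y_true == 1), the approval rate
# should be roughly the same for every group.
import numpy as np

def true_positive_rate_by_group(y_true, y_pred, group):
    """Approval rate among actual repayers, computed per group."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        rates[int(g)] = float(y_pred[mask].mean())
    return rates

# Toy loan data: group 0 = men, group 1 = women (hypothetical labels).
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
# Men default twice as often in this toy setup (40% vs. 20%),
# but most applicants in both groups still repay.
repays = (rng.random(1000) > np.where(group == 0, 0.4, 0.2)).astype(int)

# The blunt rule from the example: reject all men, accept all women.
approve = (group == 1).astype(int)

print(true_positive_rate_by_group(repays, approve, group))
# Qualified men are never approved (rate 0.0) while qualified women
# always are (rate 1.0), so the rule violates equality of opportunity.
```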

More details of the approach can be found in the paper, “Equality of Opportunity in Supervised Learning.”

