As Facebook faces new criticism, it starts to rate its own users’ reputations
Facebook Inc. has come under fresh fire after a recent study from the University of Warwick revealed that where the platform is used more heavily, there are more hate crimes.
As reported by The New York Times on Tuesday, the study looked at 3,335 anti-refugee attacks in Germany over a two-year period. Researchers scrutinized each community where the attacks took place, looking at factors such as economic status, demographics, newspaper sales, far-right support, and how many protests and hate crimes had previously occurred there.
The results pointed the same way in every case: whether the community was affluent or poor, mainly liberal or aligned with the far right, the more Facebook was used, the more attacks seemed to happen.
“Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent,” wrote the Times. The researchers said Facebook, used as a digital nexus to inflame hatred, was responsible for a 10th of all anti-refugee violence.
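For readers unfamiliar with the statistical phrasing, a short sketch may help unpack what “one standard deviation above the national average” means. The usage figures below are invented purely for illustration and have nothing to do with the study’s actual data.

```python
# Illustrative only: the usage numbers are invented, not the Warwick
# study's data. This just unpacks the statistical phrasing.
import statistics

# Hypothetical per-person Facebook usage index for a handful of communities
usage = [0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2]

mean = statistics.mean(usage)  # the "national average"
sd = statistics.stdev(usage)   # one standard deviation

threshold = mean + sd
print(f"average={mean:.2f}, sd={sd:.2f}, threshold={threshold:.2f}")

# Communities at or above this threshold are the ones where the study
# reported roughly 50 percent more anti-refugee attacks.
heavy_use = [u for u in usage if u >= threshold]
print("communities above one standard deviation:", heavy_use)
```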
Hate crimes, or possibly just feelings of hate, may at times be inspired by exaggerated news or outright lies that often appear on social media platforms. Facebook has already taken a number of steps to reduce the spread of such news and to avoid becoming what the British researchers called a “propagation mechanism” for hate speech.
One of those steps, revealed Tuesday by The Washington Post, is to rate users on their trustworthiness. In an interview, Tessa Lyons, the product manager in charge of assessing Facebook users’ reputations, said users were being rated as either trustworthy or not. It seems there’s no middle ground.
Lyons explained that when “fake news” is reported by a number of users, the story is then scrutinized by Facebook. If a user often flags stories as containing erroneous facts and Facebook finds those flags to be accurate, that user gains a trust rating. If the flags prove inaccurate, the user is deemed untrustworthy.
“People often report things that they just disagree with,” explained Lyons, which is why users were being assessed for their credibility. A trust rating may strike some users as invasive, but in a later email to Gizmodo, Facebook clarified what it was doing in a way that may put them more at ease.
“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong,” said Facebook. “What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
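Neither Lyons nor Facebook has published how the flag-accuracy check actually works, but the process described above suggests something like the following minimal sketch. Every class name, threshold, and data structure here is an assumption for illustration, not Facebook’s implementation.

```python
# A minimal, hypothetical sketch of a flag-accuracy signal, based only on
# the process described above. The names, thresholds, and the binary
# trusted/untrusted outcome are assumptions, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Flagger:
    confirmed: int = 0  # flags where fact-checking agreed the story was false
    rejected: int = 0   # flags where the story turned out to be accurate

    def record_flag(self, story_was_false: bool) -> None:
        """Update the tally after a flagged story is fact-checked."""
        if story_was_false:
            self.confirmed += 1
        else:
            self.rejected += 1

    def accuracy(self) -> float:
        """Fraction of this user's flags that fact-checking confirmed."""
        total = self.confirmed + self.rejected
        return self.confirmed / total if total else 0.0

    def is_trusted(self, threshold: float = 0.5, min_flags: int = 5) -> bool:
        # Binary outcome, echoing the report that users are rated either
        # trustworthy or not, with no middle ground.
        total = self.confirmed + self.rejected
        return total >= min_flags and self.accuracy() >= threshold

# A user whose flags usually check out earns trust; one who merely flags
# stories they disagree with does not.
careful = Flagger()
for outcome in (True, True, False, True, True):
    careful.record_flag(story_was_false=outcome)
print(careful.accuracy(), careful.is_trusted())  # 0.8 True
```

Consistent with Facebook’s statement, a signal like this would not be a centralized reputation score: it measures only how reliably a user’s “fake news” flags hold up, as a guard against people gaming the reporting system.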
Image: Thought Catalog via Flickr