Twitter announces new ‘Safety Mode’ to combat harassment
Twitter Inc. announced today that it's testing a new safety feature that should cut down on trolling, something for which the platform remains notorious.
“Safety Mode” can be activated by users in the settings menu, after which Twitter’s algorithm will attempt to spot harmful language or repetitive replies and mentions. If the troll-seeking technology flags an account, that account will be blocked from interacting with the user for seven days.
Twitter acknowledged that at times the function is bound to block someone for no good reason. When that happens, users will be able to see what happened and undo the autoblock for that account. The technology is a work in progress that Twitter says should improve over time.
Notably, autoblocked accounts will not be notified that they’ve been blocked. But if they visit the blocker’s profile, they’ll see that the person is currently using Safety Mode, along with a message saying they’ve been autoblocked.
“We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations,” Twitter said in a blog post. “Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks.”
For years, Twitter has been known as a go-to hub for trolls, and although the company has tried to address the issue, the task has proved Sisyphean. That was quite evident after some of the England soccer team’s Black players took abuse from persistent offenders when the team lost the European Championship final. The matter became a cause célèbre, putting more pressure on Twitter to make changes.
Twitter said that during the testing stage the feature will be available to only about 1,000 users, all of them English-language users. If the results from this small feedback group are positive, the feature should roll out gradually to more English-language users. Twitter said the first users of the feature will mostly be members of marginalized communities and female journalists.
Users may well know when they’re about to send something harmful, since Twitter also recently introduced a feature that warns them their words could be “harmful or offensive.” They then have the option to send the message anyway, revise it or delete it. In tandem with the new Safety Mode, perhaps the days of pervasive trolling on the platform are finally coming to an end.
Photo: Joshua Hoehne/Unsplash