UPDATED 16:41 EDT / DECEMBER 12 2016

EMERGING TECH

Amazon’s Twitch chat gets nicer with AI-powered moderation

Amazon.com Inc.-owned livestreaming platform Twitch announced today that it is introducing AutoMod, a new moderation tool that combines machine learning and natural language processing to make Twitch chat a little nicer.

Like most online communities, Twitch deals with its share of toxic users. Popular streamers often appoint moderators from their communities to deal with livestream hecklers, but with chat channels that sometimes include tens of thousands of users, keeping up has become all but impossible for humans alone. Twitch hopes AutoMod will change that.

AutoMod works as an assistant for human moderators rather than a replacement. When a user submits a chat message that AutoMod flags as inappropriate, the sender is automatically told that the message must be reviewed first, and the message is placed in a queue for the channel's moderators to approve or reject. This ensures that toxic messages do not sit in chat waiting to be caught by a moderator, but also that they are not deleted automatically without a human check.
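For readers curious how such a hold-and-review flow fits together, here is a minimal sketch in Python. All class and function names are hypothetical, Twitch has not published AutoMod's implementation, and the keyword check merely stands in for its machine learning classifier.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ChatMessage:
    user: str
    text: str

@dataclass
class ModerationQueue:
    """Hypothetical hold-and-review queue mirroring the flow described above."""
    is_flagged: Callable[[str], bool]           # stand-in for AutoMod's classifier
    pending: List[ChatMessage] = field(default_factory=list)

    def submit(self, message: ChatMessage) -> str:
        if self.is_flagged(message.text):
            # Flagged messages never reach chat until a human approves them.
            self.pending.append(message)
            return "held: your message is awaiting moderator review"
        return "published"

    def review(self, message: ChatMessage, approve: bool) -> None:
        # A channel moderator makes the final call; the tool only queues.
        self.pending.remove(message)
        if approve:
            print(f"{message.user}: {message.text}")

# Toy keyword check standing in for the real ML/NLP model.
queue = ModerationQueue(is_flagged=lambda text: "jerk" in text.lower())
print(queue.submit(ChatMessage("viewer42", "Great play!")))    # published
print(queue.submit(ChatMessage("heckler", "You're a jerk")))   # held for review
```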

According to Twitch, streamers can customize what sorts of messages AutoMod will filter based on four categories:

  • Identity language – Words referring to race, religion, gender, orientation, disability, or similar. Hate speech falls under this category.
  • Sexually explicit language – Words or phrases referring to sexual acts, sexual content, and body parts.
  • Aggressive language – Hostility toward other people, often associated with bullying.
  • Profanity – Expletives, curse words, and vulgarity. This filter especially helps those who wish to keep their community family-friendly.

AutoMod also allows streamers to control how strict their message filtering is with four different rulesets. The default Rule 1 setting filters out harsher identity language only, Rule 2 includes some filtering of sexual and aggressive language, Rule 3 filters even more sexual and aggressive language, and Rule 4 includes filtering of profanity and stricter filters for the other categories.
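The mapping from rulesets to categories can be pictured as a simple configuration table. The Python sketch below is purely illustrative: the levels and categories come from Twitch's description above, but the structure and the strictness labels are assumptions, not a published API.

```python
# Hypothetical mapping of AutoMod rulesets to the categories they filter.
# Strictness labels are illustrative; Twitch has not published thresholds.
RULESETS = {
    1: {"identity": "harsher terms only"},
    2: {"identity": "harsher terms only", "sexual": "some", "aggressive": "some"},
    3: {"identity": "strict", "sexual": "more", "aggressive": "more"},
    4: {"identity": "strict", "sexual": "strict", "aggressive": "strict",
        "profanity": "on"},
}

def categories_filtered(rule_level):
    """Return which categories a given ruleset filters."""
    return sorted(RULESETS[rule_level])

print(categories_filtered(1))  # ['identity']
print(categories_filtered(4))  # ['aggressive', 'identity', 'profanity', 'sexual']
```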

According to Twitch, AutoMod will only get better over time: it will continue to gather data on the messages that moderators approve and reject using the tool. That data will then be fed back into AutoMod’s machine learning algorithms, reducing the chance that it flags legitimate messages by mistake while also helping it catch harmful messages that might otherwise have slipped under the radar.
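In other words, moderator decisions become labeled training data. The sketch below shows one generic way such a feedback loop could be wired up with an off-the-shelf text classifier; the library choice (scikit-learn) and every name in it are assumptions for illustration, not Twitch's actual pipeline.

```python
# Minimal sketch of the feedback loop: moderator decisions become labeled
# examples, and the classifier is periodically retrained on them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = []  # (message_text, 1 if a moderator rejected it, else 0)

def record_decision(text: str, rejected: bool) -> None:
    """Store a moderator's approve/reject decision as a training example."""
    labeled.append((text, int(rejected)))

def retrain():
    """Fit a fresh text classifier on everything moderators have labeled."""
    texts, labels = zip(*labeled)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model

record_decision("good game everyone", rejected=False)
record_decision("you are all idiots", rejected=True)
record_decision("nice stream today", rejected=False)
record_decision("get lost loser", rejected=True)

model = retrain()
print(model.predict(["what a great stream"]))  # expect [0], i.e. not flagged
```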

Twitch notes that AutoMod is an opt-in setting, and the feature will not make any decisions on whether to ban or mute users in chat. Streamers and their moderators will remain ultimately responsible for deciding how to deal with all messages and toxic users.

For the moment, AutoMod filters only English-language messages, but Twitch says it will add other languages as it continues to develop the tool.

Image courtesy of Twitch Interactive
