New York Times moderates comments using Google’s AI
In a perfect world, comment sections on news sites would be a great place for rational discussion of what is happening in the world. Unfortunately, we do not live in a perfect world, and bitter disagreement and name-calling lead many sites to lock comments entirely rather than deal with them.
The New York Times is looking to solve that problem with artificial intelligence. Today, the Times is rolling out a new AI comment moderator built using Perspective, an AI application programming interface developed by Jigsaw, a think tank spun out of Google Inc. parent Alphabet Inc. According to the Times, the new AI, simply called “Moderator,” will allow the site to open up comments on more articles.
The Times said in a statement that because manually moderating comments is labor-intensive, it previously allowed comments on only about 10 percent of its articles. With the launch of Moderator, the publication says it will be able to allow comments on 25 percent of its articles, and it hopes to eventually expand that to 80 percent or more.
Rather than automating the entire moderation process, Moderator uses machine learning to help the Times’ human employees moderate more comments in less time. Built on Google’s Conversation AI technology, Perspective rates comments on how likely they are to be considered toxic, weighing a wide range of factors in each comment, including profanity, racial slurs and other inflammatory language. In addition to the Times, Perspective is used for moderation on sites such as Wikipedia and The Guardian.
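To make the scoring concrete, here is a minimal sketch in Python of how a publisher might query Perspective for a toxicity score. It is based on Jigsaw’s publicly documented Comment Analyzer endpoint, but the API key is a placeholder, the “requests” library is assumed, and the 0.8 review threshold is an illustrative choice rather than anything the Times has disclosed.

    import requests

    # Perspective's Comment Analyzer endpoint; the key is a placeholder
    # that would come from the Google API Console.
    API_KEY = "YOUR_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    def toxicity_score(comment_text: str) -> float:
        """Return Perspective's 0.0-1.0 estimate of how likely the
        comment is to be perceived as toxic."""
        payload = {
            "comment": {"text": comment_text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=payload)
        response.raise_for_status()
        body = response.json()
        return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # A human moderator would review only comments above some threshold,
    # letting clearly benign ones through; 0.8 here is purely illustrative.
    if __name__ == "__main__":
        score = toxicity_score("You are a wonderful person.")
        print(f"toxicity: {score:.2f}")
        if score > 0.8:
            print("flag for human review")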
AI moderation has become a growing trend in online communities, which until recently have had to rely on human moderators to keep tabs on discussions. For example, Amazon’s Twitch uses a similar tool called AutoMod to help live streamers get a handle on their chat rooms.
“League of Legends” developer Riot Games also uses AI and machine learning to limit toxicity in online games and fight player abuse. Riot’s AI flags potentially toxic messages and automatically warns users that their behavior is out of line and that they could face penalties if they continue.
At the Game Developers Conference in 2015, Jeffrey Lin, former lead designer at Riot, said that the AI not only restricts the amount of toxic content in the game but also helps reform toxic players by calling out their behavior. According to Lin, players reform 50 percent of the time when they are given the exact reason they are being banned, and the reform rate rises to 70 percent when the player is also given evidence of the behavior.
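Lin’s numbers suggest why the content of the warning matters. A rough sketch of that flag-and-warn loop might look like the following; the keyword check is a toy stand-in for Riot’s proprietary classifier, and the patterns, message text and function names are all illustrative assumptions rather than Riot’s actual system.

    import re

    # Toy patterns standing in for a real machine-learned classifier.
    ABUSIVE_PATTERNS = [r"\bidiot\b", r"\bnoob\b"]

    def is_abusive(message: str) -> bool:
        """Flag a chat message that matches any abusive pattern."""
        return any(re.search(p, message, re.IGNORECASE)
                   for p in ABUSIVE_PATTERNS)

    def warn_player(player: str, message: str) -> None:
        # Per Lin's talk, reform rates rise when the warning cites the
        # specific reason and shows the offending message as evidence.
        print(f"[warning to {player}] Your message was flagged as abusive.")
        print(f'Reason: insulting language. Evidence: "{message}"')
        print("Continued behavior may result in penalties.")

    if __name__ == "__main__":
        player, text = "PlayerOne", "uninstall, you noob"
        if is_abusive(text):
            warn_player(player, text)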
The AI moderator used by the Times does not alert users to their behavior; instead, it makes it easier for human moderators to spot toxic comments. The Times’ human moderators make the final decision on what to do about those comments, which most likely means deleting the comment and possibly banning the user.