UPDATED 17:45 EDT / FEBRUARY 23 2017


Google’s Perspective uses AI to fight Internet trolls

Internet trolls are notorious for saying some of the worst things imaginable online, things that no sane person would ever say in real life. But tracking and removing those comments manually could be a full-time job for web administrators.

Now, Google Inc. and Jigsaw, the technology incubator run by parent company Alphabet Inc., have come up with an artificial intelligence solution that can take on that grueling work. Today they released it to the world in the form of an application programming interface called Perspective.

Google and Jigsaw first mentioned Conversation AI, their AI comment moderator, back in September. Jigsaw President and founder Jared Cohen said at the time that he wanted “to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight.”

Cohen noted that toxic Internet comments can force many people to stay silent online in a form of self-censorship, and he said that the AI could “level the playing field.”

Perspective uses Conversation AI to spot online abuse and make it easy for human moderators to take action. The AI not only catches obvious abuse, but it can also score comments based on their perceived negativity, effectively allowing it to distinguish between harmless banter and genuine harassment.
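For developers, Perspective is exposed as a REST endpoint that returns a probability-style toxicity score for a piece of text. The sketch below is a minimal illustration in Python, assuming the comments:analyze endpoint and the TOXICITY attribute described in Google’s launch documentation; the API key, sample comments and field names are placeholders, not details taken from this article.

```python
import requests

# Minimal sketch of a Perspective API request (assumed endpoint and request
# shape based on Google's launch documentation, not this article).
API_KEY = "YOUR_API_KEY"  # placeholder: issued through the Google API console
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return the 0-1 toxicity score Perspective assigns to `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    # summaryScore.value is the overall probability that the comment is toxic.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("you are such an idiot"))     # high score expected
print(toxicity_score("thanks, that was helpful"))  # low score expected
```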

On the website for Perspective, Google demonstrates how publishers can use this score to filter comments on a sliding scale. For example, a publisher could allow only completely safe comments that have no negativity whatsoever, or it could allow everything except for the worst, most blatantly abusive comments.
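As a rough sketch of how that sliding scale could work on the publisher’s side, a moderation tool might simply compare each comment’s score against a configurable cutoff. The function, data structure and threshold values below are hypothetical illustrations, not part of the API.

```python
# Hypothetical sliding-scale filter built on Perspective's toxicity scores.
def visible_comments(scored_comments, threshold):
    """Keep comments whose toxicity score falls below the chosen threshold.

    `scored_comments` is assumed to be a list of (text, score) pairs, where
    `score` is the 0-1 value returned by the Perspective API.
    """
    return [text for text, score in scored_comments if score < threshold]

comments = [
    ("Great analysis, thanks for writing this.", 0.03),
    ("I disagree, the data says otherwise.", 0.12),
    ("Anyone who believes this is a moron.", 0.92),
]

print(visible_comments(comments, threshold=0.2))   # strict: only very safe comments
print(visible_comments(comments, threshold=0.95))  # permissive: blocks only the worst
```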

Perspective can also alert commenters in real time if the message they are about to post could be considered inflammatory, even displaying the toxicity score of a comment as it is being typed. You can experiment with this feature on Perspective’s website.

While the intentions behind Perspective are good, there is a possibility that the technology could be used for censorship rather than to root out pointlessly toxic comments. Even Google’s own examples, which used controversial topics such as Brexit and climate change, show that legitimate comments could be filtered out simply for containing language that is not G-rated.

Filtering out these negative comments might keep some conversations from getting too heated, but it could also create the illusion that any controversy is imagined, which might not be any better.

Image: Alphabet
