Activision partners with Modulate to combat in-game toxicity with AI-powered voice chat moderation
Activision Blizzard Inc. is taking a stronger role in combating voice chat toxicity in its games through a partnership with Modulate, the developer of machine learning voice technology that can evaluate the safety of online chats.
The new technology, announced Wednesday, will be deployed across North America in Activision’s first-person shooter titles “Call of Duty: Modern Warfare II” and “Call of Duty: Warzone.” “Call of Duty” is one of Activision’s most popular franchises, and “Modern Warfare II” became the company’s best-selling release in its opening weekend, generating more than $800 million in revenue in only three days after its November 2022 launch and hitting a peak of 169,000 concurrent players on Valve Corp.’s Steam platform alone.
Online gaming platforms have long struggled with what could be termed “toxic behavior”: abuse that players heap on one another in both text and voice chat, including cyberbullying, hate speech, discriminatory language and harassment. It can especially arise amid the high intensity and adrenaline of competition.
The new voice chat moderation system will use ToxMod, an artificial intelligence-powered system from Modulate that can identify toxic behaviors in real time and assist with enforcement. The technology will support Activision’s anti-toxicity team, which already moderates both text and voice chat and filters in-game text chat across 14 different languages.
According to Modulate, ToxMod applies advanced machine learning to flag voice chat by analyzing the nuances of a conversation: it can tell the difference between “f- yeah!” and “f- you!” and gauge a user’s emotions by looking at the conversational context and the reactions of other players.
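Modulate hasn’t disclosed ToxMod’s internals, but the distinction the company draws — the same profanity scoring differently depending on tone, target and how other players react — can be illustrated with a toy scoring function. Everything below (the Utterance fields, the weights, the helper names) is a hypothetical sketch, not Modulate’s actual code:

```python
# Hypothetical sketch only: all names, fields and weights are invented
# to illustrate context-aware toxicity scoring, not ToxMod itself.
from dataclasses import dataclass

@dataclass
class Utterance:
    transcript: str           # e.g. "f- yeah!" vs. "f- you!"
    speaker_anger: float      # 0.0-1.0, from a (hypothetical) emotion model
    listener_distress: float  # 0.0-1.0, aggregated reactions of other players
    directed_at_player: bool  # insult aimed at someone vs. general exclamation

PROFANITY = {"f-", "s-"}

def toxicity_score(u: Utterance) -> float:
    """Combine lexical, emotional and contextual signals into one score."""
    has_profanity = any(tok in PROFANITY for tok in u.transcript.lower().split())
    score = 0.2 if has_profanity else 0.0
    if u.directed_at_player:
        score += 0.4          # "f- you!" targets a person
    score += 0.2 * u.speaker_anger
    score += 0.2 * u.listener_distress
    return min(score, 1.0)

# "f- yeah!": profane but celebratory -> low score, likely ignored
print(toxicity_score(Utterance("f- yeah!", 0.1, 0.0, False)))  # ~0.22
# "f- you!": profane, angry, targeted -> high score, flagged for review
print(toxicity_score(Utterance("f- you!", 0.9, 0.7, True)))    # ~0.92
```

In a production system those signals would come from speech and emotion models rather than hand-set fields, but the point stands: identical words can yield very different scores once context is factored in.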
In addition to detecting toxic speech and bullying, the company recently added “risk categories,” which allow the machine learning algorithm to detect behavior trends over time that suggest a significant risk of violent radicalization or child grooming.
The new risk categories examine longer time scales than the real-time moderation of individual utterances, using patterns of intent to flag behaviors. That gives moderators a holistic view of a player’s behavior on a timeline, so they can see if there’s something in their community they need to address.
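As a rough illustration of how per-utterance flags could roll up into that kind of longer-term signal, here is a minimal sketch; the 30-day window, the five-flag threshold and the function names are invented for the example, not drawn from Modulate:

```python
# Hypothetical sketch: rolling up individual flags into a longer-term
# risk signal. Window size and threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)
ESCALATION_THRESHOLD = 5  # flags within the window before a human is alerted

flags: dict[str, list[datetime]] = defaultdict(list)

def record_flag(player_id: str, when: datetime) -> bool:
    """Store a per-utterance flag; return True when the rolling 30-day
    count suggests a pattern worth surfacing to a moderator."""
    flags[player_id].append(when)
    recent = [t for t in flags[player_id] if when - t <= WINDOW]
    flags[player_id] = recent  # drop events that aged out of the window
    return len(recent) >= ESCALATION_THRESHOLD
```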
“There’s no place for disruptive behavior or harassment in games ever,” said Michael Vance, chief technology officer of Activision. “Tackling disruptive voice chat particularly has long been an extraordinary challenge across gaming. This is a critical step forward to creating and maintaining a fun, fair and welcoming experience for all players.”
Game developers such as Riot Games Inc., publisher of the highly competitive esports game “League of Legends,” have long faced toxicity in their games and struggled to curb the behavior in chat. To help combat it, Riot and Ubisoft Entertainment SA, a French video game publisher with a large catalog of titles, partnered to build a database for AI-powered moderation to help deal with toxic language and behavior.
“This is a big step forward in supporting a player community the size and scale of ‘Call of Duty,’ and further reinforces Activision’s ongoing commitment to lead in this effort,” said Modulate Chief Executive Mike Pappas.
Activision said that since the launch of “Modern Warfare II,” the existing “Call of Duty” moderation team has banned voice or text chat for more than 1 million accounts using its current technology. According to the company’s own research, when filtering is combined with player reporting, 20% of toxic players do not reoffend after receiving a first warning. Those who do reoffend face further penalties such as restrictions on text and voice chat or temporary loss of account privileges.
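The enforcement path described — a warning first, then escalating restrictions — amounts to a simple penalty ladder. A minimal sketch, with step names assumed for illustration rather than taken from Activision’s actual policy:

```python
# Hypothetical penalty ladder; steps and ordering are illustrative,
# based only on the penalties the company describes publicly.
PENALTY_LADDER = [
    "warning",
    "text_chat_restriction",
    "voice_chat_restriction",
    "temporary_account_suspension",
]

def next_penalty(prior_offenses: int) -> str:
    """Return the penalty for a player's nth confirmed offense,
    capping at the harshest rung of the ladder."""
    index = min(prior_offenses, len(PENALTY_LADDER) - 1)
    return PENALTY_LADDER[index]
```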
As with many other AI-based moderation systems, there is always a risk of false positives. As a result, ToxMod doesn’t take enforcement action on its own: it triages and analyzes voice chats and forwards them to the moderation team. It will work alongside existing tools such as filters and player reports to give moderators a clearer picture of what’s happening, and the team retains the final say.
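That human-in-the-loop flow can be sketched as a priority queue that the model feeds and moderators drain; all names below are hypothetical:

```python
# Hypothetical human-in-the-loop triage, matching the flow described:
# the model scores and queues clips, and moderators make the call.
import heapq

review_queue: list[tuple[float, str]] = []  # (priority, clip_id) min-heap

def triage(clip_id: str, model_score: float, player_reported: bool) -> None:
    """Queue a flagged clip for human review; never auto-enforce.
    Higher model scores and player reports get reviewed sooner."""
    priority = -(model_score + (0.5 if player_reported else 0.0))
    heapq.heappush(review_queue, (priority, clip_id))

def next_case() -> str:
    """A moderator pulls the highest-priority clip and decides; the
    model's score is advisory input, not a verdict."""
    _, clip_id = heapq.heappop(review_queue)
    return clip_id
```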
The system is currently rolling out for titles in North America, with a worldwide release planned alongside the launch of “Call of Duty: Modern Warfare III” on Nov. 10. It currently supports only English, with additional languages to follow.