Video game streaming site Twitch on Monday stepped up efforts to fight inappropriate comments, introducing an artificial intelligence algorithm that it says can offer broadcasters more control over how controversial they want their comment streams to get.
Called AutoMod, the AI feature stands out for how much customization it offers. Many online comment platforms can identify posts that contain threats and foul language, even preventing them from appearing on a site until a human moderator has a chance to review them. But AutoMod will let Twitch users decide for themselves how controversial they want their comments sections to be.
“For the first time ever, we’re empowering all of our creators to establish a reliable baseline for acceptable language and around-the-clock chat moderation,” Twitch Moderation Lead Ryan Kennedy said in a statement.
In addition to monitoring words, AutoMod can also filter out character strings or symbols that commenters sometimes use to avoid detection. It joins several existing moderation tools available to Twitch users, including the ability to create codes of conduct and assign monitoring responsibilities to one or more of their trusted followers. AutoMod will initially only be available in English, but Twitch is beta testing versions in more languages.
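Twitch has not published AutoMod's internals, but the two capabilities described above — normalizing symbol substitutions that commenters use to dodge filters, and letting each broadcaster pick a strictness level — can be sketched in a simple way. The substitution table, the placeholder blocklist terms, and the `is_blocked` function below are all illustrative assumptions, not Twitch's actual algorithm:

```python
import re

# Illustrative table mapping common look-alike characters and symbols
# back to the letters they imitate (e.g. "b4d" -> "bad").
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s", "!": "i",
})

# Placeholder tiered blocklists: a stricter level blocks everything
# the looser level does, plus more. Real categories would differ.
LEVELS = {
    1: {"severeword"},
    2: {"severeword", "mildword"},
}

def is_blocked(message: str, level: int = 1) -> bool:
    """Return True if any token in the message matches the blocklist
    for the chosen strictness level, after undoing substitutions."""
    normalized = message.lower().translate(SUBSTITUTIONS)
    tokens = re.findall(r"[a-z]+", normalized)
    return any(tok in LEVELS[level] for tok in tokens)

print(is_blocked("s3v3r3w0rd"))           # True at the default level
print(is_blocked("mildword"))             # False at level 1
print(is_blocked("mildword", level=2))    # True at the stricter level
```

A production system would be far more sophisticated (handling repeated characters, spacing tricks, and context), but the sketch shows why a per-broadcaster dial is just a choice of which list applies.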
Twitch, an Amazon subsidiary, faces an inherently uphill battle when it comes to content moderation, since hardcore gamers aren’t known for holding back trash talk. Drawing the line between trash talk and abuse, though, could help the company in its ambitions to venture beyond the gaming world and compete with the likes of YouTube and Facebook Live. Twitch credits an early version of AutoMod with cleaning up the comments sections of its video streams for the Democratic and Republican national conventions this past summer.