Ubisoft and Riot are teaming up on an anti-toxicity research project that will focus on collecting in-game chat logs as training data for AI models. Both companies will publish their findings from the data next summer, at which point future steps will be determined.
Wesley Kerr, director of technical research at Riot, and Yves Jacquier, executive director of Ubisoft La Forge, spoke with me over Zoom and shared their goals and long-term hopes for the project. They hailed it as the first open AI research collaboration between the two gaming companies, and they hope the learnings, published next year, will be a first step toward the industry’s effective use of AI as a tool to reduce toxicity.
According to Jacquier, the project has three main goals. First, create a network of shared datasets containing fully anonymized player data. Second, create an AI algorithm that can process this data. Finally, establish this partnership as a “prototype” for future industry initiatives targeting toxicity, encouraging competition and further progress in the field.
It makes sense that Riot and Ubisoft would be two of the companies investing in this problem, given their popular multiplayer games. Rainbow Six: Siege gets pretty nasty once teamwork takes a hit, and Riot’s troubled twins, League of Legends and Valorant, are drenched in venom.
Kerr and Jacquier emphasized throughout the interview that player anonymity and compliance with regional laws and GDPR are their top priorities. When asked whether player data is shared between the companies, Kerr stressed that League of Legends account information will not be sent to other companies without the player’s consent. Instead, chat transcripts will be stripped of identifying information before any algorithm can pick through them.
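Neither company has detailed its pipeline, but the kind of scrubbing Kerr describes might look something like the minimal Python sketch below, which replaces known usernames and obvious identifiers with placeholder tokens before a transcript ever reaches a model. Every pattern and name here is an assumption for illustration, not Riot’s or Ubisoft’s actual code.

```python
import re

# Hypothetical identifier patterns; a real pipeline would cover far more
# (player IDs, Discord tags, IP addresses, phone numbers, and so on).
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def anonymize_line(line: str, known_usernames: set[str]) -> str:
    """Replace known player names and obvious PII with placeholder tokens."""
    line = EMAIL_RE.sub("<EMAIL>", line)
    for name in known_usernames:
        # Match the username as a whole word, case-insensitively.
        line = re.sub(rf"\b{re.escape(name)}\b", "<PLAYER>", line, flags=re.IGNORECASE)
    return line

# Usage: the transcript keeps its conversational content but loses identities.
players = {"xXSlayerXx", "MidOrFeed42"}
print(anonymize_line("xXSlayerXx report MidOrFeed42, mail me at a@b.com", players))
# -> "<PLAYER> report <PLAYER>, mail me at <EMAIL>"
```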
When you hear about AI curbing toxicity, the most immediate issue that comes to mind is players’ persistence, their determination to let you know exactly how trash you are. Online communities constantly coin new words, an ever-evolving lexicon of trash talk. How can artificial intelligence keep up? The trick, according to Jacquier, is to avoid relying on dictionaries and static data sources. Instead, the value lies in current player chat data, which reflects the current toxicity meta.
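Jacquier didn’t describe an architecture, but the distinction he draws, learning from fresh chat data rather than a fixed word list, is roughly what the generic scikit-learn sketch below illustrates: a classifier refit on recently labeled transcripts picks up new slang automatically, where a static blocklist would not. The example messages and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed inputs: recently moderated chat lines with human labels.
# A static blocklist would miss newly coined insults; a model refit on
# fresh data learns them from usage.
recent_chat = ["gg wp everyone", "uninstall you are so bad", "nice ult!", "actual bot team"]
labels = [0, 1, 0, 1]  # 0 = fine, 1 = toxic (illustrative only)

# Character n-grams help with deliberate misspellings ("b4d", "tr4sh").
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(recent_chat, labels)

# Refitting on each new batch of labeled chat keeps pace with the toxicity meta.
print(model.predict(["you are so b4d"]))
```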
Then there’s the other issue of misfires, especially in a medium where friendly banter between friends, random teammates, and even enemy players can be part of the experience. If I’m playing top lane in League of Legends and I write “nice CS buds” to my 0/3 lane opponent, it’s just a bit of a joke, right? And if they fired the same back at me, it would spur me on. It makes me want to win more and enhances the experience. How can artificial intelligence tell the difference between genuinely harmful toxicity and a joke?
“It’s very difficult,” Jacquier said. “Understanding the context of the discussion is one of the hardest parts. For example, if a player threatens another player. In Rainbow Six, if a player says ‘hey, I’m going to take you out’, that could be part of the fantasy. In other contexts, it can have a very different meaning.” Kerr then touched on the advantages video games have in this regard, thanks to the other contextual signals they provide.
According to him, considering who you queued up with is one example of a factor that could help an AI determine whether a line is genuine toxicity or a joke between friends. In theory, calling your lifelong best friend trash in a League of Legends lobby shouldn’t get you hit with a stray penalty.
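Again, neither company has detailed its approach, but folding that social signal into moderation could be as simple as the hypothetical sketch below, where the same message is scored differently depending on whether sender and target queued together. The weights and fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    sender_premade_with_target: bool  # did these two players queue together?

def toxicity_score(msg: Message, text_model_score: float) -> float:
    """Combine a text model's raw score with social context (illustrative weights)."""
    # Banter between premade friends is discounted; the same words aimed at a
    # stranger keep their full weight.
    discount = 0.5 if msg.sender_premade_with_target else 1.0
    return text_model_score * discount

# "nice CS buds" scored 0.6 by a hypothetical text model:
friendly = Message("nice CS buds", sender_premade_with_target=True)
stranger = Message("nice CS buds", sender_premade_with_target=False)
print(toxicity_score(friendly, 0.6), toxicity_score(stranger, 0.6))  # 0.3 vs 0.6
```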
As for the future, all eyes are on the results to be announced next year. The project is currently focused solely on chat logs, but with Riot Games set to begin monitoring voice communications in Valorant in 2023, Kerr declined to rule that out as a future area of research should the partnership continue. For now, this is a blueprint, and both companies appear committed to taking the first step of a long journey. While both Kerr and Jacquier hope the research project will yield important discoveries and inspire other companies to follow suit, they don’t see AI as a panacea for curbing toxicity.
“AI is a tool, but it’s not a panacea. There are many ways to keep players safe, so the idea is to better understand how best to use this tool to deal with harmful content.”
Ultimately, this study is just one component of a broader effort, but one that Jacquier and Kerr hope will prove crucial. Only time will tell if they’re right, if they keep their promise to protect player privacy, and if AI really is the next frontier in the fight against toxicity.