Ubisoft and Riot team up to research toxicity in online games

Ubisoft and Riot Games have partnered to investigate toxicity in online gaming spaces and to develop better moderation tools for the future.

Ubisoft (Assassin’s Creed) and Riot Games (League of Legends) have teamed up for a major research collaboration, with both companies working towards creating safer online spaces by regulating toxicity, and shaping moderation tools around in-game data. The new ‘Zero Harm in Comms’ research initiative is designed around collective action, and aims to ‘create a cross-industry shared database and labelling ecosystem that gathers in-game data, which will better train AI-based preemptive moderation tools to detect and mitigate disruptive behaviour.’

The data will be gathered from diverse online spaces, with insights then used as a basis to train worldwide AI moderation tools. As language changes online, this data will reflect current trends and the ‘disruptive behaviours’ of those creating toxic spaces for other players.
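Neither company has detailed its models, but the general pipeline described above, labelled chat data feeding a preemptive text classifier, can be illustrated with a minimal sketch. The messages, labels, and scikit-learn pipeline below are hypothetical assumptions for illustration only, not part of the Zero Harm in Comms initiative.

```python
# Minimal sketch: training a text classifier on labelled chat messages.
# The messages and labels below are hypothetical placeholders, not data
# from the Zero Harm in Comms initiative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled in-game chat lines: 1 = disruptive, 0 = benign.
messages = [
    "nice shot, well played",
    "can someone cover the objective?",
    "uninstall the game, you are useless",
    "reported, go back to the tutorial",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message before it reaches other players (preemptive moderation).
print(model.predict_proba(["you are useless, uninstall"])[0][1])
```

A shared, cross-industry dataset would mainly change the scale and diversity of the labelled examples; the moderation step itself remains a matter of scoring each message before it is shown to other players.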

‘Disruptive player behaviours are an issue that we take very seriously but also one that is very difficult to solve,’ Yves Jacquier, executive director of Ubisoft La Forge, said of the company’s goals in a press release. ‘Through this technological partnership with Riot Games, we are exploring how to better prevent in-game toxicity as designers of these environments with a direct link to our communities.’

Read: Apex Legends studio Respawn speaks out against developer harassment

Riot acknowledges that disruptive behaviour isn’t unique to online games, but believes that change needs to start somewhere. Toxic behaviour online can influence the actions of people in real-life social settings, and has a cyclical impact on users.

The company previously took drastic action in shutting down cross-team chat (also known as general chat) in League of Legends in an effort to combat toxicity between players.

By identifying the root causes of this toxicity and setting clearer boundaries for moderation, Ubisoft and Riot Games hope to use their in-game data to create more ‘positive experiences in online spaces.’

As Ubisoft and Riot Games step up their efforts, fellow companies like Microsoft are also looking to boost their online transparency and safety. Recently, an Xbox-led transparency reporting initiative revealed the company had proactively disciplined around 4.3 million bot accounts in its online social spaces in 2022.

There’s hope that renewed attention on safety in these spaces will reduce toxicity overall, and create a sense of welcome for everyone. Ubisoft and Riot plan to share the results from their initial research phase in 2023.

Leah J. Williams is a gaming and entertainment journalist who's spent years writing about the games industry, her love for The Sims 2 on Nintendo DS and every piece of weird history she knows. You can find her tweeting @legenette most days.