Blizzard is testing the use of machine learning to tackle unwanted language in Overwatch. This is being done in several languages. In the long term, the technology should also be able to assess more than just language, such as in-game behaviour.
Jeff Kaplan, game director of Blizzard's Overwatch shooter, says in an interview with Kotaku that the company is experimenting with machine learning and is trying to teach a system what unwanted language is. The aim of using AI is to tackle such language faster, without having to wait for a player to report it. The system covers unwanted language in several languages, such as English and Korean. At the moment, Blizzard is using it to deal with the most blatant cases.
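Blizzard has not disclosed how its system works. Purely as an illustration of what "teaching a system what unwanted language is" can mean, the sketch below implements a toy Naive Bayes text classifier from scratch; the labels and training phrases are invented placeholders, not real chat data.

```python
# Illustrative only: a toy Naive Bayes classifier trained on invented
# example phrases. This is NOT Blizzard's method, just a common baseline
# for learning labels like "toxic" vs "ok" from reported messages.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs.
    Returns per-label word counts and per-label sample counts."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training samples
    for text, label in samples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the highest log posterior,
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        denom = sum(word_counts.values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data standing in for flagged chat reports
training = [
    ("you are trash uninstall", "toxic"),
    ("trash team report him", "toxic"),
    ("good game well played", "ok"),
    ("nice shot thanks for the heals", "ok"),
]
counts, totals = train(training)
print(classify("report this trash player", counts, totals))  # → toxic
```

A production system would of course use far larger labelled datasets and modern models, and, as the article notes, would need separate training per language.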
"In everything related to reporting and punishing players, you need to start with the most extreme examples and see how the rules can be adjusted," Kaplan tells the site. The detection of unwanted language would not directly analyze messages between friends. In the long term, it should also be possible to detect undesirable behaviour in the game itself; it is unclear how far Blizzard has come with this. Kaplan says: "That is the next step. For example, how do you know if Mei's ice wall in the spawn room has been built by a troll?"
The Overwatch team is also looking at ways to reward positive behaviour in the game. Together with companies such as Twitch and League of Legends maker Riot Games, it is part of the so-called Fair Play Alliance, which works on 'healthy communities' in online games. In LoL, for example, such a system already exists in the form of the Honor system. Machine learning is used for moderation elsewhere too: the comments under New York Times articles, for instance, are moderated with the help of a service from Google's Jigsaw unit.