Since the beginning of internet gaming, there have been players who take their competitive nature too far and become abusive. Finding a way to keep games free of this kind of overly competitive (or downright hateful) rhetoric has been a long-standing challenge for developers. Blizzard, for its part, has deployed AI and machine learning to police game chats in Overwatch, and it seems to be working.

Monitoring and policing chat has become a focal point for the industry this year, if not an outright priority. The PS5, for example, will notably allow users to record voice chats to report abusive players, and Steam has added more chat filters to help people avoid the most common slurs and profanities.


However, these methods can be ham-fisted, censoring language that is merely subversive rather than genuinely abusive. While machine learning is not impervious to mistakes, it is theoretically less likely to make them, provided it has enough training data. In a "fireside chat" on YouTube, Blizzard president J. Allen Brack explained how the AI works and what it's doing for Blizzard properties.
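Purely as an illustration (this is not Blizzard's code), here is a toy Python sketch of why fixed word filters tend to be ham-fisted: a naive substring match flags harmless words, and even a word-boundary check still misses misspellings and coded language, which is the gap a learned classifier tries to close.

```python
import re

# Toy example: a banned-word list applied two different ways.
BANNED = ["ass"]

def naive_filter(message: str) -> bool:
    # Flags "assist" and "class" too -- the classic over-censorship problem.
    return any(term in message.lower() for term in BANNED)

def word_boundary_filter(message: str) -> bool:
    # Matching whole words avoids the most obvious false positives, but
    # still misses misspellings and coded slurs entirely.
    return any(re.search(rf"\b{re.escape(term)}\b", message.lower())
               for term in BANNED)

print(naive_filter("Can you assist me?"))          # True  (false positive)
print(word_boundary_filter("Can you assist me?"))  # False
```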

The AI helps Blizzard verify player reports of abusive behavior and language, as well as issue punishments of varying severity. It also acts as a chat filter, enabling the company to keep text-based communications free of abusive language. Brack reports that the company has implemented the technology in public WoW channels and, as a result, has cut disruptive player time in half.
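As a hedged sketch of how report verification with graduated penalties might work in principle (the confidence threshold, penalty tiers, and `classify` callback are all hypothetical assumptions, not details from Brack's talk):

```python
from typing import Callable, Optional

# Hypothetical escalation ladder; tiers and the 0.8 threshold are
# illustrative assumptions, not Blizzard's actual policy.
PENALTY_TIERS = ["warning", "24h_chat_mute", "7d_suspension", "permanent_ban"]

def verify_report(message: str, prior_offenses: int,
                  classify: Callable[[str], float]) -> Optional[str]:
    """Return a penalty if the model corroborates the report, else None."""
    score = classify(message)   # assumed: probability the text is abusive
    if score < 0.8:             # low confidence: defer to human review
        return None
    tier = min(prior_offenses, len(PENALTY_TIERS) - 1)
    return PENALTY_TIERS[tier]

# Example: a repeat offender whose reported message the model rates 0.95
print(verify_report("<reported text>", prior_offenses=2,
                    classify=lambda m: 0.95))  # -> "7d_suspension"
```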

Overwatch has increased player penalties for abusive behavior and has added more sophisticated, customizable profanity filters. Brack said of the system, "These are small steps, but they can add up to lasting change. Combating offensive behavior and encouraging inclusivity in all of our games and our workplaces will always be an ongoing effort for us."

That effort will undoubtedly be helped by machine learning, which continually looks for patterns in player reports, refining its model of what players are likely to consider offensive or abusive behavior. Time will tell just how effective the software can really be.
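A minimal sketch of such a feedback loop, assuming a scikit-learn-style text classifier and that moderator-reviewed reports supply the labels (every name here is illustrative, not Blizzard's pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def retrain(messages: list[str], labels: list[int]):
    """Refit the abuse classifier on the latest reviewed reports.

    labels: 1 where moderators upheld the player report, 0 where they
    dismissed it. Refitting each cycle keeps the model tracking what
    players actually report and what reviewers confirm.
    """
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, labels)
    return model
```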
