Xbox introduces new AI solutions to protect players from unwanted messages as part of its multifaceted approach to safety


As we at Xbox continue our mission to bring the joy and community of gaming to even more people, we remain committed to protecting players from disruptive online behavior, creating experiences that are safer and more inclusive, and being transparent about our efforts to keep the Xbox community safe.

Our fifth transparency report highlights some of the ways we combine player-centric solutions with the responsible use of AI to further strengthen our human expertise in detecting and preventing unwanted behavior on the platform, ultimately ensuring we continue to balance and meet the needs of our growing gaming community.

From January 2024 to June 2024, we focused our efforts on blocking disruptive message content from non-friends and detecting spam and advertising, and we introduced two AI-enabled tools that reflect our multifaceted approach to protecting players.

Key findings from the report include:

  • Balancing safety and authenticity in messaging: We introduced a new approach to detecting and intercepting harmful messages between non-friends, contributing to a significant increase in the amount of disruptive content prevented. From January to June, a total of 19 million pieces of content violating Xbox Community Standards were prevented from reaching players across text, images, and video. This new approach balances two goals: protecting players from harmful content sent by non-friends while preserving the authentic online gaming experiences our community enjoys. We encourage players to take advantage of the new Xbox Friends and Followers experience, which gives them more control and flexibility when connecting with others.
  • Safety strengthened through player reports: Player reporting continues to be an important part of our safety approach. During this period, players helped us identify a spike in spam and advertising on the platform. We are constantly evolving our strategy to prevent the creation of inauthentic accounts at the source and to limit their impact on both players and the moderation team. In April, we took action against a wave of inauthentic accounts (1.7 million cases, up from 320,000 in January) that were affecting players with spam and advertising. Players helped us identify this spike and its pattern through their reports of Looking for Group (LFG) messages. Player reports of LFG messages doubled to 2 million, and reports across all content types rose 8% to 30 million compared with the last transparency reporting period.
  • Our dual AI approach: We released two new AI tools designed to support our moderation teams. These innovations not only prevent players from being exposed to disturbing material, but also allow our human moderators to prioritize their efforts on more complex and nuanced issues. The first of these new solutions is Xbox AutoMod, a system launched in February that helps moderate reported content. To date, it has processed 1.2 million cases and enabled the team to remove content affecting players 88% faster. The second AI solution, launched in July, works proactively to prevent unwanted communications. We designed these solutions to detect spam and advertising and will expand them in the future to prevent additional types of harm.

Underlying all of these new advancements is a safety system that relies on both players and the expertise of human moderators to ensure the consistent and fair application of our Community Standards, while improving our overall approach through a continuous feedback loop.

At Microsoft Gaming, our efforts to drive security innovation and improve our players’ gaming experience extend beyond the Transparency Report:

Prioritizing player safety in Minecraft: Mojang Studios believes that every player can do their part to make Minecraft a safe and welcoming place for everyone. To help with this, Mojang released a new feature in Minecraft: Bedrock Edition that reminds players of the game's Community Standards when potentially inappropriate or harmful behavior is detected in text chat. This feature is intended to remind players on servers of expected behavior and give them the opportunity to reconsider and modify their communications with others before an account suspension or ban becomes necessary. Elsewhere, since the official Minecraft server list launched a year ago, Mojang has partnered with GamerSafer to help hundreds of server owners improve their community management and safety measures. This has helped players, parents, and trusted adults find the Minecraft servers that are committed to the safety practices they care about.

Call of Duty anti-toxicity tool upgrades: Call of Duty is committed to combating toxicity and unfair play. To curb disruptive behavior that violates the franchise's Code of Conduct, the team uses advanced technologies, including AI, to empower moderation teams and combat toxic behavior. These tools are specifically designed to foster a more inclusive community where players are treated with respect and compete with integrity. Since November 2023, over 45 million text messages have been blocked across 20 languages, and exposure to voice toxicity has fallen by 43%. With the launch of Call of Duty: Black Ops 6, the team introduced moderation support for French and German, in addition to existing support for English, Spanish, and Portuguese. As part of this ongoing work, the team is also conducting research into prosocial behavior in gaming.

As the industry evolves, we continue to build a gaming community of passionate, like-minded, and considerate players who come to our platform to enjoy immersive experiences, have fun, and connect with others. We remain committed to platform safety and to creating responsible AI by design, guided by the Microsoft Responsible AI Standard and through our collaboration and partnership with organizations like the Tech Coalition. Thank you, as always, for contributing to our vibrant community and joining us on our journey.

Some additional resources:
