Gaming CEO shares nightmare scenario of using AI to spy on developers

At least one video game company has considered using a large language model AI to spy on its developers. The CEO of TinyBuild, which publishes Hello Neighbor 2 and Tinykin, discussed doing just that during a talk at this month’s Develop:Brighton conference, explaining how ChatGPT could be used to try to identify employees who are toxic, at risk of burning out, or who simply talk about themselves too much.

“That was pretty bizarre, Black Mirror-y for me,” admitted TinyBuild boss Alex Nichiporchik, according to a new report from WhyNowGaming. The report describes how transcripts from Slack, Zoom, and various task managers could be fed into ChatGPT, with identifying information removed, so the model can look for patterns. The AI chatbot would then purportedly scan the text for red flags that could help identify “potential problem players on the team.”
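Neither the talk nor the reporting includes actual code, but the pipeline Nichiporchik described is easy to picture. The sketch below is purely illustrative: it assumes the official OpenAI Python client, and the `scrub_identifiers` step and prompt wording are hypothetical inventions for this example, not anything TinyBuild has shown.

```python
# Illustrative sketch only: feed an anonymized chat transcript to ChatGPT
# and ask it to describe possible burnout or toxicity patterns.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def scrub_identifiers(transcript: str) -> str:
    """Hypothetical anonymization step: a real system would strip
    names, emails, and handles before sending text anywhere."""
    return transcript  # placeholder


def scan_for_red_flags(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    "You review anonymized workplace chat logs and describe "
                    "patterns that might suggest burnout or toxic "
                    "communication. Do not attempt to guess identities."
                ),
            },
            {"role": "user", "content": scrub_identifiers(transcript)},
        ],
    )
    return response.choices[0].message.content
```

Even in this toy form, the design problem critics point to is visible: the judgment of who counts as a “problem player” is delegated entirely to the prompt and the model.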

Nichiporchik took issue with WhyNowGaming’s characterization of the presentation and claimed in an email to Kotaku that he was describing a thought experiment, not practices the company actually employs. “This part of the presentation is hypothetical. No one actively monitors employees,” he wrote. “I spoke about a situation where we were in the middle of a critical moment at a studio where one of the key people was suffering from burnout. We were able to intervene quickly and find a solution.”

While the presentation may have been aimed at the overarching goal of predicting employee burnout before it happens, thereby improving conditions for both developers and the projects they work on, Nichiporchik also appeared to hold some controversial views on what causes problematic behavior and how HR can best identify it.

In Nichiporchik’s hypothetical, ChatGPT would monitor, among other things, how often people refer to themselves as “I” or “me” in office communications. Nichiporchik labeled employees who talk too much in meetings, or too much about themselves, as “time vampires.” “Once that person is no longer with the company or the team, the meeting takes 20 minutes and we get five times as much done,” he reportedly said during his presentation, according to WhyNowGaming.
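The “I/me” metric, at least, needs no AI at all, which is part of what observers found unsettling: a few lines of ordinary Python could tally it. The counter below is a hypothetical illustration, not anything TinyBuild has described shipping.

```python
import re

# First-person pronouns to count; the word list is an assumption.
FIRST_PERSON = re.compile(r"\b(i|me|my|mine|myself)\b", re.IGNORECASE)


def self_reference_rate(messages: list[str]) -> float:
    """Share of words that are first-person pronouns across messages."""
    words = hits = 0
    for msg in messages:
        words += len(msg.split())
        hits += len(FIRST_PERSON.findall(msg))
    return hits / words if words else 0.0


# Hypothetical usage:
# self_reference_rate(["I think my approach works", "Sounds good to me"])
# -> roughly 0.44
```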

Another controversial theoretical practice would be to ask employees for the names of colleagues they have had positive interactions with in recent months, and then to flag anyone whose name is never mentioned. These three methods, Nichiporchik suggested, could help a company “identify someone who is on the verge of burnout, who may be the reason the colleagues who work with that person are burning out, and you might be able to identify and fix it early on.”
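Mechanically, this third method amounts to simple set arithmetic: collect everyone’s positive-interaction nominations and subtract them from the roster. A rough sketch, again with invented names:

```python
def never_mentioned(roster: set[str],
                    nominations: dict[str, set[str]]) -> set[str]:
    """People on the roster whom no colleague named positively."""
    mentioned = set().union(*nominations.values()) if nominations else set()
    return roster - mentioned


# Hypothetical example:
# never_mentioned({"ana", "ben", "cho"},
#                 {"ana": {"ben"}, "ben": {"ana"}})
# -> {"cho"}
```

Trivial to compute, which is exactly the concern: absence from a goodwill list says nothing about why someone went unmentioned.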

This use of AI, theoretical or not, sparked a swift backlash across the internet. “If you have to repeatedly state that you know how dystopian and terrifying your employee surveillance is, you might be the damn problem, man,” tweeted Mitch Dyer, a writer at Warner Bros. Montreal. “A magnificent and horrifying example of how uncritical use of AI leads those in power to take it at face value and internalize their biases,” tweeted Mattie Brice, associate professor at UC Santa Cruz.

Corporate interest in generative AI has surged in recent months, prompting backlash from creatives in many different fields, from music to gaming. Both Hollywood writers and actors are currently on strike after negotiations with film studios and streaming companies stalled over, among other things, how AI could be used to write screenplays or to capture actors’ likenesses and use them in perpetuity.
