I talked to Nvidia’s AI NPC: It’s impressive, incredible, a moral nightmare, and inevitably here to stay – whether you like it or not


Usually, when you visit one of the big hardware giants, the demos are focused on… well, you know – the hardware. That has been changing over the years, though, as specialized software has become an increasingly important front in the computer hardware wars.

Nvidia has taken a big lead in this regard. When you buy an RTX series GPU, you’re buying it for more than just raw rasterization capability – that is, how many pixels and frames it can push with maximum visual settings enabled. You also buy it for specific software features that aid your games’ rendering – like DLSS and Frame Generation, which boost frame rates without sacrificing visuals, or Reflex, which aims to reduce input lag.

All of this is, of course, part of a larger graphics arms race. When one manufacturer adds a feature, another counters with an equal or better alternative. As the race deepens, Nvidia sees an opportunity to bring gaming together with its other main interest: artificial intelligence.

The biggest example of how various branches of AI can come together to create something very real and relevant to gaming is Nvidia ACE, a system that essentially generates AI-driven NPCs. Typically, the NPCs you talk to in a game are written by humans and voiced by humans, and therefore have a limited set of lines and voice clips. Nvidia dares to ask: what if AI generated NPC dialogue on the fly?

Of course, the most natural reaction is to roll your eyes. And here’s the rub: well, yes – but what about the damn people whose job it is to bring these characters to life? Without human – or at least living, breathing – artists, there is no art. Of that I’m sure. But let’s go ahead and ask anyway: just how good is this? Are those jobs threatened by GPU-accelerated AI?

Nvidia’s response is that creating these characters still requires artistry. During a hands-on demonstration of the AI chatterbox, company representatives seemed to pre-empt such criticism by pulling up part of the backend of the Inworld Engine, the technology powering the demo. There, we could see the full profile of one of the characters you can talk to – a view that demonstrates both the artistry still involved and the versatility of the AI.

Is this really the future? | Watch on YouTube

Basically, you enter information about the character in question. This can be a few-line summary, or a really detailed description of their life history, opinions, likes, dislikes, loves, relatives – whatever you want. With this information defined by humans, the rest is handed to the AI, which pulls from that profile and the more general knowledge it has about the game world to formulate a response. These responses are then spoken back to the player via text-to-speech.
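
To make that flow concrete, here’s a minimal sketch of how such a profile-driven pipeline could be wired up. To be clear, every name here is my own invention for illustration – this is not Inworld’s or Nvidia ACE’s actual API.

```python
# Hypothetical sketch of a profile-driven AI NPC pipeline.
# All names here are illustrative assumptions, not Inworld/ACE code.
from dataclasses import dataclass, field

@dataclass
class NPCProfile:
    name: str
    backstory: str                                  # human-written biography
    goals: list[str] = field(default_factory=list)  # e.g. "open my own bar"

def build_prompt(profile: NPCProfile, world_lore: str, player_line: str) -> str:
    """Combine the designer-authored profile with general world knowledge
    into a single prompt for a language model."""
    return (
        f"You are {profile.name}. Backstory: {profile.backstory}\n"
        f"Your goals: {'; '.join(profile.goals)}\n"
        f"World knowledge: {world_lore}\n"
        f"The player says: \"{player_line}\"\n"
        "Reply briefly, in character."
    )

# In a real pipeline the reply would then be voiced, roughly:
#   text = llm.generate(build_prompt(tae, lore, player_speech))
#   audio = tts.synthesize(text)   # text-to-speech back to the player
```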

Regardless of my moral feelings about it all, I’m not going to lie about the end result: as an early tech demo, it’s impressive. The demo had us attending a GDC-like tech conference, trying to track down a specific character. In the hotel lobby, there are three people to talk to – a bellhop, a receptionist, and a conference guest, an ambitious tech-CEO type. Each has their own personality – and their own “mission,” so to speak, in terms of what they need to impart to the player.

Nvidia representatives talked a lot about “guardrails” in this regard. The free-form nature of chatting to an AI with your real voice means your mind can wander – so the AI is tightly restricted in what it can reveal, how it speaks, and so on. I brought up the real-life cases of airline customer support chatbots promising refunds that technically violated company policy after tortuous conversations with customers: could you trick an NPC with a sly enough tongue into giving up information it shouldn’t reveal? Not if the character is well defined, the Nvidia representatives insisted. The same goes for making sure characters don’t say anything offensive or inappropriate.
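
One plausible – and, to be clear, entirely assumed – way to picture such a guardrail is a post-generation check that blocks replies touching designer-forbidden topics. Nvidia and Inworld don’t document how their filtering works, so treat this as a sketch of the general idea, not their method.

```python
# Illustrative guardrail sketch: vet a generated reply before it's spoken.
# The topic matching here is deliberately naive; real systems would be
# far more sophisticated. All names are hypothetical.

def within_guardrails(reply: str, forbidden_topics: list[str]) -> bool:
    """Return False if the reply strays onto any forbidden topic."""
    lowered = reply.lower()
    return not any(topic.lower() in lowered for topic in forbidden_topics)

def safe_reply(candidate: str, forbidden_topics: list[str], fallback: str) -> str:
    # Fall back to a canned deflection rather than leak restricted info.
    return candidate if within_guardrails(candidate, forbidden_topics) else fallback

# Example: Tae deflects rather than hand out real cocktail recipes.
print(safe_reply("Sure! Mix 50ml of...", ["recipe", "ml of"],
                 "Ha - a bartender never gives away his secrets."))
```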

The result is conversation that’s a little stilted, but feels surprisingly natural – in fact, even if you ask the same question multiple times, you’ll get a different answer each time, which is fascinating. You start to notice the AI clinging to specific elements of a character’s biography. Bellhop Tae dreams of owning his own bar with his own signature cocktails – something he mentions almost every chance he gets. When I asked him questions during the demo about his life, his feelings, and my mission, he always found a way to get back to the cocktails. But when asked for the ingredients, he couldn’t provide them – either because they’re not in his biography, or because the AI was programmed not to give out actual drinks instructions, lest the end user need their stomach pumped.

Tae is one of three AI-driven NPCs you can meet in the demo. Like me, he’s obsessed with booze | Image Source: Nvidia

Sometimes the boundaries of game design and artificial intelligence rub against each other, and that friction creates an odd feeling. At one point, to advance the narrative, you’re encouraged to invent a reason why the tech conference keynote has been postponed. This could be something as simple as an electrical fault, a speaker running late, or anything else. But I couldn’t help myself: I told Diego, the CEO, that there was a bomb threat. He panicked and went straight to the hotel receptionist to ask if it was true – but his conversation with her was calm, and only brought up the fact that the keynote might be delayed, not that we could be blown up at any moment. The reason is simple: Diego’s conversation with me was AI-generated, but his chats with other NPCs are pre-scripted, and therefore less reactive.

But there’s no denying that, as a piece of technology, it’s impressive. Talking to the characters with my real voice, every one of them understood my regional British drawl, every interaction was unique, and each exchange gave me a better understanding of each character. It’s limited in scope, but once you understand the intended scope and take a step back – yeah, it all just works. It’s a bit rough right now, but as a proof of concept, everything you need is there.

If we separate the technology from the art for a moment, one can definitely see that this is the future. However, one can also see how it’s still a million miles away in terms of the human element. It lacks expression, emotional intelligence, and a sense of fun. The soul is missing – not just in the slightly robotic text-to-speech, but in all of it.

That being said, I could imagine it being a good fit in places. For example, on a TV-inspired kick, I’ve been playing Fallout 4 again – and I can imagine emergent, AI-generated quests handed out by AI-driven NPCs, alongside the existing carefully curated, human-written main missions and side quests. In theory, you’d have access to an endless supply of procedurally generated quests via a “fully voiced” AI quest giver. In fact, Bethesda already has procedural quests – they’re called “Radiant Quests” – and you can easily imagine how they could be enhanced with AI NPCs, as sketched below.
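
Purely as a thought experiment – none of this is Bethesda’s or Nvidia’s code, and every name is invented – a Radiant-style generator feeding an AI NPC might look something like this:

```python
# Hypothetical sketch: a Radiant-style quest roll handed to an AI NPC
# for in-character, fully voiced delivery. All names are illustrative.
import random

LOCATIONS = ["Corvega assembly plant", "Saugus Ironworks", "Dunwich Borers"]
TARGETS = ["raider boss", "pack of feral ghouls", "rogue synth"]

def generate_radiant_quest() -> dict:
    """Roll a simple kill quest from interchangeable parts."""
    return {"location": random.choice(LOCATIONS),
            "target": random.choice(TARGETS)}

def brief_player(quest: dict) -> str:
    # An AI NPC would turn this dry data into spoken, in-character dialogue;
    # here a template stands in for the language model + text-to-speech step.
    return (f"We've had trouble at the {quest['location']} - "
            f"a {quest['target']} needs dealing with. Up for it?")

print(brief_player(generate_radiant_quest()))
```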

Could Fallout 76’s launch have been improved by AI-driven NPCs… rather than no NPCs at all? | Image Source: iGamesNews/Bethesda

I’d like to take a pragmatic view of all this: artificial intelligence is potentially highly dangerous for the art of game making and for the artists who give us so much of what we love about gaming. But let’s also face facts: it is, inevitably, part of the future. A few minutes with this demo is enough to show you that. Developers and publishers will want to use this technology.

So, as with any technology, the question is how it’s used. Even with the characters’ uncanny idle loops and last-generation lip-flapping that clearly can’t fully sync to words generated a fraction of a second earlier, and even with the occasional processing delay and the odd audio stutter… it all works. It’s very good, very impressive. I can think of about a hundred different potential uses for this technology – and not all of them are evil.

And, in a sense, I was relieved. Nvidia ACE’s artificially intelligent NPCs are impressive… but they don’t feel real. Like everything in games, it’s all smoke and mirrors. And it’s a trade-off compared to traditional game writing. In games like Mass Effect or Fallout, what NPCs can say is relatively narrow, limited to a handful of dialogue options and responses per interaction – but each one is thoughtfully crafted, uniquely human, and beautifully performed.

The AI trades that depth for breadth – giving you an NPC you can talk to in a huge variety of ways, for as long as you like. The characters still feel alive, but they lack a sense of humanity, even if they’re more personable than those silly refund chatbots. For now, that means artists’ jobs are safe. The question is how this technology intersects with their work – helping them, not replacing them. That’s a future I can buy into.
