GDC 2024: Google's vision for AI 'living games' sounds awful, but inevitable
Google engineers described how Gemini 1.5 and Vertex AI will help make games "much more human" with hundreds of thousands of NPC chatbots with memories.
On Tuesday at GDC 2024, I attended Google's keynote session, "Reimagine the Future of Gaming with Google AI." Glancing at my neighbors' nametags, I saw engineers from Tencent, Epic Games, and other notable devs and publishers, all crammed into uncomfortable chairs to see Google's AI-tinted vision of how Android games can get better.
I'm used to people evangelizing doomed or premature concepts at industry events like GDC. However, this hour-long talk, given by a carousel of Google engineers and execs, sounded more like a helpful warning to the industry that AI-backed games are already happening and that they'd better get on board if they want to keep up.
Simon Tokumine, director of product management for Google AI, explained how game devs are using generative AIs like Google Gemini today for "ideation" when creating concept art, dialogue, and even early soundtrack treatments.
The next step, which Google thinks will arrive in the "next few years," is an influx of "living games." These games will have a similar model to live service games like Fortnite and Destiny 2, except that the updates will be "underpinned by generative AI," says Jack Buser, the director of games for Google Cloud.
What does this mean for Android games? Can GenAI really usher in a new era of gaming on par with "CD-ROMs and 3D graphics," as Buser promises? I'll run through the highlights of Google's vision of the future.
The next era of GenAI video games
Every game developer is looking for its next cash cow. A 2023 Game Development Report shows that 65% of surveyed development studios make live service games, and 30% plan to jump on the bandwagon. However, that same survey suggests that most developers don't have the resources to pull off the live service games that shareholders want.
Google's solution is to shift focus to living games that constantly update and create new content self-sufficiently, without the same labor demands.
Buser described how living games would respond to players' "implicit or explicit desires," adapting gameplay to address complaints or to emphasize popular features. He hinted that Google has "something coming very soon on that," possibly a new Gemini feature.
The panel also keyed in on GenAI-based non-playable characters (NPCs), something that other major tech companies like NVIDIA have focused on this week.
These tech companies, eager for a world where no one has to pay creatives like voice actors ever again, have begun showing off futuristic tech demos in which AI-driven characters lip-sync to dialogue generated by large language models.
Buser says these "fully organic" NPCs will, in the context of an MMO, remember what past players asked and adjust their dialogue accordingly. They'll also be able to form LLM "chains," in which an NPC pulls from other chatbots for information the player needs but doesn't yet have.
Dan Zaratsian, a Google AI/ML solutions architect, later described a vision of hundreds of thousands of GenAI-powered NPCs in a single world with individual character profiles, remembered player interactions, and dialogue with specific rulesets like the game's lore and history.
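Google didn't share implementation details on stage, but the architecture it sketched (a per-character profile, a running memory of player interactions, and lore rules that constrain the model) is easy to picture. Below is a minimal, hypothetical Python sketch of that idea; the NPC class, the call_llm() stub, and every field in them are illustrative assumptions, not anything Google demoed.

```python
# Hypothetical sketch of a memory-backed NPC chatbot. Nothing here is Google's API.
from dataclasses import dataclass, field


def call_llm(system_prompt: str, messages: list) -> str:
    # Placeholder: swap in a real chat-completion call (a Gemini endpoint, for example).
    return "(model reply goes here)"


@dataclass
class NPC:
    name: str
    persona: str       # individual character profile
    lore_rules: str    # the game lore/history this character must respect
    memory: list = field(default_factory=list)  # remembered player interactions

    def talk(self, player_id: str, player_line: str) -> str:
        system_prompt = (
            f"You are {self.name}. Persona: {self.persona}\n"
            f"Stay consistent with this lore: {self.lore_rules}"
        )
        self.memory.append({"role": "user", "content": f"[{player_id}] {player_line}"})
        reply = call_llm(system_prompt, self.memory)
        self.memory.append({"role": "assistant", "content": reply})
        return reply

    def ask_other_npc(self, other: "NPC", question: str) -> str:
        # Crude "chain": this NPC queries another bot for information it lacks.
        return other.talk(player_id=f"npc:{self.name}", player_line=question)


blacksmith = NPC("Brona", "gruff blacksmith", "The city of Eldenmoor fell 40 years ago.")
print(blacksmith.talk("player_17", "What happened to Eldenmoor?"))
```

The "chain" Buser mentioned amounts to one NPC handing a question to another bot's conversation loop; running that for hundreds of thousands of characters at once is where the hard, and expensive, part begins.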
Without a trace of irony, Tokumine described how these computer-backed NPCs would make video game worlds "much more human, much more alive, and much more complete" than ever — despite the lack of actual people involved in making them.
Humans would need to be involved to ensure that the LLMs don't generate inappropriate or incoherent content, Google assured. But at the automated scale Google describes, it's not clear how any game could employ enough editors to head off GenAI controversies or "financial issues."
Where Google Gemini fits in
Right now, Google uses its AI to auto-generate Play Store descriptions and improve its game marketing to players. And with the recent launch of Gemini 1.5, the generative AI now offers a context window of up to one million tokens.
The Google team on stage described what this means for developers: You can now upload half an hour of video or three hours of audio to Gemini and "ask it what comes next." You can also upload 10,000 lines of raw code, plus a video of your engineers fixing errors, and have Gemini use that template to catch and fix other errors for you.
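For developers curious what that workflow looks like in practice, it maps roughly onto Google's google-generativeai Python SDK. Treat this as a sketch built on assumptions rather than a recreation of the demo: the model name, the file, and the prompt are placeholders, and access to Gemini 1.5's long-context features and File API was still limited at the time.

```python
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload a gameplay capture; video files need to finish server-side processing
# before they can be referenced in a prompt.
video = genai.upload_file(path="playtest_session.mp4")  # hypothetical file
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro-latest")
response = model.generate_content(
    [video, "Based on this playtest footage, describe what should come next in the level."]
)
print(response.text)
```

The same generate_content() call accepts long transcripts or raw source files alongside text, which is where that one-million-token context window earns its keep.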
Lei Zhang, Google's director for Play Partnerships and Global GenAI, described how he uploaded "The Adventures of Sherlock Holmes" to Gemini 1.5 to create a Sherlock Holmes RPG with a story based on the detective's past adventures — something Zhang says the original Gemini 1.0 couldn't handle.
Beyond Gemini, the panel brought up Google Gemma, the company's new family of open models that developers can use commercially. Google wants smaller developers to use Gemma for "rapid prototyping," world-building, dialogue, and other applications that would normally require more resources to implement at scale.
Larger developers are already using Google's Vertex AI, which Buser says can be "tuned" so that its algorithm focuses on the devs' own rules and data, with greater privacy and more powerful applications.
The problems with Google's GenAI vision
Right now, your favorite roguelikes use procedural generation to create randomized challenges and layouts from human-made templates. And, of course, games like No Man's Sky create vast galaxies of generated planets at a scale humans couldn't achieve by themselves.
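For contrast, here's the existing, non-LLM approach in miniature: a purely illustrative Python sketch in which human-authored templates are recombined at random. The templates and fields are invented for the example.

```python
import random

# Hand-authored templates: the human-made part of procedural generation.
ROOM_TEMPLATES = [
    {"shape": "corridor", "enemies": (0, 2), "loot_chance": 0.1},
    {"shape": "arena",    "enemies": (3, 6), "loot_chance": 0.5},
    {"shape": "treasury", "enemies": (1, 3), "loot_chance": 0.9},
]


def generate_floor(num_rooms: int, seed=None) -> list:
    """Stitch human-authored templates into a randomized floor layout."""
    rng = random.Random(seed)
    floor = []
    for _ in range(num_rooms):
        template = rng.choice(ROOM_TEMPLATES)
        floor.append({
            "shape": template["shape"],
            "enemies": rng.randint(*template["enemies"]),
            "has_loot": rng.random() < template["loot_chance"],
        })
    return floor


print(generate_floor(num_rooms=5, seed=42))
```

Everything this generator can produce is bounded by the hand-made templates, which is exactly the ceiling Google is pitching LLM-driven content as a way past.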
If procedural generation expanded to include AI-based NPCs, that could make worlds feel more lived in than ever... but only if the LLMs backing them can become intelligent enough to work unsupervised and avoid hallucinations.
That also applies to "living games" that will change based on user feedback. Can future versions of Gemini actually be trusted to produce updates that keep players engaged and measure up to the customized, carefully conceived seasons and expansions that gamers have grown used to? Or is Google just taking creativity for granted as something its algorithm can eventually copy?
Google has hired musicians, artists, and lyricists to train its AI to spew out better results, but its AI is almost certainly trained on unpaid artists' and writers' work found via Google Search. A public-domain Sherlock Holmes RPG is fine, but Google can't monitor whether companies start feeding Gemini copyrighted material for "inspiration" that blurs the line between homage and theft.
I couldn't say whether the GDC 2024 attendees agreed with Google's vision. But like so many other gaming trends, live service games included, it seems like the kind of thing executives will latch onto regardless of the problems, given the potential for profitable games without having to pay as much for labor.
Other major tech brands like Xbox and NVIDIA have their own AI panels coming up this week at GDC, so don't be surprised if AI becomes a regular fixture of gaming whether or not you want it.
With Google's partners specifically, don't be surprised if more and more Android games start using GenAI and LLMs to make randomized puzzles or new narrative NPCs at a regular cadence. This will give the best Android phones more games to play, but the level of quality and originality will be another question entirely.