Most AI gaming experiments have involved software watching thousands of hours of a particular game and tracking the most successful moves and reactions over the course of a competitive match.
Nvidia's latest experiment starts in similar fashion: its AI research team trained a farm of four computers, each equipped with a workstation-grade Quadro GV100 GPU, on 50,000 hours of Pac-Man gameplay, footage generated in part by a separate AI agent playing the game. Trained on that footage, the computers then turned around and created their own identical-looking clone of the game.
Nvidia representative Hector Marinez said the AI didn't see any of [Pac-Man's] code, just pixels coming out of the game engine.
By watching this, it learned the rules: Pac-Man's speed, movement abilities, and inability to go through walls; the four ghosts' movement patterns; what happens when Pac-Man eats a power pellet; and what happens when ghosts touch Pac-Man, super-charged or otherwise.
"Any of us could watch hours of people playing Pacman, and from that, you could potentially write your own Pacman game, just by observing the rules", Marinez said. "That's what this AI has done."
Marinez did not say how the AI generates executable code of its own, nor whether it leans on existing game engines. Instead, he said the system is built from three modules (a memory module, a dynamics engine, and a rendering engine) that run as neural networks.
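Nvidia did not publish implementation details alongside the announcement, but the three-module description maps onto a frame-by-frame generative loop: a dynamics engine advances a latent game state given the player's action, a memory module keeps static elements (walls, pellets) consistent over time, and a rendering engine turns the latent state into pixels. The sketch below, written in PyTorch, is a hypothetical illustration of how such a loop could be wired together; the class names, layer sizes, and five-way action encoding are assumptions for illustration, not Nvidia's actual architecture.

```python
import torch
import torch.nn as nn

class DynamicsEngine(nn.Module):
    """Predicts the next latent game state from the current state and the player's action."""
    def __init__(self, state_dim=256, action_dim=5):
        super().__init__()
        self.cell = nn.GRUCell(action_dim, state_dim)

    def forward(self, state, action):
        return self.cell(action, state)

class MemoryModule(nn.Module):
    """Keeps a longer-term summary of the world so static elements stay consistent across frames."""
    def __init__(self, state_dim=256, memory_dim=256):
        super().__init__()
        self.write = nn.Linear(state_dim, memory_dim)
        self.gate = nn.Linear(state_dim + memory_dim, memory_dim)

    def forward(self, state, memory):
        # Gated update: blend the old memory with a candidate written from the new state.
        g = torch.sigmoid(self.gate(torch.cat([state, memory], dim=-1)))
        return g * memory + (1 - g) * torch.tanh(self.write(state))

class RenderingEngine(nn.Module):
    """Decodes the latent state plus memory into an output frame of pixels."""
    def __init__(self, state_dim=256, memory_dim=256, frame_shape=(3, 84, 84)):
        super().__init__()
        c, h, w = frame_shape
        self.decode = nn.Sequential(
            nn.Linear(state_dim + memory_dim, 512),
            nn.ReLU(),
            nn.Linear(512, c * h * w),
            nn.Sigmoid(),
        )
        self.frame_shape = frame_shape

    def forward(self, state, memory):
        flat = self.decode(torch.cat([state, memory], dim=-1))
        return flat.view(-1, *self.frame_shape)

def step(dynamics, memory_mod, renderer, state, memory, action):
    """One simulation step: action in, rendered frame out."""
    state = dynamics(state, action)       # advance the latent game state
    memory = memory_mod(state, memory)    # update long-term memory
    frame = renderer(state, memory)       # draw the frame
    return state, memory, frame

# Example: run one frame with batch size 1 and a hypothetical one-hot "move right" action.
dyn, mem, ren = DynamicsEngine(), MemoryModule(), RenderingEngine()
state = torch.zeros(1, 256)
memory_state = torch.zeros(1, 256)
action = torch.tensor([[0., 0., 0., 1., 0.]])  # assumed encoding: {noop, up, down, right, left}
state, memory_state, frame = step(dyn, mem, ren, state, memory_state, action)
print(frame.shape)  # torch.Size([1, 3, 84, 84])
```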
Nvidia says this playable version of Pac-Man will be made available to the public "this summer", though the company would not clarify whether it would be distributed as a downloadable executable or served via a restricted cloud-gaming service like Nvidia's GeForce Now.
The team's researchers suggested that these sorts of AI-driven routines could eventually aid the development of massive virtual worlds, serving as tools directed by human staffers.
"We've created an AI agent that can learn the rules of the game just by observing it. Before you create a tool that puts content into the game, it needs to understand the rules."