A team from EA and the University of British Columbia in Vancouver is using a technique called reinforcement learning, which is loosely inspired by the way animals learn in response to positive and negative feedback, to automatically animate humanoid characters.
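The core idea of reinforcement learning can be shown in a few lines. The sketch below is a minimal, illustrative tabular Q-learning example on a toy five-cell world, not EA's actual system: an agent gets positive feedback for reaching a goal and small negative feedback for every step, and learns to walk toward the goal.

```python
import random

random.seed(0)

# Minimal tabular Q-learning sketch (illustrative only, not EA's setup):
# an agent on a 5-cell line learns to step right to reach the goal cell.
N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the learned values, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # positive feedback at the goal, mild negative feedback per step
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy action in every non-goal state is "step right"
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The same feedback loop, scaled up to neural networks and high-dimensional body poses, is what lets a learned controller drive a character rather than hand-written animation code.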
EA said the results were promising.
Traditionally, characters in video games and their actions are crafted by hand. Sports games such as FIFA use motion capture, a technique that tracks a real person, often via markers on the face or body, to render more lifelike movement in human characters. But the possibilities are limited to the actions that have been recorded, and code still needs to be written to animate the character. By automating the animation process, along with other elements of game design and development, AI could save game companies millions of dollars while making games more realistic and more efficient, so that a complex game could run on a smartphone, for example.
To make the character, the team first trained a machine-learning model to identify and reproduce statistical patterns in motion-capture data. The researchers then used reinforcement learning to train another model to reproduce realistic motion with a specific objective, such as running toward a ball in the game. Crucially, this produces animations not found in the original motion-capture data. In other words, the program learns how a soccer player moves, and can then animate the character jogging, sprinting, and shimmying by itself.
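The two-stage pipeline described above can be caricatured in a short script. Everything here is an illustrative stand-in, not EA's code: the "motion-capture data" is just a list of step lengths, the learned motion prior is reduced to their mean, and a greedy reward-maximizing choice stands in for a trained RL policy. The point is the division of labor: one model proposes plausible motions, and an objective (reach the ball) selects among them.

```python
import math
import random

random.seed(0)

# Stage 1: "learn" statistical patterns from motion-capture data.
# Hypothetical mocap data, reduced here to stride lengths; the motion
# prior is simply their mean, with a little Gaussian variation.
mocap_step_lengths = [0.9, 1.1, 1.0, 0.95, 1.05]
mean_len = sum(mocap_step_lengths) / len(mocap_step_lengths)

def sample_motions(n=8):
    """Propose n plausible steps: mocap-like length, random heading."""
    for _ in range(n):
        heading = random.uniform(0, 2 * math.pi)
        length = random.gauss(mean_len, 0.05)
        yield (length * math.cos(heading), length * math.sin(heading))

# Stage 2: objective-driven control. A greedy choice stands in for a
# learned RL policy: pick the proposed motion that most reduces the
# distance to the ball.
def run_to_ball(pos, ball, steps=50):
    for _ in range(steps):
        best = max(sample_motions(),
                   key=lambda m: -math.dist((pos[0] + m[0], pos[1] + m[1]), ball))
        pos = (pos[0] + best[0], pos[1] + best[1])
        if math.dist(pos, ball) < mean_len:
            break
    return pos

final = run_to_ball((0.0, 0.0), (10.0, 5.0))
print(final)
```

Because the controller composes mocap-like steps rather than replaying recorded clips, it can produce trajectories, such as a run toward an arbitrary ball position, that never appeared in the original capture session.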
AI could generate content for other genres, including action and role-playing games. Some game companies are experimenting with procedural generation as a way to make games more expansive. A simple method is used to generate new worlds for players to explore in No Man’s Sky, a space-based survival game released in 2016. Togelius says AI is also emerging as a powerful way to test games and find bugs, using artificial players.
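The appeal of procedural generation is that worlds are derived from a seed rather than stored. The toy sketch below is not No Man's Sky's algorithm, just a minimal illustration of the principle: hashing a seed with each tile coordinate yields a deterministic, effectively endless map, and a new seed yields a new world.

```python
import hashlib

# Toy procedural world generation (illustrative only): each terrain tile
# is derived deterministically from a seed and its coordinates, so an
# entire world can be "stored" as nothing more than a seed value.

def tile(seed: int, x: int, y: int) -> str:
    """Deterministically map a (seed, coordinate) pair to a terrain type."""
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()[0]
    if h < 90:
        return "water"
    elif h < 200:
        return "grass"
    return "mountain"

def render(seed: int, width: int = 8, height: int = 4) -> str:
    """Render a small patch of the world as ASCII art."""
    glyphs = {"water": "~", "grass": ".", "mountain": "^"}
    return "\n".join(
        "".join(glyphs[tile(seed, x, y)] for x in range(width))
        for y in range(height)
    )

# The same seed always yields the same world; a different seed, a new one.
print(render(42))
```

Real games layer smoothed noise functions and placement rules on top of this idea, but the deterministic seed-to-world mapping is what makes an "infinite" game fit in a small download.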