Neural Network Creates Incredible Game Character Animations
Ashley Allen
We’ve seen wondrous advancements in video game animation – and its associated technologies – over the last decade, but as fidelity increases, it becomes ever more glaring when movements feel ‘off’; we have the Uncanny Valley to thank for that.
By now, most of us have seen the hilariously broken walking animations in BioWare’s Mass Effect: Andromeda.
An engineer from Ubisoft Montreal, though, has developed an innovative method of animating characters, and the results are incredibly impressive.
The engineer, Daniel Holden, has eschewed the traditional method of blending predefined, fixed animation loops and sequences in favour of a neural network capable of predicting the best movements to display based on a player’s input.
“We present a real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network,” Holden explains. “In this network structure, the weights are computed via a cyclic function which uses the phase as an input. Along with the phase, our system takes as input user controls, the previous state of the character, the geometry of the scene, and automatically produces high-quality motions that achieve the desired user control.”
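To make the “weights computed via a cyclic function of the phase” idea concrete, here is a minimal NumPy sketch of one such layer. The paper blends between a small set of control weight matrices using a cubic Catmull-Rom spline over the phase; the class name, layer sizes, and initialisation below are illustrative choices of mine, not the authors’ implementation.

```python
import numpy as np

def catmull_rom(w0, w1, w2, w3, t):
    """Cubic Catmull-Rom interpolation between w1 and w2 at t in [0, 1)."""
    return (
        w1
        + t * (0.5 * (w2 - w0))
        + t**2 * (w0 - 2.5 * w1 + 2.0 * w2 - 0.5 * w3)
        + t**3 * (1.5 * (w1 - w2) + 0.5 * (w3 - w0))
    )

class PhaseFunctionedLayer:
    """One layer whose weights are a cyclic function of the motion phase.

    Four control weight sets are spaced evenly around the phase cycle
    [0, 2*pi); the effective weights for any phase are spline-blended
    from them, so the layer's parameters vary smoothly and cyclically.
    """

    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((4, n_out, n_in)) * 0.1
        self.b = np.zeros((4, n_out))

    def __call__(self, x, phase):
        # Map phase onto the 4 control points: integer part picks the
        # segment, fractional part is the spline parameter t.
        p = (phase / (2.0 * np.pi)) * 4.0
        k = int(p) % 4
        t = p - int(p)
        idx = [(k - 1) % 4, k, (k + 1) % 4, (k + 2) % 4]
        W = catmull_rom(*(self.W[i] for i in idx), t)
        b = catmull_rom(*(self.b[i] for i in idx), t)
        return W @ x + b

# Because the blend is cyclic, a phase of 0 and a phase of 2*pi
# produce identical effective weights, so looping motions stay seamless.
rng = np.random.default_rng(0)
layer = PhaseFunctionedLayer(n_in=3, n_out=2, rng=rng)
x = np.ones(3)
y = layer(x, phase=0.0)  # shape (2,)
```

At runtime only the blended weights for the current phase need to be evaluated, which is part of why the trained system is so fast and compact.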
“The entire network is trained in an end-to-end fashion on a large dataset composed of locomotion such as walking, running, jumping, and climbing movements fitted into virtual environments,” he says. “Our system can therefore automatically produce motions where the character adapts to different geometric environments such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings.”
“Our network architecture produces higher quality results than time-series autoregressive models such as LSTMs as it deals explicitly with the latent variable of motion relating to the phase,” Holden adds. “Once trained, our system is also extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory, even when trained on gigabytes of motion data. Our work is most appropriate for controlling characters in interactive scenes such as computer games and virtual reality systems.”
Holden has co-authored a paper on the method with Taku Komura and Jun Saito, entitled Phase-Functioned Neural Networks for Character Control [PDF], and the trio will be presenting their findings at computer graphics tech conference SIGGRAPH, which runs from 30th July to 3rd August in Los Angeles.