SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation
ICCV 2023
Abstract
Our goal is to synthesize 3D human motions given textual inputs describing simultaneous actions, for example ‘waving hand’ while ‘walking’. We refer to generating such simultaneous movements as performing ‘spatial compositions’. In contrast to temporal compositions, which seek to transition from one action to another, spatial composition requires understanding which body parts are involved in which action, in order to move them simultaneously. Motivated by the observation that the correspondence between actions and body parts is encoded in powerful language models, we extract this knowledge by prompting GPT-3 with text such as “what are the body parts involved in the action <action name>?”, while also providing the parts list and few-shot examples. Given this action-part mapping, we combine body parts from two motions and establish the first automated method to spatially compose two actions. However, training data with compositional actions is always limited by the combinatorics. Hence, we further create synthetic data with this approach and use it to train a new state-of-the-art text-to-motion generation model, called SINC (“SImultaneous actioN Compositions for 3D human motions”). In our experiments, we find that training on additional synthetic GPT-guided compositional motions improves text-to-motion generation.
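To make the two steps in the abstract concrete, below is a minimal sketch, not the authors' released code. Everything in it is an illustrative assumption: the coarse part list, the `PART_TO_JOINTS` grouping of SMPL joint indices, the helper names `build_gpt_prompt` and `compose`, and the (frames, 22, 3) per-frame joint-rotation layout.

```python
# Sketch of (1) the GPT-3 few-shot prompt asking which body parts an action
# involves, and (2) spatial composition of two motions from that mapping.
# All names, indices, and shapes here are assumptions for illustration.
import numpy as np

BODY_PARTS = ["left arm", "right arm", "left leg", "right leg", "torso", "head"]

# Hypothetical grouping of SMPL body-joint indices into coarse parts.
PART_TO_JOINTS = {
    "torso": [0, 3, 6, 9],
    "head": [12, 15],
    "left arm": [13, 16, 18, 20],
    "right arm": [14, 17, 19, 21],
    "left leg": [1, 4, 7, 10],
    "right leg": [2, 5, 8, 11],
}

def build_gpt_prompt(action: str) -> str:
    """Few-shot prompt in the spirit of the paper's GPT-3 query."""
    return (
        f"Candidate body parts: {', '.join(BODY_PARTS)}.\n"
        "What are the body parts involved in the action 'wave hand'?\n"
        "Answer: right arm\n"
        "What are the body parts involved in the action 'walk'?\n"
        "Answer: left leg, right leg, torso\n"
        f"What are the body parts involved in the action '{action}'?\n"
        "Answer:"
    )

def compose(motion_a: np.ndarray, parts_a: list[str], motion_b: np.ndarray) -> np.ndarray:
    """Spatially compose two motions of shape (frames, 22, 3): take the
    joints assigned to action A from motion A, all remaining joints from B."""
    n_frames = min(len(motion_a), len(motion_b))
    out = motion_b[:n_frames].copy()
    for part in parts_a:
        idx = PART_TO_JOINTS[part]
        out[:, idx] = motion_a[:n_frames, idx]
    return out

# Example: layer 'wave hand' (right arm from A) on top of 'walk' (rest from B).
waving = np.random.randn(60, 22, 3)   # placeholder clip for 'wave hand'
walking = np.random.randn(80, 22, 3)  # placeholder clip for 'walk'
combined = compose(waving, ["right arm"], walking)  # -> (60, 22, 3)
```

In the paper the prompt is answered by GPT-3; how that API call is made, and how the clips are trimmed to a common length, are left out of this sketch.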
More Information
Our models for all the experiments in the paper are available in the Download section.
Citation
@inproceedings{SINC:2023,
  title = {{SINC}: Spatial Composition of {3D} Human Motions for Simultaneous Action Generation},
  author = {Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
Contact
For questions, please contact nathanasiou@tue.mpg.de.
For commercial licensing, please contact ps-licensing@tue.mpg.de.