MOSPA: Human Motion Generation Driven by Spatial Audio


Abstract

Enabling virtual humans to dynamically and realistically respond to diverse auditory stimuli remains a key challenge in character animation, demanding the integration of perceptual modeling and motion synthesis. Despite its significance, this task remains largely unexplored. Previous works have primarily focused on mapping modalities such as speech, audio, and music to human motion, yet these models typically overlook the impact of the spatial features encoded in spatial audio signals on human motion. To bridge this gap and enable high-quality modeling of human movements in response to spatial audio, we introduce the first comprehensive Spatial Audio-Driven Human Motion (SAM) dataset, which contains diverse, high-quality spatial audio and motion data. For benchmarking, we develop a simple yet effective diffusion-based generative framework for human MOtion generation driven by SPatial Audio, termed MOSPA, which faithfully captures the relationship between body motion and spatial audio through an effective fusion mechanism. Once trained, MOSPA can generate diverse, realistic human motions conditioned on varying spatial audio inputs. We conduct a thorough investigation of the proposed dataset and extensive benchmarking experiments, in which our method achieves state-of-the-art performance on this task. Our model and dataset will be open-sourced upon acceptance.
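
The abstract describes the framework only at a high level. As a rough illustration of what a spatial-audio-conditioned diffusion denoiser with a simple fusion mechanism might look like, here is a minimal PyTorch sketch; the module names, feature dimensions, and additive fusion below are illustrative assumptions, not the released MOSPA implementation.

# Hypothetical sketch of a spatial-audio-conditioned diffusion denoiser.
# All architecture choices here (transformer backbone, additive fusion,
# x0 parameterization) are assumptions for illustration only.
import torch
import torch.nn as nn

class SpatialAudioMotionDenoiser(nn.Module):
    def __init__(self, motion_dim=263, audio_dim=128, latent_dim=512):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, latent_dim)
        self.audio_proj = nn.Linear(audio_dim, latent_dim)  # spatial audio features
        self.time_embed = nn.Sequential(
            nn.Linear(1, latent_dim), nn.SiLU(), nn.Linear(latent_dim, latent_dim)
        )
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(latent_dim, motion_dim)

    def forward(self, noisy_motion, audio_feats, t):
        # noisy_motion: (B, T, motion_dim); audio_feats: (B, T, audio_dim)
        # t: (B,) diffusion timesteps, normalized to [0, 1]
        h = self.motion_proj(noisy_motion) + self.audio_proj(audio_feats)  # fusion
        h = h + self.time_embed(t[:, None, None].float())  # broadcast over frames
        h = self.backbone(h)
        return self.out(h)  # predicted clean motion

# Example: denoise a batch of 4 sequences of 60 frames
model = SpatialAudioMotionDenoiser()
x_t = torch.randn(4, 60, 263)  # noised motion
a = torch.randn(4, 60, 128)    # per-frame spatial audio features
t = torch.rand(4)              # diffusion timesteps
x0_pred = model(x_t, a, t)     # (4, 60, 263)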

Publication
In NeurIPS 2025
Zhiyang Dou
MPhil, Aug. 2020 – Apr. 2026.
Co-supervised by Prof. Wenping Wang.
Mingyi Shi
PhD, Nov. 2020 – 2026 (expected).
Liang Pan
PhD, since Sep. 2025.
Leo Ho
MPhil, since Aug. 2024.
Taku Komura
Professor
