Automatic Generation of Character Animations Expressing Music Features
In this paper, we propose using procedural animation of a human character to enhance the interpretation of music. The system consists of a procedural motion generator that produces expressive motions according to music features extracted from a music input, and it uses dynamic programming (DP) to partition a piece of music into segments for subsequent planning of character animations. In the literature, much music-related animation research composes new animations by reconstructing and modifying existing motions. In this work, we analyze the relationship between music and motion, and then use procedural animation to automatically generate expressive motions for the upper body of a human character to interpret the music. Our experiments show that the system can generate appropriate motions for music of different styles and allows a user to adjust system parameters to satisfy his or her visual preferences.
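The abstract mentions that DP is used to segment a piece of music for animation planning. The paper's exact cost formulation is not given here, so the following is only a minimal sketch of one common DP segmentation scheme: choose boundaries over a 1-D music-feature sequence (a hypothetical per-frame feature such as loudness) that minimize within-segment variance plus a fixed per-segment penalty.

```python
import math

def segment_cost(features, i, j):
    """Cost of grouping frames i..j-1 into one segment:
    sum of squared deviations from the segment mean (lower = more homogeneous).
    The feature itself (e.g. per-frame loudness) is an assumption, not from the paper."""
    seg = features[i:j]
    mean = sum(seg) / len(seg)
    return sum((x - mean) ** 2 for x in seg)

def dp_segment(features, penalty=1.0):
    """Split a feature sequence into contiguous segments, minimizing
    total within-segment cost plus `penalty` per segment (which discourages
    over-segmentation). Returns a list of (start, end) index pairs."""
    n = len(features)
    best = [math.inf] * (n + 1)  # best[j]: minimal cost of segmenting frames 0..j-1
    best[0] = 0.0
    back = [0] * (n + 1)         # back[j]: start index of the last segment ending at j
    for j in range(1, n + 1):
        for i in range(j):
            cost = best[i] + segment_cost(features, i, j) + penalty
            if cost < best[j]:
                best[j] = cost
                back[j] = i
    # Recover segment boundaries by walking the backpointers
    bounds = []
    j = n
    while j > 0:
        i = back[j]
        bounds.append((i, j))
        j = i
    return list(reversed(bounds))

# Example: a feature sequence with an abrupt change splits into two segments.
print(dp_segment([0.1, 0.2, 0.1, 2.0, 2.1, 1.9], penalty=1.0))
# → [(0, 3), (3, 6)]
```

Each recovered segment could then be assigned its own expressive motion during animation planning; in practice the per-segment penalty trades off segment granularity against homogeneity.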
Category: Autonomous Digital Actor