Morpheus: Neural-driven Animatronic Face with Hybrid Actuation and Diverse Emotion Control

RSS 2025


Zongzheng Zhang*1,2,    Jiawen Yang*1,    Ziqiao Peng1,    Meng Yang4,   
Jianzhu Ma1,    Lin Cheng5,    Huazhe Xu3,    Hang Zhao3 and Hao Zhao1,2

1Institute for AI Industry Research (AIR), Tsinghua University,    2Beijing Academy of Artificial Intelligence (BAAI),   
3Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University,    4MGI Tech, Shenzhen, China,    5Beihang University

* equal contributions    corresponding author: Hao Zhao



Abstract


Previous animatronic faces struggle to express emotions effectively due to hardware and software limitations. On the hardware side, earlier approaches either use rigid-driven mechanisms, which provide precise control but are difficult to design within constrained spaces, or tendon-driven mechanisms, which are more space-efficient but challenging to control. In contrast, we propose a hybrid actuation approach that combines the best of both worlds. The eyes and mouth, the key areas for emotional expression, are controlled by rigid mechanisms for precise movement, while the nose and cheeks, which convey subtle facial microexpressions, are driven by strings. This design allows us to build a compact yet versatile hardware platform capable of expressing a wide range of emotions. On the algorithmic side, our method introduces a self-modeling network that maps motor actions to facial landmarks, allowing us to automatically establish, through gradient backpropagation, the relationship between the blendshape coefficients of different facial expressions and the corresponding motor control signals. We then train a neural network to map speech input to the corresponding blendshape controls. With our method, we can generate distinct emotional expressions, such as happiness, fear, disgust, and anger, from any given sentence, each with nuanced, emotion-specific control signals, a feature that has not been demonstrated in earlier systems. We release the hardware design and code at https://github.com/ZZongzheng0918/Morpheus-Hardware and https://github.com/ZZongzheng0918/Morpheus-Software.
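The self-modeling idea above can be sketched in code: a learned, differentiable network predicts facial landmarks from motor commands, and the motor commands for a target expression (e.g., landmarks derived from emotion blendshape coefficients) are then recovered by gradient descent through the frozen self-model. The sketch below is a minimal illustration only, not the released implementation; the names SelfModel and solve_motors, the network architecture, and the motor/landmark dimensions are hypothetical placeholders, and the actual code lives in the Morpheus-Software repository.

```python
# Minimal sketch of the self-modeling pipeline (hypothetical names/sizes;
# see the Morpheus-Software repository for the actual implementation).
import torch
import torch.nn as nn

NUM_MOTORS = 25      # hypothetical actuator count
NUM_LANDMARKS = 68   # hypothetical 2D facial landmark count

class SelfModel(nn.Module):
    """Differentiable forward model: motor commands -> predicted landmarks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_MOTORS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_LANDMARKS * 2),
        )

    def forward(self, motors):
        return self.net(motors).view(-1, NUM_LANDMARKS, 2)

def solve_motors(self_model, target_landmarks, steps=500, lr=1e-2):
    """Recover motor commands whose predicted landmarks match a target
    expression, by backpropagating through the frozen self-model."""
    self_model.eval()
    latent = torch.zeros(1, NUM_MOTORS, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)  # optimize commands, not weights
    for _ in range(steps):
        opt.zero_grad()
        motors = torch.sigmoid(latent)       # keep commands in [0, 1]
        pred = self_model(motors)
        loss = ((pred - target_landmarks) ** 2).mean()
        loss.backward()                      # gradients flow through the model
        opt.step()
    return torch.sigmoid(latent).detach()
```

In this setup the same solver can serve any emotion: each set of blendshape coefficients yields a different landmark target, and the optimization produces the corresponding motor control signals without hand-tuning.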



Audio-to-lip Synchronization

[Video: given an input audio clip, the animatronic face produces synchronized lip motion.]


Speech-driven Expression Results

[Video: speech-driven expression results across different emotions.]


Facial Detection

[Video: facial landmark detection used by the system.]


Citation



@article{Morpheus,
  title={Morpheus: A Neural-driven Animatronic Face with Hybrid Actuation and Diverse Emotion Control},
  author={Zongzheng Zhang and Jiawen Yang and Ziqiao Peng and Meng Yang and Jianzhu Ma and Lin Cheng and Huazhe Xu and Hang Zhao and Hao Zhao},
  journal={Robotics: Science and Systems (RSS)},
  year={2025}
}


Contact


If you have any questions, please feel free to contact Hao Zhao at zhaohao@air.tsinghua.edu.cn.