A Unified and Interpretable Emotion Representation and Expression Generation

Reni Paskaleva3*, Mykyta Holubakha1, Andela Ilic2, Saman Motamed1, Luc Van Gool1,2, Danda Paudel1
1INSAIT, Sofia University, Bulgaria, 2ETH Zurich, Switzerland,
3First Private Mathematical High School, Sofia, Bulgaria
Published at CVPR 2024

*As part of INSAIT internship
[Figure: diagram explaining the training process (under construction)]

Abstract

Canonical emotions, such as happy, sad, and fearful, are easy to understand and annotate. However, emotions are often compound, e.g. happily surprised, and can be mapped to the action units (AUs) used for expressing emotions, and trivially to the canonical ones. Intuitively, emotions are continuous, as represented by the arousal-valence (AV) model. An interpretable unification of these four modalities (namely canonical, compound, AUs, and AV) is highly desirable, as it would enable a better representation and understanding of emotions. However, such a unification is missing from the current literature. In this work, we propose an interpretable and unified emotion model, referred to as C2A2. We also develop a method that leverages the labels of the non-unified models to annotate the novel unified one. Finally, we modify text-conditional diffusion models to understand continuous numbers, which we then use to generate continuous expressions with our unified emotion model. Through quantitative and qualitative experiments, we show that our generated images are rich and capture subtle expressions. Our work offers fine-grained generation of expressions in conjunction with other textual inputs and, at the same time, introduces a new label space for emotions.
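To make the unified representation concrete, the minimal Python sketch below illustrates the idea: canonical emotions are expressed as AU intensity profiles, compound emotions as continuous blends of canonical profiles, and a toy linear projection maps AUs to an arousal-valence point. All AU sets, intensities, and AU-to-AV weights here are illustrative assumptions, not the paper's learned values, and every name is hypothetical.

    # Illustrative sketch of a unified emotion representation in the spirit
    # of C2A2. AU sets, intensities, and the AU->AV projection are assumed
    # for demonstration only.

    # Hypothetical AU activations (AU index -> intensity in [0, 1]) for two
    # canonical emotions, loosely following common FACS descriptions.
    CANONICAL_AUS = {
        "happy":     {6: 0.8, 12: 1.0},                  # cheek raiser, lip corner puller
        "surprised": {1: 0.9, 2: 0.9, 5: 0.6, 26: 0.7},  # brow raisers, lid raiser, jaw drop
    }

    def compound(label_a: str, label_b: str, w: float = 0.5) -> dict[int, float]:
        """Blend two canonical AU profiles into a compound one,
        e.g. 'happily surprised'. `w` is a continuous mixing weight;
        intermediate values give subtle, in-between expressions."""
        a, b = CANONICAL_AUS[label_a], CANONICAL_AUS[label_b]
        aus = set(a) | set(b)
        return {au: w * a.get(au, 0.0) + (1 - w) * b.get(au, 0.0) for au in aus}

    # Toy linear AU -> (arousal, valence) weights; real relations would be
    # learned or taken from the literature.
    AU_TO_AV = {6: (0.1, 0.6), 12: (0.2, 0.8), 1: (0.5, 0.0),
                2: (0.5, 0.0), 5: (0.6, 0.1), 26: (0.4, 0.0)}

    def to_av(aus: dict[int, float]) -> tuple[float, float]:
        """Project an AU profile onto an arousal-valence point."""
        arousal = sum(AU_TO_AV[au][0] * v for au, v in aus.items() if au in AU_TO_AV)
        valence = sum(AU_TO_AV[au][1] * v for au, v in aus.items() if au in AU_TO_AV)
        n = max(len(aus), 1)
        return arousal / n, valence / n

    if __name__ == "__main__":
        happily_surprised = compound("happy", "surprised", w=0.5)
        print("AUs:", happily_surprised)
        print("AV :", to_av(happily_surprised))

In the same spirit, a continuous scalar like `w` is the kind of number the modified text-conditional diffusion model would consume: sweeping it between 0 and 1 traces a path through the unified space, producing the gradual, in-between expressions described in the abstract.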

BibTeX


@inproceedings{emotiondiffusion,
  title={A Unified and Interpretable Emotion Representation and Expression Generation},
  author={Paskaleva, Reni and Holubakha, Mykyta and Ilic, Andela and Motamed, Saman and Van Gool, Luc and Paudel, Danda},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}