Ph.D. University of Southern California. Electrical Engineering. 2010.
M.S. University of Southern California. Electrical Engineering. 2007.
B.S. Tufts University. Electrical Engineering. 2004.
Emotion has intrigued researchers for generations. This fascination has permeated the engineering community, motivating the development of affective computational models for classification. However, human emotion remains notoriously difficult to interpret, both because of the mismatch between the emotional cue generation process (the speaker) and the cue perception process (the observer), and because of the presence of complex emotions: emotions that contain shades of multiple affective classes. Proper representations of emotion would ameliorate this problem by introducing multidimensional characterizations of the data that permit the quantification and description of the varied affective components of each utterance. The mathematical representation of emotion, however, remains underexplored.
Research in emotion expression and perception provides a complex and human-centered platform for integrating machine learning techniques and multimodal signal processing toward the design of interpretable data representations. The focus of this research is to provide a computational description of human emotion perception and to combine this knowledge with the information gleaned from emotion classification experiments, developing a mathematical characterization capable of interpreting naturalistic expressions of emotion through a data representation method called Emotion Profiles.
Emotion profiles (EPs) quantitatively express the degree to which each of a set of basic emotions is present or absent within an expression. They avoid the need for a hard-labeled assignment by instead describing the shades of emotion present in an utterance. These profiles can be used to determine a most likely assignment for an utterance, to map out the evolution of the emotional tenor of an interaction, or to interpret utterances that have multiple affective components. The Emotion-Profile technique accurately identifies the emotion of utterances with definable ground truths (emotions with an evaluator consensus). It can also interpret the affective content of utterances with ambiguous emotional content (no evaluator consensus), which are typically discarded during classification tasks.
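The idea behind an emotion profile can be illustrated with a minimal sketch. This is not the published implementation; the basic-emotion label set, the confidence values, and the ambiguity margin below are all illustrative assumptions. The sketch shows how a soft profile supports both a most-likely hard assignment and the detection of ambiguous, multi-component utterances:

```python
# Illustrative sketch only (not the author's implementation): an emotion
# profile represents an utterance as a vector of confidences over a set
# of basic emotions, rather than a single hard label.
from dataclasses import dataclass

@dataclass
class EmotionProfile:
    # Maps an emotion name to its degree of presence in [0, 1].
    confidences: dict

    def most_likely(self):
        """Hard-label assignment: the emotion with the highest confidence."""
        return max(self.confidences, key=self.confidences.get)

    def is_ambiguous(self, margin=0.1):
        """Flag utterances whose top two confidences fall within `margin`,
        i.e. utterances with shades of more than one affective class."""
        top2 = sorted(self.confidences.values(), reverse=True)[:2]
        return (top2[0] - top2[1]) < margin

# A hypothetical utterance containing shades of both anger and sadness.
profile = EmotionProfile({"angry": 0.55, "happy": 0.05,
                          "neutral": 0.15, "sad": 0.50})
print(profile.most_likely())   # most likely assignment: "angry"
print(profile.is_ambiguous())  # True: anger and sadness nearly tie
```

Rather than discarding the utterance above for lacking a single clear label, the profile retains its full affective description while still permitting a hard assignment when one is needed.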
Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan. "Emotion recognition using a hierarchical binary decision tree approach." Speech Communication. 53:9-10(1162-1171). 2011.
Emily Mower, Maja J. Mataric and Shrikanth S. Narayanan. "A Framework for Automatic Human Emotion Classification Using Emotional Profiles." IEEE Transactions on Audio, Speech and Language Processing. 19:5(1057-1070). May 2011.
Emily Mower, Maja Matarić, Shrikanth Narayanan. "Human Perception of Audio-Visual Synthetic Character Emotion Expression in the Presence of Ambiguous and Conflicting Information." IEEE Transactions on Multimedia. 11:5(843-855). August 2009.
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth Narayanan. "IEMOCAP: Interactive emotional dyadic motion capture database." Journal of Language Resources and Evaluation. 42:4(335-359). November 2008.
Michael Grimm, Kristian Kroschel, Emily Mower, and Shrikanth Narayanan. "Primitives based estimation and evaluation of emotions in speech." Speech Communication. 49:10-11(787-800). November 2007.