TY - GEN
T1 - Facial expression analysis using nonlinear decomposable generative models
AU - Lee, Chan Su
AU - Elgammal, Ahmed
PY - 2005
Y1 - 2005
AB - We present a new framework to represent and analyze dynamic facial motions using a decomposable generative model. In this paper, we consider facial expressions that lie on a one-dimensional closed manifold, i.e., they start from some configuration and return to that same configuration, while other sources of variability, such as different classes of expression and different people, also need to be parameterized. The learned model supports tasks such as facial expression recognition, person identification, and synthesis. We aim to learn a generative model that can generate different dynamic facial appearances for different people and for different expressions. Given a single image or a sequence of images, we can use the model to solve for the temporal embedding, expression type, and person identification parameters. As a result, we can directly infer the intensity of a facial expression, the expression type, and the person's identity from the visual input. The model can successfully be used to recognize expressions performed by people never seen during training. We show experimental results from applying the framework to simultaneous face and facial expression recognition.
UR - http://www.scopus.com/inward/record.url?scp=33646387151&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33646387151&partnerID=8YFLogxK
DO - 10.1007/11564386_3
M3 - Conference contribution
SN - 3540292292
SN - 9783540292296
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 17
EP - 31
BT - Analysis and Modelling of Faces and Gestures - Second International Workshop, AMFG 2005, Proceedings
PB - Springer Verlag
T2 - 2nd International Workshop on Analysis and Modelling of Faces and Gestures, AMFG 2005
Y2 - 16 October 2005 through 16 October 2005
ER -