Development of affective human-computer interfaces (HCI) is of great interest in building future ubiquitous computing systems. We should be able to interact with computers in a natural way, similar to human-human interaction, and computer vision and speech recognition will play a key role in achieving this. The computer should be able to detect and identify the user, recognize the user's emotions, communicate easily by understanding speech and gestures, detect and track humans and recognize their actions, and provide a natural response based on its observations. Our research on image and video descriptors, facial image analysis, emotion recognition, gaze analysis, gesture analysis, visual speech analysis and synthesis, and 3D vision provides a unique basis for building affective HCI systems for various applications.
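To make this concrete, the sketch below shows one elementary building block of such a system: detecting the user's face in a live camera stream. This is an illustrative example only, not our research code; the stock OpenCV Haar-cascade detector and camera index 0 are assumptions standing in for the descriptors and facial analysis methods developed in our work.

```python
import cv2

# Illustrative sketch: find the user's face in a live video stream.
# Camera index 0 and the bundled Haar cascade are assumptions; any
# face detector could take their place in a real affective HCI system.
capture = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Typical default parameters for multi-scale detection.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Mark each detected face; a full system would pass these
        # regions on to emotion, gaze, or visual speech analysis.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

In a complete pipeline, detection like this would only be the first stage, feeding the face regions into the analysis methods listed above.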
Selected References
Hannuksela J, Sangi P & Heikkilä J (2007) Vision-based motion estimation for interaction with mobile devices. Computer Vision and Image Understanding: Special Issue on Vision for Human-Computer Interaction, 108(1-2):188-195.
Bordallo López M, Hannuksela J, Silvén O & Fan L (2012) Head-tracking virtual 3-D display for mobile devices. Proc. Computer Vision and Pattern Recognition Workshops (CVPRW), 27-34.
Röning J, Holappa J, Kellokumpu V, Tikanmäki A & Pietikäinen M (2014) Minotaurus: A system for affective human-robot interaction in smart environments. Cognitive Computation, 6(4):940-953.