This project investigates the human capacity to decode communication conveyed through body action in virtual human representations (avatars/agents). Previous IMSC UCS research examined the factors that contribute to successful detection, by human users, of emotional facial expressions conveyed by avatars actuated using Performance-Driven Facial Animation. The current work expands this line of inquiry to the full-body non-verbal expression of emotion.
The knowledge gained from these projects will drive the integration of UCS methodologies into the design, development, and evaluation of IMS that incorporates gestural interaction through both standard magnetic tracking and vision-based systems. This work may also have broader value in creating the enabling technology required for multimodal and perceptual user interfaces (PUIs). The goal is to supplement current human-computer input modalities, such as the mouse, keyboard, or cumbersome tracking devices and data gloves, with multimodal interaction and PUI approaches that produce an immersive, coupled visual and audio environment. The ultimate interface may be one that leverages natural human abilities, as well as our tendency to interact with technology in a social manner, in order to model human-computer interaction after human-human interaction. Recent discussions of the incremental value of PUIs over traditional GUIs suggest that more sophisticated forms of bi-directional interaction between a computational device and the human user have the potential to produce a more naturalistic engagement between these two complex "systems". This enhanced engagement is anticipated to lead to better HCI for a wider range of users across age, gender, and ability levels, and across media types.
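As a concrete illustration of the kind of perceptual input layer described above, the sketch below maps coarse body-tracking samples (as might come from a magnetic tracker or vision-based pose estimator) to simple interface events. This is a minimal, hypothetical sketch: the joint names, thresholds, and event labels are illustrative assumptions, not the project's actual gesture pipeline.

```python
# Hypothetical sketch: translating coarse body-tracking samples into
# interface events, in the spirit of a perceptual user interface (PUI).
# Joint names, thresholds, and event labels are illustrative only.
from dataclasses import dataclass


@dataclass
class PoseSample:
    """One frame of tracked joint positions (metres, user-centred frame)."""
    right_hand_y: float   # height of the right hand above the floor
    right_hand_x: float   # lateral offset of the right hand from the body midline
    head_y: float         # height of the head above the floor


def classify_gesture(sample: PoseSample) -> str:
    """Map a single pose frame to a coarse interface event.

    A real system would use temporal models over many frames; this
    rule-based version only shows the input-to-event mapping idea.
    """
    if sample.right_hand_y > sample.head_y:
        return "raise_hand"        # e.g., request attention / select
    if abs(sample.right_hand_x) > 0.5:
        return "sweep"             # e.g., dismiss / navigate
    return "idle"


if __name__ == "__main__":
    frames = [
        PoseSample(right_hand_y=1.9, right_hand_x=0.1, head_y=1.7),
        PoseSample(right_hand_y=1.1, right_hand_x=0.7, head_y=1.7),
        PoseSample(right_hand_y=1.0, right_hand_x=0.0, head_y=1.7),
    ]
    for frame in frames:
        print(classify_gesture(frame))   # raise_hand, sweep, idle
```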
NSF Report (Year 7)
NSF Report (Year 8)
Poster