Affective Computing has been a hot topic in recent years, with much of the research in this area focusing on how we can give computers emotions, and on how computers can autonomously detect our emotional states and adapt themselves accordingly. We take a different focus: we concentrate on people's psychological responses to agent entities that exhibit emotion, a research area that has been largely neglected in comparison. For example, is a synthetic smile the same as a human smile? How do we respond to synthetic displays of joy, happiness, sadness, frustration, fear and anger? Can we catch emotions from computers?
We have built a virtual human that simulates the role of a human health professional by drawing on many of the skills, strategies and techniques that human health professionals use when helping people change problematic behaviour. We are currently using this agent to investigate how people respond to synthetic displays of emotion and to examine whether such an agent can help motivate people to improve their diet.
One area that we are particularly interested in is how we respond to affective agent entities over multiple and extended periods of interaction. Most related studies in this area conduct their experiments over a single interaction, typically lasting less than an hour. However, if we are to interact with such agents in the future, it is likely that we will do so over multiple interactions spanning days, weeks, months, and potentially years. As a result, more research is required into how our perceptions of affective agents change over repeated interactions and over time.
(See also the Natural Language Processing group for some other work on affective computing.)