Applying Artificial Intelligence Techniques for Creating Rich Interactive Avatars
Mei Si
RPI
January 17, 2013
12:50-1:50
Olin 107
Lunch will be served at noon in Olin 107.

Abstract

Storytelling is an important aspect of the human experience. With the rapid advancement of computer technologies, virtual environments have become increasingly capable of providing vivid, fictional worlds in which users can immerse themselves and interact with characters controlled either by other users or by an AI system. In recent years, games that emphasize the social and narrative aspects of the player's experience have become increasingly popular, as evidenced by major titles such as Heavy Rain, Mass Effect, and BioShock. Game designers have been looking for ways to use rich characters and narratives to engage the player and to provide the central experience of the game. Interactive digital avatars have also been widely used for training and pedagogical purposes, ranging from math and physics tutoring to language and social skill training, and from lifestyle suggestions to PTSD and autism interventions.

The design of interactive avatars in such narrative-rich environments faces many challenges. Unlike traditional linear narratives, in which only a single story path is presented to the audience, support for user interactivity creates many alternative paths through a story and many variations of the interactions. Accounting for all of these contingencies, and making sure the characters behave appropriately in each of them, is extremely time-consuming. As a result, designers often have to sacrifice interactivity to ensure a satisfying experience.

This talk presents two AI systems for automating the design of interactive avatars. The systems will be presented through case studies from the author's past and current projects, so that the audience can get a better sense of how they may be applied to different authoring scenarios.

The first system is an agent-based framework, Thespian, for authoring and simulating interactive narratives [3, 4, 5], in which the user can take a role and interact with other characters controlled by AI agents. Thespian models the characters in a story as decision-theoretic, goal-based agents with a Theory of Mind. These agents are capable of understanding social norms and social emotions. They can project into the future, estimate the reactions of other characters, including a human user, to their potential moves, and make decisions based on these projections. Using human-like characters alone does not guarantee an inspiring or dramatic narrative experience, so Thespian also contains an automated drama/story management system that coordinates the agents in real time to direct the interaction towards the author's desired pedagogical or dramatic effects. Finally, Thespian provides automated means for configuring and testing virtual characters, and thus supports fast development of interactive avatars in the face of open-ended user interaction.
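To give a flavor of this style of reasoning, the sketch below (in Python) shows a simplified lookahead decision procedure in the spirit of Thespian's decision-theoretic agents with a one-level Theory of Mind. All names, data structures, and the transition function here are hypothetical illustrations, not Thespian's actual implementation.

    # Illustrative sketch only: decision-theoretic lookahead with a simple
    # Theory of Mind. Everything here is a hypothetical simplification.

    def apply_action(state, action):
        """Hypothetical transition: an action is a dict of feature deltas."""
        new_state = dict(state)
        for feature, delta in action.items():
            new_state[feature] = new_state.get(feature, 0.0) + delta
        return new_state

    class Character:
        def __init__(self, name, goals, models_of_others=None):
            self.name = name
            self.goals = goals                    # state feature -> importance weight
            self.models = models_of_others or {} # Theory of Mind: name -> Character

        def utility(self, state):
            # Goal satisfaction as a weighted sum of state features.
            return sum(w * state.get(f, 0.0) for f, w in self.goals.items())

        def predict(self, other, state, other_actions):
            # Predict the other character's response using a mental model of them.
            model = self.models[other]
            return max(other_actions,
                       key=lambda a: model.utility(apply_action(state, a)))

        def choose(self, state, my_actions, other, other_actions, horizon=2):
            # Project each candidate action forward, simulate the other
            # character's (or user's) predicted reply, and pick the best.
            def value_of(action, current, depth):
                s1 = apply_action(current, action)
                reply = self.predict(other, s1, other_actions)
                s2 = apply_action(s1, reply)
                if depth <= 1:
                    return self.utility(s2)
                return max(value_of(a, s2, depth - 1) for a in my_actions)
            return max(my_actions, key=lambda a: value_of(a, state, horizon))

Thespian itself is built on top of the PsychSim multi-agent framework and models much richer dynamics, including social norms and emotions; the sketch only illustrates the core project-and-evaluate loop.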

The second AI system is designed specifically for creating interactive storytelling experiences [1, 2]. In this case, the AI agent tells a story to the user. The user can comment and ask questions, but his/her actions cannot affect the development of the story. Nevertheless, the storytelling is interactive: a good storyteller observes the listener's responses and adjusts the emphasis of the telling accordingly, and this system aims to achieve the same effect with an automated AI agent. The agent has a model of the story and profiles of the user's interests. When the user comments on or asks questions about the story, this information is used to update the profiles, and based on the estimated interests, the storytelling agent fine-tunes the content of the story in real time. Because the user does not directly interact with the characters in the story, the system is lightweight and does not contain extensive models of the characters. We will use the telling of a Chinese story, the Painted Skin, as the example domain for this system.
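As a rough illustration of how such an interest model might drive the telling, the sketch below keeps a weight per story topic, nudges the weights toward topics the user comments on, and picks the next story beat that best matches the estimated interests. The topics, beats, and update rule are invented for illustration and are not the actual system's representation of the Painted Skin.

    # Illustrative sketch only: interest-profile-driven story adaptation.

    class StorytellingAgent:
        def __init__(self, beats, topics, learning_rate=0.3):
            # Each beat is (text, {topic: relevance}); interest starts uniform.
            self.beats = list(beats)
            self.interest = {t: 1.0 / len(topics) for t in topics}
            self.lr = learning_rate

        def observe(self, comment_topics):
            # Nudge interest weights toward the topics a user comment touches.
            for topic in self.interest:
                target = 1.0 if topic in comment_topics else 0.0
                self.interest[topic] += self.lr * (target - self.interest[topic])

        def _score(self, beat):
            text, relevance = beat
            return sum(self.interest.get(t, 0.0) * r for t, r in relevance.items())

        def next_beat(self):
            # Tell the remaining beat that best matches the estimated interests.
            # (A real system would also respect the story's ordering constraints.)
            best = max(self.beats, key=self._score)
            self.beats.remove(best)
            return best[0]

    # Hypothetical usage with two beats loosely inspired by the Painted Skin:
    agent = StorytellingAgent(
        beats=[("The demon peels off and repaints its human skin...",
                {"horror": 1.0}),
               ("Wang's wife pleads with the mad beggar to save him...",
                {"family": 1.0, "horror": 0.2})],
        topics=["horror", "family"])
    agent.observe({"family"})   # the user asked a question about Wang's wife
    print(agent.next_beat())    # the family-themed beat is now preferred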

Finally, in this talk I will discuss AI techniques for automatically creating facial expressions, gestures, and body language for the avatars. Animation is traditionally done manually by human animators and is extremely time-consuming. I will present the techniques that have been applied for creating the avatars in our past projects and discuss our motivation for adopting them.

References:

[1] Barron, M. and Si, M. Towards Interest and Engagement: A Framework for Adaptive Storytelling. In Proceedings of the 5th Workshop on Intelligent Narrative Technologies (INT), co-located with the 8th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Palo Alto, California, 2012.

[2] Chang, B., Sheldon, L. and Si, M. Foreign Language Learning in Immersive Virtual Environments. In Proceedings of IS&T/SPIE Electronic Imaging, Burlingame, CA, 2012.

[3] Si, M. and Marsella, S.C. Modeling Rich Characters in Interactive Narrative Games. In Proceedings of GAMEON-ASIA, Shanghai, China, 2010.

[4] Si, M., Marsella, S.C. and Pynadath, D.V. Directorial Control in a Decision-Theoretic Framework for Interactive Narrative. In Proceedings of the International Conference on Interactive Digital Storytelling (ICIDS), Guimarães, Portugal, 2009. (Best Paper Award)

[5] Si, M., Marsella, S.C. and Pynadath, D.V. Thespian: Using Multi-Agent Fitting to Craft Interactive Drama. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 21-28, Utrecht, The Netherlands, 2005.

Bio:

Mei Si is an assistant professor in the Cognitive Science Department at Rensselaer Polytechnic Institute (RPI). She is also part of the Games and Simulation Arts and Sciences (GSAS) Program at RPI.

Mei Si received a Ph.D. in Computer Science from the University of Southern California and an M.A. in Psychology from the University of Cincinnati. Her primary research interest is virtual and mixed realities for games, training, and health interventions. She is interested in using AI technologies to make virtual environments more engaging and more effective. Mei has more than seven years of experience developing virtual environments and intelligent conversational agents for serious games. She worked on the Tactical Language Training System, a large-scale, award-winning project (winner of the 2007 DARWARS award) funded by the US military for rapid language and culture training, comprising six to twelve scenes each for three languages: Lebanese Arabic, Iraqi Arabic, and Pashto. The system has been used by thousands of military personnel. She has also worked on SAFE, an NIMH-funded project for HIV intervention using interactive avatars.

Mei Si is also interested in studying how user interaction and user interfaces should be designed to make learning experiences more effective. She is working on developing pervasive user interfaces that can detect the user's facial expressions, gestures, and emotions in a non-invasive fashion during the interaction, and that can provide feedback beyond visual display and sound, such as haptic feedback. She has also been exploring the use of cognitive robots to physically embody the characters and augment the interactive experience.


Please email Kristina Striegnitz (striegnk@union.edu) if you have any questions concerning the seminar series or if you would like to receive the seminar announcements by email.