My vision is to build computer systems that can have fluent conversations with humans using English, or another human language, as well as gestures and other non-verbal behaviors.
To have a fluent conversation, the system needs to be aware of the context in which the dialog is taking place. This context includes the human dialog partner, the previous dialog, and the surrounding environment. The system should use appropriate linguistic and non-linguistic means to relate to this context. For example, it needs to be able to refer to objects in the environment, and it needs to react to what the user says and does, as well as to other changes in the context.
In recent projects, for example, I have worked on a system that automatically generates English instructions to help a human user solve a task in a 3D virtual environment, and I have helped build an animated figure that automatically generates the words and gestures needed to give walking directions across a college campus. My research combines computational modeling and implementation with studies of human communication.
ANTE: A Four-Tier Framework to Boost Visual Literacy for High Dimensional Data
The goal of this project is to develop a system that presents complex, multi-dimensional data in such a way that novice users can make sense of it. The system will automatically generate narratives that convey relationships in the data through a sequence of visualizations accompanied by natural language text.