An introduction to spoken dialog systems, and some recent developments at AT&T
Jason Williams
AT&T Labs Research
November 6, 2008
12:50-1:50
Abstract

Spoken dialog systems interact with people using spoken language to help them do something, like controlling the music in a car or calling a phone number stored in a cellphone. Building these systems is deceptively difficult because speech recognition technology introduces recognition errors, and because users behave in unexpected ways. As a result, the dialog system can never be certain of what the user really wants, yet it must make progress toward completing the user's goal over the course of the dialog. These properties make this an interesting application of research in artificial intelligence, machine learning, human-computer interaction, and computer science generally.

In this talk I'll first introduce spoken dialog systems and explain why building them is hard. Then I'll talk about some recent advances at AT&T Labs Research, where we have been applying "partially observable Markov decision processes" (POMDPs) to this problem. POMDPs maintain a probability distribution over many competing dialog hypotheses and choose system actions using reinforcement learning. Experimental results confirm that, together, these techniques help provide robustness to speech recognition errors. This talk will include a few demonstrations that illustrate the challenges and the methods we have been developing.
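To make the belief-tracking idea concrete, here is a minimal sketch (an illustration of the general POMDP approach, not code from the talk) of a Bayesian belief update over a handful of invented dialog hypotheses. The hypotheses, probabilities, and confusion model are made up, and it ignores user goal changes and system actions; it only shows how probability mass shifts after a noisy recognition result.

```python
# Hypotheses about what the user wants (e.g., which music to play in the car).
hypotheses = ["play_jazz", "play_rock", "play_pop"]

# Prior belief: initially the system is equally unsure about all hypotheses.
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}

# Toy observation model: P(recognized phrase | true user goal).
# Speech recognition is error-prone, so every goal can be misrecognized.
obs_model = {
    "play_jazz": {"jazz": 0.7, "rock": 0.2, "pop": 0.1},
    "play_rock": {"jazz": 0.2, "rock": 0.6, "pop": 0.2},
    "play_pop":  {"jazz": 0.1, "rock": 0.2, "pop": 0.7},
}

def update_belief(belief, observation):
    """Bayesian belief update: b'(h) is proportional to P(observation | h) * b(h)."""
    unnormalized = {h: obs_model[h][observation] * belief[h] for h in belief}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# The recognizer reports "rock", but because of recognition errors this is not certain.
belief = update_belief(belief, "rock")
print(belief)  # Mass shifts toward play_rock, yet the other hypotheses survive.
```

Because every hypothesis keeps some probability, a reinforcement-learned policy can act on the whole distribution (for example, confirming when the belief is spread out) rather than trusting only the top recognition result.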

Jason Williams is a researcher at AT&T Labs Research. He received his PhD from Cambridge University in 2006. (See his homepage.)

Lunch will be provided at noon.

Please email Kristina Striegnitz (striegnk@union.edu) if you have any questions concerning the seminar series or if you would like to receive the seminar announcements by email.