11th International Conference on Text, Speech and Dialogue
TSD 2008, Brno, Czech Republic, September 8–12 2008
 
TSD 2008 Keynote Speakers

Jerry Hobbs

University of Southern California / Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695
USA

hobbs@isi.edu

Keynote topic - Deep Lexical Semantics

Abstract:

The link between words and the world is made easier if we have conceptualized the world in a way that language indicates. In the effort I will describe, we have constructed a number of core formal theories, trying to capture the abstract structure that underlies language and to enable literal and metaphorical readings to be seen as specializations of the abstract structures. In the core theories, we have axiomatized composite entities (or things made out of other things), the figure-ground relation, scalar notions (of which space, time and number are specializations), change of state, causality, and the structure of complex events and processes. These theories explicate the basic predicates in terms of which the most common word senses need to be defined or characterized. We are now encoding axioms that link the word senses to the core theories, focusing on 450 word senses in Core WordNet that are primarily concerned with events and their structure. This may be thought of as a kind of "advanced lexical decomposition", where the "primitives" into which words are "decomposed" are elements in coherently worked-out theories.
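To illustrate what such a linking axiom might look like, here is a sketch in Hobbs-style eventuality notation (my own illustrative example, not an axiom from the project itself): the verb sense *break* characterized in terms of the core theories of change of state and causality.

```latex
% Hypothetical linking axiom: "x breaks y" is x causing a change of
% state in y into a state in which y is broken. The primed predicates
% follow the eventuality convention (e is the event/state itself).
(\forall\, e, x, y)\;\big[\, \mathit{break}'(e, x, y) \supset
  (\exists\, e_1)\,[\, \mathit{changeTo}'(e_1, \mathit{broken}(y))
    \wedge \mathit{cause}(x, e_1) \,]\,\big]
```

The predicates changeTo and cause would be defined not in the lexical axiom itself but in the core theories of change of state and causality, which is what distinguishes this approach from classical lexical decomposition into an ad hoc set of primitives.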

Jerry Hobbs's Biography

Dr. Jerry R. Hobbs is a prominent researcher in the fields of computational linguistics, discourse analysis, and artificial intelligence. He earned his Ph.D. in computer science from New York University in 1974. He has taught at Yale University and the City University of New York. From 1977 to 2002 he was with the Artificial Intelligence Center at SRI International, Menlo Park, California, where he was a principal scientist and program director of the Natural Language Program. He has written numerous papers in the areas of parsing, syntax, semantic interpretation, information extraction, knowledge representation, encoding commonsense knowledge, discourse analysis, the structure of conversation, and the Semantic Web. He is the author of the book "Literature and Cognition" and editor of the book "Formal Theories of the Commonsense World". He led SRI's text-understanding research and directed the development of the abduction-based TACITUS system for text understanding and of the FASTUS system for rapid extraction of information from text based on finite-state automata. The latter system formed the basis for an SRI spinoff, Discern Communications. In September 2002 he took a position as senior computer scientist and research professor at the Information Sciences Institute, University of Southern California. He has been a consulting professor with the Linguistics Department and the Symbolic Systems Program at Stanford University. He has served as general editor of the Ablex Series on Artificial Intelligence. He is a past president of the Association for Computational Linguistics and a Fellow of the American Association for Artificial Intelligence. In January 2003 he was awarded an honorary Doctorate of Philosophy from the University of Uppsala, Sweden.




Elizabeth Shriberg

Speech Technology & Research Laboratory
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025, USA

and

International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704

ees@speech.sri.com, ees@icsi.berkeley.edu

Keynote topic - Challenges to Using Prosody in Automatic Language Processing

Abstract:

Spoken language is an increasingly important input modality for automatic language processing tasks. However, much work involving speech as input still uses only the words, failing to harness the prosodic information that humans use for many communicative functions. This talk examines ten common reasons for not modeling prosody in natural language processing, and argues that the time is ripe to reconsider. The arguments are based on a generalized framework for feature extraction and development, familiar machine learning techniques for modeling, and successes in a number of applications (e.g., sentence segmentation, emotion recognition, speaker classification) relevant to long-term goals in the processing of spoken language.
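To make the kind of modeling the abstract alludes to concrete, here is a minimal sketch (my own, with invented thresholds and toy measurements, not code from the talk) of extracting two classic prosodic cues at each word boundary, pause duration and pitch reset, and applying a simple rule for sentence segmentation:

```python
def extract_features(word_pairs):
    """For each adjacent word pair, compute simple prosodic cues:
    the silent pause between the words (seconds) and the pitch reset
    (F0 at the onset of the next word minus F0 at the offset of the
    previous word, in Hz). Input dicts use hypothetical keys
    'end', 'start', 'f0_offset', 'f0_onset'."""
    feats = []
    for prev, nxt in word_pairs:
        feats.append({
            "pause": nxt["start"] - prev["end"],
            "pitch_reset": nxt["f0_onset"] - prev["f0_offset"],
        })
    return feats

def is_sentence_boundary(f, pause_thresh=0.3, reset_thresh=40.0):
    """A long pause or a large upward pitch reset suggests a sentence
    boundary. Thresholds are illustrative only."""
    return f["pause"] > pause_thresh or f["pitch_reset"] > reset_thresh

# Toy data: a 0.5 s pause with a 60 Hz pitch reset, then a 0.05 s gap.
pairs = [
    ({"end": 1.0, "f0_offset": 120.0}, {"start": 1.5, "f0_onset": 180.0}),
    ({"end": 2.0, "f0_offset": 150.0}, {"start": 2.05, "f0_onset": 155.0}),
]
feats = extract_features(pairs)
print([is_sentence_boundary(f) for f in feats])  # → [True, False]
```

In practice such features would feed a trained statistical classifier rather than fixed thresholds; the point of the sketch is only that prosodic cues reduce to ordinary numeric features once extracted, so familiar machine learning machinery applies directly.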

Elizabeth Shriberg's Biography

Elizabeth Shriberg is a Senior Researcher in the speech groups at both SRI International and the International Computer Science Institute, and an Adjunct Associate Professor at CASL, U. Maryland. She received a Ph.D. in Cognitive Psychology from U.C. Berkeley (1994) and was an NSF-NATO postdoc at IPO (the Netherlands, 1995). Her main interest is spontaneous speech. Her work aims to combine linguistic knowledge with corpora and techniques from speech and speaker recognition, to advance both scientific understanding and recognition technology. Over the last decade she has led projects on modeling disfluencies, punctuation, dialog, emotion, and speakers, using lexical and prosodic features. She has published over 150 journal and conference papers in speech science, speech technology, and related fields. She serves as an Associate Editor of Language and Speech, on the boards of Speech Communication and other journals, on the ISCA Advisory Council, and on the ICSLP Permanent Council.




Graeme Hirst

Department of Computer Science
University of Toronto
Toronto, Ontario M5S 3G4
Canada

gh@cs.toronto.edu

Keynote topic - The Future of Text-Meaning in Computational Linguistics

Abstract:

Writer-based and reader-based views of text-meaning are reflected by the respective questions "What is the author trying to tell me?" and "What does this text mean to me personally?" Contemporary computational linguistics, however, generally takes neither view; applications do not attempt to answer either question. Instead, a text is regarded as an object that is independent of, or detached from, its author or provenance, and as an object that has the same meaning for all readers. This is not adequate, however, for the further development of sophisticated NLP applications for intelligence gathering and question answering. I will discuss different views of text-meaning from the perspective of the needs of computational text analysis, and then extend the analysis to include discourse as well, in particular the collaborative construction of meaning and the collaborative repair of misunderstanding.

Graeme Hirst's Biography

Research in computational linguistics:

Our research in computational linguistics emphasizes issues in lexical semantics, pragmatics, and the social use of language that arise when the methods of computational linguistics are applied to real-world language and real-world problems. The ultimate goal of our research is the development of better computational models of language for use in human–computer interaction and in applications such as information retrieval, text analysis, and machine translation. Two applications that have been especially important in our work are intelligent correction of real-word spelling errors and intelligent linguistic assistance to disabled users.

Several themes underlie the approaches that we take. First, we are concerned with fine-grained nuances of language, as it is really used in the world. Second, there is an emphasis on problems of representation of linguistic and semantic knowledge. Third, the approaches taken are inherently interdisciplinary; the work draws on research in psycholinguistics, philosophy, theoretical linguistics, and sociology. This orientation is particularly suited for research in such applications as machine translation and, more generally, in generation systems in which precision in language is important; in advice-giving systems; in processing long documents for conceptual retrieval; in intelligent tools for writers; and in knowledge acquisition by reading.





