Director

Roger Levy

My research focuses on theoretical and applied questions in the processing of natural language. Inherently, linguistic communication involves the resolution of uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be deployed to manage this uncertainty? To address such questions, I use a combination of computational modeling and psycholinguistic experimentation. This work furthers our understanding of the cognitive underpinnings of language processing and helps us design models and algorithms that will allow machines to process human language.

Graduate Students

Klinton Bicknell

My research focuses mostly on two questions about human sentence processing: (1) What sources of information are available to the human sentence processor, and how quickly are they used? (2) How are these various sources combined, and how is conflict between them resolved? To answer these questions, I make use of a broad range of methodologies, including behavioral paradigms, neurophysiological recordings, and computational modeling.

Rebecca Colavin

My main area of interest is computational phonology. Currently, I am interested in phonotactics, the set of language-specific rules that determine the acceptability of sound sequences. In particular, I am investigating the nature of the phonotactic grammar and the relationship between lexical frequency and gradient speaker judgments.

Gabriel Doyle

I'm intrigued by a trio of psycholinguistic questions: what information do speakers know about their language, how do they learn this information, and how do they use it? I investigate these questions with a trifecta of models, building mixed-effects regression models to uncover the factors that drive speaker choice, Bayesian models for artificial-language learning experiments, and topic models for anything I can find.

Albert Park

I am currently focusing on the problem of natural language parsing using probabilistic methods. I am intrigued by the capacity that people have to use language, and believe that to replicate these capabilities in machines we will need to create models based on the way human brains process language.

Nathaniel Smith

I am interested in how people manage to deploy language, a fantastically complicated system, in the real world, an even more complex environment with radically different structure. Recently, I've been studying probabilistic models as a potential piece of the mechanism linking these domains.

Collaborators

Galen Andrew, Microsoft Research

Sarah Bunin Benor, Hebrew Union College

Sarah Creel, UC San Diego

Hal Daumé III, University of Maryland

Charles Elkan, UC San Diego

Jeff Elman, UC San Diego

Evelina Fedorenko, MIT

Victor Ferreira, UC San Diego

Ted Gibson, MIT

Noah Goodman, Stanford

Tom Griffiths, UC Berkeley

Mary Hare, Bowling Green State University

T. Florian Jaeger, University of Rochester

Andy Kehler, UC San Diego

Frank Keller, University of Edinburgh

Marta Kutas, UC San Diego

Chris Manning, Stanford

Ken McRae, University of Western Ontario

Keith Rayner, UC San Diego

Florencia Reali, UC Berkeley

Hannah Rohde, University of Edinburgh

Tim Slattery, University of South Alabama

Tom Wasow, Stanford