University of Rochester

Colloquia Series

2017


September 11, 2017
Hitting a point and wiping a region: The argument realization of manner verbs

Lattimore 513

Beth Levin
Department of Linguistics, Stanford University

As Fillmore and others have observed, verbs with similar meanings often show characteristic argument realization patterns, that is, shared patterns of morphosyntactic distribution. This observation has suggested that these patterns follow from common facets of meaning (Fillmore 1971, Levin & Rappaport Hovav 1995, Pinker 1989), attributed largely to the verb's 'root'. This proposal is challenged by observations that verbs actually are found in a wide variety of syntactic contexts, suggesting that they can simply be inserted into any syntactic context and their roots do not have a 'say' in the matter. On this approach, unacceptable root-syntactic context combinations are ruled out due to an incompatibility between the two (Acedo-Matellan & Mateu 2013, Borer 2003, Goldberg 1995, Hoekstra 1992, Mateu & Acedo-Matellan 2012). Such incompatibilities are often explained by appeal to real world knowledge, but details remain to be fleshed out.

I revisit this challenge in the context of recent work on the semantic underpinnings of argument realization. I acknowledge that the empirical landscape is more complex than studies of argument realization in the '90s assumed, but I show that verbs nevertheless display significant semantic-class-specific distributional patterns. I take these patterns as a reason to still pursue an account in which the verb's root contributes to determining its argument realization options.

First, I review the well-known, systematic asymmetries that involve what have been called manner vs. result verbs, exemplified by hit and break, respectively (Rappaport Hovav & Levin 2010). Then, I turn to less well-known, but equally systematic asymmetries between two types of manner verbs represented by the verbs hit and wipe. The break/hit asymmetries have been used to support the proposal that roots come with a grammatically relevant ontological type. I further argue that some manner roots select for an 'argument' (cf. the 'constant participant' of Levin 1999), and that hit and wipe impose different demands on such an argument. Informally, wipe requires it to be a 'region' and hit a 'point'.

I propose that the distribution of roots and, hence, verbs, across syntactic contexts is determined by a cluster of interacting factors, including the ontological type of the root. The diversity of syntactic contexts that many verbs are found in can largely be attributed to the expression of three major types of events of scalar change (Hay, Kennedy & Levin 1999). Further, as suggested in RH&L (2010), the argument that a scalar change is predicated of must be realized as an object. As RH&L discuss, this requirement is the source of distributional differences between break vs. hit/wipe. I argue that further distributional differences reflect the nature of the scalar change involved, especially among the hit/wipe verbs. I trace the differential syntactic behavior of wipe and hit to the distinct types of 'argument' their roots require, which in turn results in wipe, but not hit, having an object which is a potential incremental theme. Finally, I consider which facets of world knowledge might further constrain the attested argument realization options from among those that the more 'grammatical' constraints allow.



May 9, 2017
Probabilistic prosody: Context effects and perceptual recovery of (supra)segmental linguistic information

301B Meliora Hall

Laura Dilley, Ph.D.
Associate Professor
Michigan State University

It has been proposed that the brain is a complex prediction engine which attempts to minimize prediction error through adaptive recapitulation of a signal source and comparison with incoming sensory information. Context effects are well-known in perception, but context effects due to prosody, i.e., rhythm, pitch, and timing, are relatively under-studied. In this talk I discuss how context prosody provides a strikingly robust basis for prediction of linguistic content, structure, and use in sometimes surprising ways. It is argued that predictions enabled by context prosody are crucial to understanding the speech chain from speaker to listener. Moreover, it is argued that examination of individual differences in sensitivity to context prosody can provide a window into mechanisms for language perception, including the extent to which mechanisms may be domain-specific, i.e., uniquely dedicated to processing language, as opposed to domain-general. The speech signal is often highly ambiguous and underdetermined with respect to phonetic and lexical content and structure, and context prosody imposed by the speaker is argued to be a critical piece to the puzzle for understanding how listeners develop accurate neural predictions about a speaker's intended message.



April 7, 2017
Symposium on American Indian Languages (SAIL)

Room 1829 & Alumni Room located in the Student Alumni Union (SAU/004) building, Rochester Institute of Technology

The Symposium on American Indian Languages (SAIL) is dedicated to discussion of the documentation, conservation and revitalization of the native languages of the Americas.

SAIL also provides a forum for the exchange of scholarly research on descriptive and/or theoretical linguistics focusing on American Indian languages.

SAIL brings together scholars, members of the indigenous communities, native speakers, educators and language activists who are interested in sharing experiences and best practices on topics related to language documentation, conservation and revitalization.

Building on RIT's rich history of educational outreach to Native American communities, SAIL welcomes the active participation of indigenous communities, native language speakers, and those interested in revitalization and preservation of their heritage languages and cultures.

The theme for this year's SAIL is “Language Revitalization Strategies in the Americas: Challenges, Success and Pitfalls”.

Visit the SAIL website for more details.



April 5, 2017
The grammaticalization of the iterative marker -ná- in Navajo

Lattimore 513

Jalon Begay
Navajo Language Program & Department of Linguistics, University of New Mexico

The complex and puzzling nature of the Athabaskan verb has challenged and fascinated scholars for more than a century. The verbal morphology has been described as having unpredictable inflectional and derivational prefixes that are motivated by 'nonlocalized' dependencies within a larger templatic composition (Rice 2000: 1, 9). When observed synchronically, the unpredictability and irregularity cannot be described as a linear concatenation of affixes to a verb stem (root + aspectual suffixes). What we find instead is a wide and varied range of fixed, discontinuous sets of prefixal strings that combine with optional prefixes, which seem to be inserted as required. From a synchronic perspective, these elements basically undermine any fruitful analyses that stipulate syntactic derivation and grammatical or semantic scope (cf. Mithun 2000: 236). This paper attempts to reconcile the synchronic facts of the templatic morphology with language change processes that are well known in grammaticalization theory (see, e.g., Hopper & Traugott 2003[1993]; Lehmann 2015). In particular, the analysis focuses on the Navajo iterative marker -ná-. Navajo is known for the complex allomorphy and homophony found among its inflectional and derivational prefixes. It is often assumed and noted that such morphemes are unrelated, coincidences arrived at via phonological processes (Kari 1989). However, on closer inspection, one finds many examples of semantic extension and radially structured categories (e.g., Lakoff 1987; Panther and Thornburg 2001), as shown in (1).

(1)

a. Iterative aspect
T??? ??kw??b?n? gohw??h n?shdlįį́ h́
. t??? ??kw??-b?n? gohw??h n?-?-sh-d-lįį́h́
PART every-morning coffee ITER-3OBJ-1SUBJ-VL-drink.USIT
'I (usually) drink coffee every morning.'

b. Semeliterative aspect
N??sh?dl??zh.
n?-?-si-sh-?-dl??zh
SEM-3OBJ-ASP-1SUBJ-VL-paint.PERF
'I painted it again.' (or 'I repainted it.')

c. Reversionary aspect
Hooghandi nádzá.
hooghan-di n?-?-d-y?.
home-ENC REV-3SUBJ-VL-go.PERF
'S/he returned home.' (or 'S/he came back home.')

Since the iterative marker ostensibly overlaps with other aspectual categories and lexical classes, I argue that -ná- is exemplary of grammaticalization pathways (cf. Heine et al. 1991) and what has been termed 'synchronic' grammaticalization (Robert 2004; cf. Craig 1991, for polygrammaticalization). Namely, I show the postpositional sources for the aspectual phenomena in (1), e.g. -naa (~ naa- ~ na- ~ ne- ~ ni- ~ n-) 'around, in the surrounding' and/or -n? (~ n?- ~ n?- ~ n?- ~ ń-) 'around encircling'.

This study also proposes that many of the polysemous and homophonous forms of the Navajo verb complex can be accounted for by seeking out all probable sources and divergences (cf. Gaeta 2010). Usually, most apparently homophonous morphemes can be sourced back to monosyllabic nouns, verbal stems, or 'preverbal elements.' Lastly, the pathways can extend over several layers of grammaticalization processes. Therefore, a lexical source and its perceptible derivatives commonly (can) cohabit within a single linguistic period.



February 28, 2017

301B Meliora Hall

Andres Buxo-Lugo
PhD student
University of Illinois at Urbana-Champaign



February 23, 2017

301B Meliora Hall

Eleanor Chodroff
Department of Cognitive Science, Johns Hopkins University



February 2, 2017
Mixed Effects Model Tutorial

301B Meliora Hall

Amelia Kimball
Department of Linguistics, University of Illinois at Urbana Champaign

Mixed effects models are widespread in language science because they allow researchers to incorporate participant and item effects into their regression. These models can be robust, useful and statistically valid when used appropriately. However, a mixed effects regression is implemented with an algorithm, which may not converge on a solution. When convergence fails, researchers may be forced to abandon a model that matches their theoretical assumptions in favor of a model that converges. We argue that the current state of the art of simplifying models in response to convergence errors is not based in good statistical practice, and show that this may lead to incorrect conclusions. We propose implementing mixed effects models in a Bayesian framework. We give examples of two studies in which the maximal mixed effects models justified by the design do not converge, but fully specified Bayesian models with weakly informative constraints do converge. We conclude that a Bayesian framework offers a practical--and, critically, a statistically valid--solution to the problem of convergence errors.
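
The contrast drawn here, between frequentist mixed-effects fits that fail to converge and fully specified Bayesian models with weakly informative priors that do, can be pictured with a small sketch. The code below is not from the talk: it is a hypothetical Python/PyMC example on simulated data, with made-up variable names and, for brevity, varying intercepts only (a maximal model would add by-participant and by-item slopes).

```python
# Minimal hypothetical sketch (not from the talk): a Bayesian mixed-effects
# model with weakly informative priors, fit with PyMC on simulated data.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(1)
n_subj, n_item = 20, 16
subj = np.repeat(np.arange(n_subj), n_item)      # participant index per trial
item = np.tile(np.arange(n_item), n_subj)        # item index per trial
cond = rng.integers(0, 2, size=subj.size)        # binary condition
# Simulated log reading times with participant and item intercepts
rt = (6.0 + 0.1 * cond
      + rng.normal(0, 0.2, n_subj)[subj]
      + rng.normal(0, 0.1, n_item)[item]
      + rng.normal(0, 0.3, subj.size))

with pm.Model():
    # Weakly informative priors on fixed effects and variance components
    intercept = pm.Normal("intercept", mu=6.0, sigma=1.0)
    beta_cond = pm.Normal("beta_cond", mu=0.0, sigma=0.5)
    sd_subj = pm.HalfNormal("sd_subj", sigma=0.5)
    sd_item = pm.HalfNormal("sd_item", sigma=0.5)
    sd_resid = pm.HalfNormal("sd_resid", sigma=0.5)
    # Varying intercepts ("random effects") for participants and items
    u_subj = pm.Normal("u_subj", mu=0.0, sigma=sd_subj, shape=n_subj)
    u_item = pm.Normal("u_item", mu=0.0, sigma=sd_item, shape=n_item)
    mu = intercept + beta_cond * cond + u_subj[subj] + u_item[item]
    pm.Normal("rt", mu=mu, sigma=sd_resid, observed=rt)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(az.summary(trace, var_names=["intercept", "beta_cond", "sd_subj", "sd_item"]))
```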



February 2, 2017
Categorical vs. Episodic Memory for Pitch Accents in American English

301B Meliora Hall

Amelia Kimball
Department of Linguistics, University of Illinois at Urbana Champaign

Phonological accounts of speech perception postulate that listeners map variable instances of speech to categorical features and remember only those categories. Other research maintains that listeners perceive and remember subcategorical phonetic detail. Our study probes memory to investigate the reality of categorical encoding for prosody—when listeners hear a pitch accent, what do they remember? Two types of prosodic variation are tested: phonological variation (presence vs. absence of a pitch accent), and variation in phonetic cues to pitch accent (F0 peak, word duration). We report results from six experiments that test memory for phonological pitch accent vs. phonetic cues. Our results suggest that listeners encode both categorical distinctions and phonetic detail in memory, but categorical distinctions are more reliably retrieved than cues in later tests of episodic memory. They also show that listeners may vary in the degree to which they remember prosodic detail.



January 23, 2017
Models of retrieval in sentence comprehension: A computational evaluation using Bayesian hierarchical models

301B Meliora Hall

Bruno Nicenboim
PhD Graduate Student
Department of Linguistics, University of Potsdam, Germany

Research on similarity-based interference has provided extensive evidence that the formation of dependencies between non-adjacent words relies on a cue-based retrieval mechanism. There are two different models that can account for one of the main predictions of interference, i.e., a slowdown at a retrieval site when several items share a feature associated with a retrieval cue: Lewis and Vasishth's (2005) activation-based model and McElree's (2000) direct access model. Even though these two models have been used almost interchangeably, they are based on different assumptions and predict differences in the relationship between reading times and response accuracy. The activation-based model follows the assumptions of the ACT-R framework, and its retrieval process behaves as a lognormal race between accumulators of evidence with a single variance. Under this model, accuracy of the retrieval is determined by the winner of the race and retrieval time by its rate of accumulation. In contrast, the direct access model assumes a model of memory where only the probability of retrieval can be affected, while the retrieval time is constant; in this model, differences in latencies are a by-product of the possibility of backtracking and repairing incorrect retrievals. We implemented both models in a Bayesian hierarchical framework in order to evaluate and compare them. We show that some aspects of the data are better fit under the direct access model than under the activation-based model. We suggest that this finding does not rule out the possibility that retrieval may be behaving as a race model with assumptions that follow the ACT-R framework less closely. We show that by introducing a modification of the activation model, i.e., by assuming that the accumulation of evidence for retrieval of incorrect items is not only slower but also noisier (i.e., different variances for the correct and incorrect items), the model can provide a fit as good as that of the direct access model.
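
As a rough illustration of the two model families compared here (my own hypothetical simulation, not the authors' implementation), the sketch below draws retrieval latencies from a lognormal race between a correct and an incorrect accumulator: a single shared variance corresponds to the ACT-R-style activation model, while giving the incorrect accumulator a larger variance corresponds to the proposed modification. The winner of the race determines accuracy, and its finishing time determines the latency.

```python
# Hypothetical sketch of a lognormal race between a correct and an incorrect
# retrieval accumulator. Not the authors' code; parameter values are made up.
import numpy as np

def lognormal_race(mu_correct, mu_incorrect, sd_correct, sd_incorrect, n=100_000, seed=0):
    """Simulate finishing times; the winner determines accuracy and latency."""
    rng = np.random.default_rng(seed)
    t_correct = rng.lognormal(mu_correct, sd_correct, n)
    t_incorrect = rng.lognormal(mu_incorrect, sd_incorrect, n)
    correct_wins = t_correct < t_incorrect
    latency = np.minimum(t_correct, t_incorrect)
    return correct_wins.mean(), latency[correct_wins].mean(), latency[~correct_wins].mean()

# Activation-style model: both accumulators share a single variance
print(lognormal_race(mu_correct=-0.5, mu_incorrect=0.0, sd_correct=0.5, sd_incorrect=0.5))
# Modified model: evidence for incorrect items accumulates more slowly and more noisily
print(lognormal_race(mu_correct=-0.5, mu_incorrect=0.0, sd_correct=0.5, sd_incorrect=0.9))
```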


2016


December 9, 2016

Lattimore 513

Abby Cohn
Professor, Linguistics
Cornell University



December 2, 2016
The memory retrieval process in reflexive-antecedent dependency resolution

Lattimore 513

Zhong Chen
Assistant Professor, Department of Modern Languages & Cultures
RIT



October 6, 2016
Singing in tone languages: from mystery to research question(s)

Meliora 366

D. Robert Ladd
Professor, Linguistics and English Language
The University of Edinburgh

Singing in tone languages, a perennial source of mystery to speakers of non-tonal languages, has been the subject of a good deal of research since the turn of the century. This research shows that the solution to respecting both the linguistic (tonal) and musical functions of pitch crucially involves text-setting constraints. Specifically, in most of the dozen or more Asian and African tone languages where the question has been studied, the most important principle in maintaining the intelligibility of song texts seems to be the avoidance of what we might (hijacking a term from music theory) call "contrary motion": musical pitch movement up or down from one syllable to the next should not be the opposite of the linguistically specified pitch direction. I will review some of the empirical evidence for the basic constraint from recent research, and will discuss differences between languages and musical genres in such things as how strictly the constraint is observed. I will also briefly consider two more general issues: (1) how tonal text-setting might be incorporated into a general theory that includes traditional European metrics, and (2) what (if anything) the avoidance of contrary motion tells us about the phonological essence of tonal contrasts.
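
The "no contrary motion" constraint described above is easy to state procedurally: across each pair of adjacent syllables, the melodic step should not move in the direction opposite to the linguistically specified pitch direction. The toy function below is my own illustrative sketch (not from the talk), with made-up direction labels.

```python
# Illustrative sketch of the "avoid contrary motion" text-setting constraint:
# a melodic step may go up, down, or stay level, but it should not move in the
# direction opposite to the linguistically specified tone direction.
def contrary_motion_violations(tone_directions, melody_directions):
    """Return indices of syllable transitions where melody and tone move in
    opposite directions ('up' vs. 'down'); 'level' transitions never violate."""
    opposite = {("up", "down"), ("down", "up")}
    return [i for i, (t, m) in enumerate(zip(tone_directions, melody_directions))
            if (t, m) in opposite]

# Example: the third transition sets a linguistically rising syllable to a falling note
tones  = ["up", "level", "up", "down"]
melody = ["up", "down", "down", "down"]
print(contrary_motion_violations(tones, melody))   # -> [2]
```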



October 4, 2016
Forming wh-questions in Shona: A comparative Bantu perspective

Lattimore 513

Jason Zentz
Postdoctoral Associate
Yale University

Bantu languages, which are spoken throughout most of sub-Saharan Africa, permit wh-questions to be constructed in multiple ways, including wh-in-situ, full wh-movement, and partial wh-movement. Shona, a Bantu language spoken by about 13 million people in Zimbabwe and Mozambique, allows all three of these types. In my dissertation, which I report on here, I conducted the first in-depth examination of Shona wh-questions, exploring the derivational relationships among these strategies.

Wh-in-situ questions have received a wide variety of treatments in the syntactic literature, ranging from covert or disguised movement to postsyntactic binding of the wh-phrase by a silent question operator. In Bantu languages, wh-in-situ questions are often taken to be derived via a non-movement relation (e.g., Carstens 2005 for Kilega, Diercks 2010 for Lubukusu, Muriungi 2003 for Kîîtharaka, Sabel 2000 for Kikuyu and Duala, Sabel & Zeller 2006 for Zulu, Schneider-Zioga 2007 for Kinande), but alternatives have rarely been considered. I demonstrate how movement-based analyses that have been proposed for wh-in-situ in non-Bantu languages make the wrong predictions for Shona wh-in-situ, which lacks word order permutation, extraction marking, island effects, and intervention effects. These properties provide support for the traditional Bantuist view that the relation between the pronunciation site of an in-situ wh-phrase and its scopal position in the left periphery is not movement; I claim that in Shona it is unselective binding.

Full wh-movement in Shona gives rise to questions that bear a certain similarity to English wh-questions. However, using a range of diagnostics including extraction marking, island effects, reconstruction effects, and the distribution of temporal modifiers, I argue that what appears to be full wh-movement in Shona actually has a cleft structure: the wh-phrase moves to become the head of a relative clause, which is selected by a copula in the matrix clause. Just as in wh-in-situ, an ex-situ wh-phrase is pronounced lower than its scopal position, and the relation between these two positions is established via unselective binding. Additional evidence for this proposal comes from the sensitivity of partial wh-movement to island boundaries below but not above the pronunciation site of the wh-phrase, a pattern that has been predicted by previous analyses (e.g., Abels 2012, Sabel 2000, Sabel & Zeller 2006) but for which empirical support has been lacking until now. I therefore unify full and partial wh-movement under a single analysis for cleft-based wh-ex-situ that involves a step of relativization (independently needed for relative clauses) and a step of unselective binding (independently needed for wh-in-situ).



September 30, 2016
The perception/production link at the individual and community levels: focusing on sound change

Lattimore 513

Andries Coetzee
Associate Professor of Linguistics
University of Michigan

This presentation reviews current research being conducted in the Phonetics Lab of the University of Michigan. Our Lab's research program focuses on community-level variation in speech production and perception, and on how individual members of a community perform within the complex variable landscape of their speech community.

Since ongoing sound changes are characterized by variability, understanding the structure of variation, and in particular the relation between perception and production in individual members of a speech community, can shed light on how sound changes are initiated and how they progress through a speech community. Do perception and production norms change together, or are they partially independent such that change in the one can lead change in the other? If they change separately, which is more likely to change first? Are individuals who produce innovative forms also more likely to rely on the innovative cues in perception?

To investigate these questions, this presentation will focus on the results of a study on the ongoing process of tonogenesis in Afrikaans. In Afrikaans, the historical distinction between voiced and voiceless plosives is currently being replaced by a distinction between high and low tone on neighboring vowels. This presentation will show how this change is realized in the speech community, with particular focus on the relation between perception and production norms in individual members of the community. The presentation will end with a brief review of a currently ongoing study that uses eye-tracking technology and airflow measures to investigate the relationship between the perception and production of anticipatory nasalization in English ('sent' produced with a nasal vowel). The implications of these studies for theories about the cognitive representation of speech and theories of sound change will be considered.



May 11, 2016
Does Predictability Affect Reference Form? It depends on the verb

Kresge Room, Meliora 269

Jennifer E. Arnold
Professor, Department of Psychology and Neuroscience
University of North Carolina, Chapel Hill

The structure of events appears to influence the way people talk about them. In some cases (see ex. 1), event roles have a much higher tendency to be mentioned again - that is, they are predictable. In emotion verbs like (1), Sandy is considered the expected cause of the scaring/fearing events, and is more likely to be mentioned again (Fukumura & van Gompel, 2010; Hartshorne et al., 2015; Kehler et al., 2008). In (2), Kathryn is the goal of the transfer event, and is expected to participate in the next event, thus making her more likely to be mentioned (Stevenson et al., 1994). Critically, these biases depend on the relation between the two clauses, where the implicit causality effects in (1) are supported by a causal continuation, and the goal bias in (2) is supported by a next-mention continuation.

1a. Sandy scared Kathryn because… 2a. Sandy threw the ball to Kathryn. Then…
1b. Kathryn feared Sandy because… 2b. Kathryn caught the ball from Sandy. Then…

A debated question is whether thematic role predictability affects the use of reduced referential expressions, like pronouns. Sentence-completion experiments have yielded conflicting data, with some authors arguing that pronouns are more common for predictable referents (Arnold, 2011), while others present data suggesting that thematic roles have no effect on pronoun use (Fukumura & van Gompel, 2010; Kehler et al., 2008).

I present the results of a series of studies, which examined this question in detail. We designed a novel story-telling task, in which participants heard a description of one panel, and provided an oral description of the second panel (see Fig. 1).

Participant hears:
“The butler gave a fur coat to the maid” OR “The maid received a fur coat from the butler.” Response: {The butler / He…}

In experiments examining goal-source verbs, we found strong support for the hypothesis that thematic role does influence referential form. However, experiments examining emotion verbs presented mixed results. A corpus analysis suggests that these verb types may differ in the way they are used in discourse, affecting both the perceived predictability of discourse entities, and their relationship to discourse accessibility.

  • Arnold, J.E. (2001). The effect of thematic roles on pronoun use and frequency of reference continuation. Discourse Processes, 31(2), 137-162.
  • Fukumura, K., & van Gompel, R. P. G. (2010). Choosing anaphoric expressions. Journal of Memory and Language, 62, 52-66.
  • Hartshorne, J.K., O'Donnell, T.J., & Tenenbaum, J.B. (2015). The causes and consequences implicit in verbs. LCP, 30(6), 716-734.
  • Kehler, A., Kertz, L., Rohde, H., & Elman, J. (2008). Coherence and coreference revisited. Journal of Semantics, 25, 1-44.
  • Rosa, E. C., & Arnold, J. E. (under review). Predictability affects production: Thematic roles affect reference form selection. UNC Chapel Hill.
  • Stevenson, R., Crawley, R., & Kleinman, D. (1994). Thematic roles, focusing and the representation of events. LCP, 9, 519-548.


May 6, 2016
TBA

513 Lattimore Hall

Zhong Chen
Assistant Professor
Department of Modern Languages, Rochester Institute of Technology



April 29, 2016
What does syntax have to do with island effects?

513 Lattimore Hall

Rui Chaves
Associate Professor
Department of Linguistics, University at Buffalo - SUNY



April 22, 2016
TBA

Meliora 366

Jill Warker
Department of Psychology, University of Scranton



April 8, 2016
TBA

513 Lattimore Hall

Jim Wood
Postdoctoral Associate
Department of Linguistics, Yale University



March 15, 2016
Coreference and the context of alternatives

Meliora 366

Hannah Rohde
University of Edinburgh

The study of pragmatics examines the mechanisms underlying speakers' ability to construct meaning in context and hearers' ability to infer meaning beyond what a speaker has explicitly said. These abilities are taken to depend both on the properties of what is said as well as on considerations of what isn't said. In this talk, I present a series of psycholinguistic studies that highlight how the context of alternatives provides knowledge that is brought to bear on one pragmatic phenomenon, coreference. The context of alternatives is shown to guide *how* speakers refer (probabilities over choice of referring expression), whereas coherence-driven cues regarding alternative meanings capture *who* speakers are likely to refer to (prior probabilities over choice of mention). Listeners in turn can be understood to combine these probabilities to estimate the likely referent of an ambiguous expression, as predicted by a Bayesian model of coreference. What is most intriguing about the data is the apparent independence of contributions from factors related to message meaning (implicit causality, coherence) and those related to message form (information structure). I also discuss work in two other coreference domains in which the context of alternatives is relevant: the assessment of production costs and the role of focus marking in evoking a set of alternatives.
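
The Bayesian combination described here, in which a prior over who is likely to be mentioned (driven by meaning and coherence) is weighed against the probability that a speaker would use a pronoun for each candidate (driven by information structure), can be written down directly. The snippet below is a generic illustration with invented numbers, not data or code from the studies.

```python
# Generic sketch of the Bayesian model of coreference described above:
# P(referent | pronoun) is proportional to P(pronoun | referent) * P(referent).
# The probabilities here are invented purely for illustration.
def posterior_over_referents(prior, p_pronoun_given_referent):
    unnormalized = {r: prior[r] * p_pronoun_given_referent[r] for r in prior}
    total = sum(unnormalized.values())
    return {r: v / total for r, v in unnormalized.items()}

# "Who will be mentioned next": a prior favoring the goal of a transfer event
prior = {"source": 0.3, "goal": 0.7}
# "How speakers refer": pronouns used more often for the subject/source
p_pronoun = {"source": 0.8, "goal": 0.4}

print(posterior_over_referents(prior, p_pronoun))   # the two pressures partly cancel
```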



February 26, 2016
Linguistic diversity and language contact: Amazonian perspectives

513 Lattimore Hall

Patience Epps
Associate Professor
Department of Linguistics, University of Texas at Austin



February 19, 2016
Variant-centered variation and the 'like' conspiracy

513 Lattimore Hall

Aaron Dinkin
Assistant Professor
Department of Linguistics, University of Toronto


2015


December 3, 2015
Learning syntax from millions of words

301B Meliora Hall

John Pate
Department of Linguistics, University at Buffalo

Grammar induction is the task of learning syntactic structures from strings of words, without observing those structures. Because children also do not observe syntactic structures, grammar induction systems provide a route to investigating the assumptions about grammatical form a child might make when learning syntax. However, previous grammar induction research has relied on expensive "batch" algorithms that re-analyze the entire dataset multiple times, and so has been limited to small datasets with only tens of thousands of word tokens. Such small data is likely too sparse to learn from word strings alone, so previous work has used word representations (such as part-of-speech tags) rather than words. The resulting systems are therefore of limited applicability to child language research.

In this talk, I will present a new streaming algorithm for Variational Bayesian Probabilistic Context Free Grammar inference that analyzes each sentence only once. This algorithm allows us to learn dependency syntax from the words (alone) of millions of word tokens and outperforms the batch algorithm both in absolute terms and when controlling for computational resources. Additionally, the results show that learning from part-of-speech tags leads to an objective function that is full of local optima that don't correspond to dependency syntax, but learning from words does not have this problem. These results improve the prospect of using grammar induction systems to understand the learning biases of syntax-acquiring children.



November 20, 2015
Selection-Coordination Theory

513 Lattimore Hall

Sam Tilsen
Assistant Professor
Department of Linguistics, Cornell University

Phonological theories commonly analyze speech utterances as composed of hierarchically organized units, such as features/gestures, segments, moras, and syllables. Yet it is not well understood why this hierarchical organization is observed. This talk presents the selection-coordination theory of speech production, which holds that hierarchical organization emerges from a recurring trend in speech development whereby children acquire coordinative regimes of control over articulatory gestures that were previously competitively selected. In this framework, segments, moras, and syllables are understood as differently-sized instantiations of the same type of motor planning unit, and these units differ with regard to when in the course of development they dominate the organization of gestural selection. This talk will show how the theory provides explanatory accounts of patterns in phonological development, cross-linguistic variation in phonological structure, and articulatory patterns in speech.




November 16, 2015
Learning to become a native listener

301B Meliora Hall

Reiko Mazuka
Riken Brain Science Institute

The goal of our research is to identify the processes by which human infants with no prior linguistic knowledge and highly limited cognitive skills acquire the ability to understand and manipulate highly complex language systems in a short time and without explicit instruction. The talk will present results from studies that investigated how Japanese infants learn certain characteristics of Japanese phonology, knowledge of which is considered prerequisite for the acquisition of abstract, symbolic properties of language. One distinctive characteristic of Japanese phonology, for example, is duration-based vowel distinction, which can be used for both lexical differentiation (obasan vs obaasan) and for phrasal/prosodic differentiation (dakara vs dakaraaa). How do babies learn that lexical and prosodic information systems are different, and how do they determine whether a given long or short vowel is being used lexically or prosodically? Our studies compare babies' behavioral responses with speech input provided by their environment, computational acquisition models, and brain imaging studies. The talk will discuss results from these and related studies, highlighting the unique opportunities that Japanese language properties provide to disentangle fundamental questions pertaining to acquisition.



November 6, 2015
TUTORIAL ON DYNAMICAL SYSTEMS ANALYSIS IN THEORETICAL SYNTAX AND PHONOLOGY

513 Lattimore Hall

Khalil Iskarous
Department of Linguistics, University of California

Many contributors to theoretical syntax and phonology, e.g. Goldsmith, Uriagereka, Vergnaud, Idsardi, Smolensky, and Prince, have used dynamical systems analysis to make sense of some fundamental computational properties of natural language. Yet, dynamical systems analysis does not usually form part of the linguistics curriculum. In this tutorial, dynamical systems analysis will be introduced from scratch, and then some basic analogies will be drawn between deep computational concepts in linguistic theory, and dynamical computation.



September 28, 2015

Hylan 105

Jasmeen Kanwaal
Linguistics and Cognitive Science at UC San Diego



September 25, 2015
Informationally redundant utterances trigger pragmatic inferences

513 Lattimore Hall

Ekaterina Kravtchenko
PhD student in Vera Demberg's lab
Saarland University

Work in pragmatics shows that speakers typically avoid stating information already given in the discourse (Horn, 1984). However, it's unclear how listeners interpret utterances which assert material that can be inferred using prior knowledge. We argue that informationally redundant utterances can trigger context-dependent implicatures, which increase utterance utility in line with listener expectations (Atlas & Levinson, 1981; Horn, 1984). In two experiments, we look at utterances which refer to event sequences describing common activities (scripts, such as 'going to a grocery store').

The first experiment shows that listeners may assign informationally redundant event mentions (such as 'John went to the store. He paid the cashier!') an 'informative' pragmatic interpretation, by reinterpreting the activity in question as relatively atypical in context (i.e. 'John does not typically pay the cashier'). Such a (re-)interpretation does not arise for event mentions that are informative either a priori, or in context. A second experiment, which replaced the exclamation point at the end of the utterance with a period, however, shows that the effect is substantially tempered when the utterance is not otherwise marked as important or surprising. This shows that discourse status, independent of the linguistic content of an utterance, can influence the likelihood of it giving rise to a specific pragmatic inference.

Overall, these studies show that explicit mention of highly inferable events may be systematically reconciled with an assumption that a speaker is being informative, giving rise to context-dependent implicatures regarding event typicality. This effect, however, is modulated by the informational status of the utterance, possibly similar to the effects of prosody on implicature generation. Overall, the results suggest that excessive informational redundancy of event utterances is perceived as anomalous, and that listeners alter their situation models in order to accommodate it.



June 3, 2015
Abstract knowledge and item-specific experience in language processing and change

Kresge Room, Meliora 269

Emily Morgan
PhD Graduate Student
Department of Linguistics, University of California, San Diego

A pervasive question in language research is how we reconcile abstract/generative linguistic knowledge with knowledge of specific lexical items' idiosyncratic properties. For example, many binomial expressions of the form "X and Y" have a preferred order (e.g. "bread and butter" > "butter and bread"), but the source of these preferences remains largely unknown. Preferences might arise from violable constraints referencing the semantic, phonological, and lexical properties of the component words (e.g. short before long), or they might also derive from frequency of one's experience with a binomial's two orderings. I will argue that abstract knowledge and item-specific experience trade off rationally and gradiently in determining binomial ordering preferences: the more experience a speaker has with a binomial, the more heavily they rely on that experience over abstract constraints. I will demonstrate that this tradeoff is crucial for explaining both online sentence processing and language structure/change: In forced-choice judgments and self-paced reading tasks, I will demonstrate that the source of preferences gradually shifts from abstract knowledge to item-specific experience as amount of experience increases. Moreover, using corpus analysis and computational modeling, I will demonstrate that the strength of ordering preferences also depends upon the amount of experience one has with an expression: abstract knowledge creates weak preferences for infrequently attested items, while item-specific experience strengthens those preferences for more frequently attested items. These findings support theories of grammar that flexibly allow for both compositional generation and holistic reuse of stored examples.
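
One simple way to picture the claimed tradeoff (my own illustrative sketch, not the model from the talk) is as a weighted blend of a constraint-based prior preference and the ordering proportion observed in one's experience, where the weight on experience grows with the number of times the binomial has been encountered.

```python
# Illustrative sketch of a gradient tradeoff between abstract constraints and
# item-specific experience (not the talk's actual model). The weight on
# experience grows with the exposure count n; k controls how quickly it grows.
def blended_preference(prior_pref, observed_prop, n, k=10.0):
    """prior_pref: preference for one order predicted by abstract constraints (0-1).
    observed_prop: proportion of that order in one's experience (0-1).
    n: number of tokens of the binomial encountered."""
    w = n / (n + k)                      # more experience -> more weight on it
    return w * observed_prop + (1.0 - w) * prior_pref

# A rarely encountered binomial stays close to the constraint-based prior...
print(blended_preference(prior_pref=0.6, observed_prop=0.95, n=2))     # ~0.66
# ...while a frequently encountered one is dominated by item-specific experience.
print(blended_preference(prior_pref=0.6, observed_prop=0.95, n=500))   # ~0.94
```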



May 29, 2015
Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

513 Lattimore Hall

E. Allyn Smith
Associate Professor
University of Quebec at Montreal

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kolbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stevenson 2007). Stevenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.



May 8, 2015
Bayesian pragmatics: lexical uncertainty, compositionality, and the typology of conversational impli

513 Lattimore Hall

Roger Levy
Associate Professor
Department of Linguistics, University California San Diego

A central scientific challenge for our understanding of human cognition is how language simultaneously achieves its unbounded yet highly context-dependent expressive capacity. In constructing theories of this capacity it is productive to distinguish between strictly semantic content, or the "literal" meanings of atomic expressions (e.g., words) and the rules of meaning composition, and pragmatic enrichment, by which speakers and listeners can rely on general principles of cooperative communication to take understood communicative intent far beyond literal content. However, there has historically been only limited success in formalizing pragmatic inference and its relationship with semantic composition. Here I describe recent work within a Bayesian framework of interleaved semantic composition and pragmatic inference, building on the Rational Speech-Act model of Frank and Goodman and the game-theoretic work of Degen, Franke, and Jäger. These models formalize the goal of linguistic communicative acts as bringing the beliefs of the listener into as close an alignment as possible with those of the speaker while maintaining brevity. First I show how two major principles of Levinson's typology of conversational implicature fall out of the most basic Bayesian models: Q(uantity) implicature, in which utterance meaning is refined through exclusion of the meanings of alternative utterances; and I(nformativeness) implicature, in which utterance meaning is refined by strengthening to the prototypical case. Q and I are often in tension; I show that the Bayesian approach constitutes the first theory making quantitative predictions regarding their relative strength in interpretation of a given utterance, and present evidence from a large-scale experiment on interpretation of utterances such as "I slept in a car" (was it my car, or someone else's car?) supporting the theory's predictions. I then turn to questions of compositionality, focusing on two of the most fundamental building blocks of semantic composition, the words "and" and "or". Canonically, these words are used to coordinate expressions whose semantic content is at least partially disjoint ("friends and enemies", "sports and recreation"), but closer examination reveals that they can coordinate expressions whose semantic content is in a one-way inclusion relation ("roses and flowers", "boat or canoe") or even in a two-way inclusion relation, or total semantic equivalence ("oenophile or wine-lover"). But why are these latter coordinate expressions used, and how are they understood? Each class of these latter expressions falls out as a special case of our general framework, in which their prima facie inefficiency for communicating their literal content triggers a pragmatic inference that enriches the expression's meaning in the same ways that we see in human interpretation. More broadly, these results illustrate the explanatory reach and power of recursive, compositional probabilistic models for the study of linguistic meaning and pragmatic communication.
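
The Rational Speech-Act recursion referred to above can be summarized in a few lines: a literal listener conditions a prior over world states on an utterance's literal truth, a speaker chooses utterances in proportion to how informative (and how cheap) they are for that listener, and a pragmatic listener inverts the speaker by Bayes' rule. The following is a standard textbook-style sketch of that recursion for a scalar ("some"/"all") example, with invented numbers; it is not code from the talk.

```python
# Minimal sketch of the Rational Speech-Act recursion (in the style of Frank &
# Goodman) for a "some"/"all" scalar implicature. Numbers and state names invented.
import numpy as np

states = ["none", "some-not-all", "all"]
utterances = ["none", "some", "all"]
# Literal truth conditions: rows = utterances, columns = states
literal = np.array([[1, 0, 0],     # "none"
                    [0, 1, 1],     # "some" is literally true of the all-state too
                    [0, 0, 1]],    # "all"
                   dtype=float)
prior = np.array([1/3, 1/3, 1/3])  # listener's prior over states
cost = np.zeros(len(utterances))   # equal utterance costs here
alpha = 4.0                        # speaker rationality

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(literal * prior, axis=1)              # P_L0(state | utterance)
with np.errstate(divide="ignore"):
    utility = np.log(L0) - cost[:, None]             # informativity minus cost
S1 = normalize(np.exp(alpha * utility), axis=0)      # P_S1(utterance | state)
L1 = normalize(S1 * prior, axis=1)                   # P_L1(state | utterance)

# "some" is pragmatically strengthened toward "some but not all"
print(dict(zip(states, L1[utterances.index("some")].round(3))))
```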



May 6, 2015
Language comprehenders as reverse engineers

Kresge Room, Meliora 269

Roger Levy
Associate Professor
Department of Linguistics, University California San Diego

From the last several decades of research we know that human language comprehension is highly optimized to the demands presented by real-time spoken and written input. We are finely tuned to the detailed statistics of our linguistic experience, yet retain an extraordinary capacity to generalize beyond that experience to novel comprehension environments. A leading hypothesis regarding this capacity for generalization is that comprehension involves implicitly deploying structured generative models of language production, with the comprehender effectively "reverse-engineering" the speaker's intended message through Bayesian inference. Here I present work elucidating the structure of these generative models under this hypothesis. In the first part of the talk I discuss our recent work on noisy-channel models of language comprehension, in which a speaker's intended message is distorted through processes including speaker error, perceptual noise, and memory limitation before analysis by our system of language understanding. I present results showing how noisy-channel comprehension can lead comprehenders to entertain and even adopt grammatical interpretations of an input inconsistent with its literal content. I also present results extending the range of documented noise operations. In the second part of the talk I explore how comprehenders model speaker choice in syntactic alternations influenced by multiple factors. For example, preferences in the dative alternation ("Pat gave Kim a book" versus "Pat gave a book to Kim") have been argued to reflect (i) differences in the shade of meaning encoded by each syntactic option, and (ii) principles of optimal linear ordering such as putting short constituents before long. If both (i) and (ii) are true and comprehenders model the syntactic choice as a generative process driven by these multiple causes, we should see explaining-away effects between linear-ordering optimality and inferred meaning intent in comprehension. We demonstrate these effects for the first time. More generally, this work underscores the power of using generative models to account for human language comprehension, and opens the door to a range of further explorations of the structure of this generative knowledge.



May 1, 2015
Exploring the limits of syntactic structures

513 Lattimore Hall

Jean-Pierre Koenig and Karin Michelson
Professor and Chair
Department of Linguistics, University at Buffalo SUNY

Syntax has played a central role in investigations of the nature of human languages. But, there are at least two distinct ways of conceiving of syntax: the set of rules that enable speakers and listeners to combine the meaning of expressions (compositional syntax), or the set of formal constraints on the combinations of expressions (formal syntax). The question that occupies us in this talk is whether all languages include a significant formal syntax component or whether there are languages in which most syntactic rules are exclusively compositional. Our claims are (1) that Oneida (Northern Iroquoian) has almost no formal syntax component and is very close to a language that includes only a compositional syntax component and (2) that the little formal syntax Oneida has does not require making reference to syntactic features. Our analysis of Oneida suggests that what is often taken as characteristic of human languages (e.g., syntactic selection/argument structure, syntactic binding, syntactic unbounded dependencies, syntactic parts of speech) is merely overwhelmingly frequent in the world's languages. Our research also suggests that a critical function of compositional syntax is to manage the binding of semantic variables, a function anticipated by Quine's work on the nature of (semantic) variables.



April 10, 2015
Multiple Perspectives on Understanding Prosodic Development

513 Lattimore Hall

Jill Thorson
Postdoctoral Research Associate
Communication Analysis and Design Laboratory, Northeastern University

Infants are born with sensitivities to their native language's prosody (i.e., melody and rhythm). My research program is designed to understand the ways in which this attunement to prosody affects early language development over the first years of life. Specifically, this work concentrates on how prosody impacts early attentional processing, word learning, and speech production, with a focus on the importance of including a phonological account alongside an acoustic-phonetic one. Two lines of inquiry deploy a variety of research methods (e.g., eyetracking, corpora, and speech elicitation) and consider the role of prosody from a perceptual and a productive perspective. Additionally, the role of technology in methodological innovation is explored, such as how touch-screen interfaces and voice synthesis can effectively address questions regarding language learning in atypical populations. Future research on early language acquisition will investigate the benefits of integrating these various perspectives and methodologies, and how this multi-faceted approach can further our understanding of typical and atypical prosodic development.



April 10, 2015
Learning to Execute Natural Language

Meliora 366

Percy Liang
Assistant Professor of Computer Science
Stanford University

A natural language utterance can be thought of as encoding a program, whose execution yields its meaning. For example, "the tallest mountain" denotes a database query whose execution on a database produces "Mt. Everest." We present a framework for learning semantic parsers that map utterances to programs, but without requiring any annotated programs. We first demonstrate this paradigm on a question answering task on Freebase. We then show that the same framework can be extended to the more ambitious problem of querying semi-structured Wikipedia tables. We believe that our work provides both a practical way to build natural language interfaces and an interesting perspective on language learning that links language with desired behavior.
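
The "utterance as program" idea can be made concrete with a toy executor: a tiny table and a hand-written logical form standing in for what the learned semantic parser would produce. The example below is an invented illustration, not the Freebase or Wikipedia-table system described in the talk.

```python
# Toy illustration of "utterance -> program -> execution yields the meaning".
# The table, logical form, and executor are invented and merely stand in for
# what a learned semantic parser would produce.
mountains = [
    {"name": "Mt. Everest", "height_m": 8849},
    {"name": "K2",          "height_m": 8611},
    {"name": "Denali",      "height_m": 6190},
]

# A hand-written "program" for the utterance "the tallest mountain"
program = ("argmax", "height_m")

def execute(program, table):
    op, field = program
    if op == "argmax":
        return max(table, key=lambda row: row[field])["name"]
    raise ValueError(f"unknown operator: {op}")

print(execute(program, mountains))   # -> Mt. Everest
```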



February 6, 2015
Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

513 Lattimore Hall

E. Allyn Smith
Assistant Professor
University of Quebec at Montreal

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kolbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stevenson 2007). Stevenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.



January 23, 2015
The Dene verbal compound: representing the complex inflectional system of the Dene (Athabaskan) verb

513 Lattimore Hall

Joyce McDonough
Associate Professor
University of Rochester, Linguistics and Brain and Cognitive Sciences

Within a Word and Paradigm approach to morphology, words, not morphemes, are the basic units in the lexicon (Milin et al., 2009; Ackerman & Malouf, 2012; Blevins, 2014, 2015; Plag & Baayen, 2008; Baayen et al., 2014, 2015). Fully inflected words are lexical units organized into paradigms; this makes paradigms, which encode the relationships between words, fundamental objects in the lexicon. In this framework, much work has been done on nominal inflection and derivational systems. Much less has been done on the more complex inflectional systems of verbal morphology, which may encode rich morphosyntactic functions. In this talk I will lay out the structure of a typologically unusual and highly complex system, the Dene (Athabaskan) verb word, traditionally captured by a position class template of around 23 prefix positions used to order verbal morphemes. I'll demonstrate that this is an unworkable system. Instead, the Dene verb is an unusual but simple and principled variation on compounding. The model is based on evidence from phonetic studies and lexical patterns.


2014


December 12, 2014
"She be acting like she's black": Linguistic blackness among Korean

513 Lattimore Hall

Elaine Chun
Associate Professor (English)
University of South Carolina

Research on the use of African American English (AAE) by speakers who do not identify as African American has largely focused on how performances of racial 'crossing' (Rampton 1995) may be used to construct masculinity, often in ways that reproduce stereotypes of race and gender (Bucholtz 1999; Chun 2001; Reyes 2005; Bucholtz and Lopez 2011). Such work has drawn attention to at least a few important facts: first, a variety that linguists have classified as an ethnolect of a particular ethnic group can be used in meaningful ways by speakers outside the group; second, ethnolectal features are complexly related to other social dimensions, such as gender and class; and third, language practices have sociocultural consequences for individual identities and community ideologies.

Two concerns that remain are (1) how linguists can productively continue the important project of ethnolectal description--for example, identifying distinctive elements of AAE in ways that recognize meaningful outgroup language use, and (2) how linguists can analyze outgroup uses of AAE without simplistically suggesting that these uses necessarily reproduce stereotypes of black masculinity. In order to address these concerns, I consider the sociolinguistic status of features described by linguists as belonging to AAE, namely, six lexical or morpho-syntactic elements: habitual be, neutral third-person singular verb (e.g., she don't), multiple negation, ain't, the address term girl, and the pronoun y'all. By examining about 100 tokens used by five female youth who identify as Korean American, I discuss some of the conceptual challenges that arise for an ethnolectal model of language and draw on some sociolinguistic and linguistic anthropological concepts, such as ideology, indexicality, persona, stance, voice, and authentication to address these challenges. Finally, I show how qualitative methods of discourse analysis, which attend to the emergent complexity of how language can invoke social meanings, can usefully contribute to our understanding of how linguistic forms relate to social meaning, yet in ways that may still remain complementary with our projects of ethnolectal description.



November 14, 2014
PraatR: An architecture for controlling the phonetics software Praat with the R programming language

513 Lattimore Hall

Aaron Albin
Indiana University-Bloomington

An increasing number of researchers are using the R programming language (http://www.r-project.org/) for the visualization and statistical modeling of phonetic data. However, R's capabilities for analyzing soundfiles and extracting acoustic measurements are still limited compared to free-standing phonetics software such as Praat (http://www.fon.hum.uva.nl/praat/). As such, it is typical to extract the acoustic measurements in Praat, export the data to a textfile, and then import this file into R for analysis. This process of manually shuttling data from one program to the other slows down and complicates the analysis workflow.

This workshop will feature an R package (`PraatR') designed to overcome this inefficiency. Its core R function sends a shell command to the operating system that invokes the command-line form of Praat with an associated Praat script. This script imports a file, applies a Praat command to it, and then either brings the output directly into R or exports the output as a textfile. Since all arguments are passed from R to Praat, the full functionality of the original Praat command is available inside R, making it possible to conduct the entire analysis within a single environment. Moreover, with the combined power of these two programs, many new analyses become possible. Further information on PraatR can be found at http://www.aaronalbin.com/praatr/.
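
For readers unfamiliar with this kind of architecture, the general pattern is: build a shell command that runs Praat in its command-line mode on a small script, then read the result back into the calling environment. The sketch below illustrates that pattern in Python; it is not PraatR itself (which is an R package), and the script name, file paths, and the exact invocation flag (recent Praat versions accept --run) are placeholders that may need adjusting for your Praat version.

```python
# Hypothetical illustration of the shell-out pattern described above. This is
# NOT PraatR's API (PraatR is an R package); the script and file names are
# placeholders, and the "--run" flag assumes a reasonably recent Praat binary.
import subprocess

def run_praat_script(script_path, *args, praat_binary="praat"):
    """Invoke command-line Praat on a script and return whatever it prints."""
    cmd = [praat_binary, "--run", script_path, *map(str, args)]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# e.g. a placeholder script that opens a sound file and prints its duration:
# duration = run_praat_script("get_duration.praat", "recording.wav")
```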

At this workshop, the creator of PraatR will first present a conceptual overview of the package, followed by several hands-on exercises on participants' laptop computers illustrating its range of functionality. At the end of the workshop, the presenter will be available for brief consultations about how PraatR can help you in your own research.

Attendance is limited to 20 participants on a first-come-first-served basis. If you are interested in coming to the workshop, please send an e-mail stating so to Wil Rankinen at wrankine@ur.rochester.edu.



October 17, 2014
Gesture as a mechanism of change

Meliora 366

Susan Goldin-Meadow
Beardsley Ruml Distinguished Service Professor
University of Chicago

The spontaneous gestures that people produce when they talk can index cognitive instability and reflect thoughts not yet found in speech. But gesture can go beyond reflecting thought to play a role in changing thought. I consider whether gesture brings about change because it is itself an action and thus brings action into our mental representations. I provide evidence for this hypothesis but suggest that it's not the whole story. Gesture is a special kind of action--it is representational and thus more abstract than direct action on objects, which may be what allows gesture to play a role in learning.



September 12, 2014
Morphology as a complex discriminative system

513 Lattimore Hall

Jim Blevins
Department of Theoretical and Applied Linguistics, Cambridge University

A number of converging lines of research have recently coalesced into an approach to morphology that combines classical WP (word-and-paradigm) models with contemporary data-driven methodologies. One component of this approach is a distributional view of language structure and language learning. Another is a complex-system conception of morphological patterns and inventories. These components are united by a dynamic perspective in which patterns are defined in terms of communicative pressures, rather than in terms of derivational relations or static constraint satisfaction. This talk outlines some of the properties and implications of this perspective and reviews evidence that supports this type of approach over simple-system models of morphology.



April 24, 2014
The temporal structure of auditory perceptual experience

Kresge Room, Meliora 269

David Poeppel
Professor
Psychology and Neural Science Cognition & Perception, New York University

Speech and other dynamically changing auditory signals (and also visual stimuli) typically contain critical information required for successful decoding at multiple time scales. What kind of neuronal infrastructure forms the basis for the requisite multi-time resolution processing? A series of neurophysiological experiments suggests that intrinsic neuronal oscillations at different, ‘privileged’ frequencies may provide some of the underlying mechanisms. In particular, to achieve parsing of a naturalistic input signal into manageable chunks, one mesoscopic-level mechanism consists of the sliding and resetting of temporal windows, implemented as phase resetting of intrinsic oscillations on privileged time scales. The successful resetting of neuronal activity provides time constants – or temporal integration windows – for parsing and decoding signals. One emerging generalization is that acoustic signals must contain some type of edge, i.e. a discontinuity that the listener can use to chunk the signal at the appropriate granularity. Although the ‘age of the edge’ is over for vision, acoustic edges likely play an important (and slightly different) causal role in the successful perceptual analysis of complex auditory signals.



April 11, 2014
Indexicals, Centers and Perspective

Meliora 366

Craige Roberts
Professor, Department of Linguistics
The Ohio State University

I argue for a theory of demonstratives in which:

(a) they're anaphoric (as I argued in Roberts 2002) and in that respect are definites like definite descriptions and pronouns,
but:

(b) they're unlike the other definites in that they really are essentially indexical, something that isn't adequately captured by King (2001), Roberts (2002), or Elbourne (2008),
and that:

(c) we can improve on the account of indexicality in Kaplan (1977), as criticized by Heim 1985, by adopting a view of indexicals in which their central feature is anchoring to a Discourse Center, a self-attributing doxastic agent.

A Discourse Center is a counterpart in the context of utterance of the notion of a Center in Lewis (1979), the latter theory modified as in Stalnaker (2008). Discourse Centers are argued to play three kinds of roles in interpretation:

(i) they are crucial features of a theory of de se interpretation, as in Lewis/Stalnaker;
but here they serve two new roles as well:

(ii) they are the presupposed anaphoric anchors for indexicals; and

(iii) they also serve as arguments of a perspective operator, in a modification of Aloni (2001), permitting an account of de re belief attributions involving all kinds of definite NPs, including indexicals themselves.

Among other things, this will permit a more flexible, perspicuous account of shifted indexicals in languages like Amharic (Schlenker 2003, Anand & Nevins 2004, Deal 2013, Sudo 2012), and a natural account of so-called fake indexicals of Kratzer (2009).



March 28, 2014
Statistical learning in semi-real language acquisition

CSB 601

Casey Lew-Williams
Department of Communication Sciences and Disorders, Northwestern University

Infants and toddlers have a prodigious ability to find structure (such as words) in patterned input (such as language). Learning regularities between sounds and words often occurs seamlessly in early development, leading some to conclude that statistical learning plays a role in enabling language in the first place. This might be true, or alternatively, it might be an irrelevant artifact of distilled laboratory tasks. The ultimate explanatory power depends partially on whether we define statistical learning narrowly (transitional probabilities between syllables) or broadly (any kind of input-based pattern extraction), and partially on whether statistical learning can scale up to explain natural language acquisition. Here I ask: Can statistical learning withstand the complexity inherent in (somewhat) real learning environments? I will present a series of studies that test how infants learn when presented with variability in utterance length, word length, number of talkers, social/communicative cues, and frequency resolution. To conclude, I will briefly address the question of scalability by turning to an important outcome of early statistical learning -- the ability to process language efficiently in real time -- which falls by the wayside when listeners don't accumulate language experience like a baby.



February 14, 2014
Building a Bayesian bridge between the physics and the phenomenology of social interaction

Meliora 366

Nathaniel Smith
Research Associate, Institute for Language, Cognition and Computation
University of Edinburgh

What is word meaning, and where does it live? Both naive intuition and scientific theories in fields such as discourse analysis and socio- and cognitive linguistics place word meanings, at least in part, outside the head: in important ways, they are properties of speech communities rather than individual speakers. Yet, from a neuroscientific perspective, we know that actual speakers and listeners have no access to such consensus meanings: the physical processes which generate word tokens in usage can only depend directly on the idiosyncratic goals, history, and mental state of a single individual. It is not clear how these perspectives can be reconciled. This gulf is thrown into sharp relief by current Bayesian models of language processing: models of learning have taken the former perspective, and models of pragmatic inference and implicature have taken the latter. As a result, these two families of models, though built using the same mathematical framework and often by the same people, turn out to contain formally incompatible assumptions. Here, I'll present the first Bayesian model which can simultaneously learn word meanings and perform pragmatic inference. In addition to capturing standard phenomena in both of these literatures, it gives insight into how the literal meaning of words like "some" can be acquired from observations of pragmatically strengthened uses, and provides a theory of how novel, task-appropriate linguistic conventions arise and persist within a single dialogue, such as occurs in the well-known phenomenon of lexical alignment. Over longer time scales such effects should accumulate to produce language change; however, unlike traditional iterated learning models, our simulated agents do not converge on a sample from their prior, but instead show an emergent bias towards belief in more useful lexicons. Our model also makes the interesting prediction that different classes of implicature should be differentially likely to conventionalize over time. Finally, I'll argue that the mathematical "trick" needed to convince word learning and pragmatics to work together in the same model is in fact capturing a real truth about the psychological mechanisms needed to support human culture, and, more speculatively, suggest that it may point the way towards a general mechanism for reconciling qualitative, externalist theories of social interaction with quantitative, internalist models of low-level perception and action, while preserving the key claims of both approaches.


2013


December 5, 2013
The Detachment Principle and the syntax of pragmatic particles

Jila Ghomeshi
Department of Linguistics, University of Manitoba



November 22, 2013

Elika Bergelson
University of Rochester, Aslin Lab, Brain & Cognitive Sciences



November 19, 2013
Danes call people with Down syndrome 'mongol': politically incorrect language and ethical engagement

Don Kulick
Department of Comparative Human Development at the University of Chicago



November 8, 2013

Scott Fraundorf
University of Rochester, Jaeger Lab, Brain & Cognitive Science



October 31, 2013
Field Linguistics Talk Series

Eva-Maria Roessler



October 25, 2013
Determining If A Language Underwent Prehistoric Creolization

Scott Paauw
University of Rochester, Department of Linguistics



October 4, 2013
A Category Neutral Simulative Plural: Evidence From Turkish

Solveiga Armoskaite
University of Rochester, Department of Linguistics



September 20, 2013
Constraints of the Binding Theory: Evidence from Visual World Eye-Tracking

CLS Colloquia Room

Jeffrey T. Runner
Associate Professor, Linguistics and Brain & Cognitive Sciences
University of Rochester, Department of Linguistics



May 14, 2013
Language documentation among the Bagyeli hunter-gatherers of Cameroon

Nadine Borchardt
Humboldt University, Berlin



April 26, 2013
What's at issue? Exploring content in context

Judith Tonhauser
Associate Professor
Department of Linguistics, The Ohio State University



April 11, 2013
Implicit and explicit neural mechanisms supporting language processing

Laura Batterink
University of Oregon



February 22, 2013
Gapping is VP-ellipsis

Maziar Toosarvandani
American Council of Learned Societies New Faculty Fellow
Department of Linguistics and Philosophy



February 20, 2013
Polarity particles

Floris Roelofsen
Research Associate, Institute for Logic, Language and Computation
University of Amsterdam



February 8, 2013
Grammatical Number and Individuation

Scott Grimm
Postdoctoral Researcher Department of Translation and Language Sciences
Pompeu Fabra University



February 7, 2013
The Phonology of Seneca

Wallace Chafe and Marianne Mithun
Professors of Linguistics, Department of Linguistics
University of California, Santa Barbara



February 4, 2013
Investigating the semantic/pragmatic interface through sign language structure: the case of scalar implicature

Kathryn Davidson
Postdoctoral Fellow, Linguistics Department
University of Connecticut



January 18, 2013
QUDs and at-issueness in Yucatec Maya attitude reports

Scott AnderBois
Visiting Faculty Linguistics Department


2012


November 9, 2012
The Short Answer: Implications for Direct Compositionality (and vice-versa)

Pauline Jacobson
Brown University



October 14, 2012
The Ket language of Siberia

Edward Vajda
Western Washington University



June 7, 2012
The Substance of Song

Sally Treloyn
University of Melbourne



April 9, 2012

Klinton Bicknell
Department of Psychology
UC San Diego



April 4, 2012

Bozena Pajak
Department of Linguistics
UC San Diego



March 30, 2012

Emily Tucker Prud'hommeaux
Computer Science
Oregon Health & Science University



February 17, 2012
Three challenges of verb learning, and how toddlers use linguistic context to meet them

Sudha Arunachalam
Speech, Language & Hearing Sciences
Boston University


2011


December 5, 2011

Victor Kuperman
Department of Linguistics and Languages
McMaster University



October 27, 2011

Sarah Brown-Schmidt
Department of Psychology
University of Illinois at Urbana-Champaign



October 13, 2011

Chris Potts
Department of Linguistics
Stanford University



October 4, 2011

Meghan Sumner
Department of Linguistics
Stanford University



May 12, 2011

Cynthia Fisher
Psychology Department
University of Illinois



May 9, 2011

Herb Clark
Professor of Psychology
Stanford University



April 21, 2011
Semantic Similarity, Predictability, and Models of Sentence Processing

Doug Roland and Hongoak Yun
University at Buffalo



April 18, 2011
Cue-Based Argument Interpretation

Thomas Hörberg
Department of Linguistics
Stockholm University



April 14, 2011
The Encoding-Retrieval Relationship in Sentence Comprehension (and Production)

Philip Hofmeister
University of California - San Diego



April 6, 2011

Raphael Berthele
University of Bern



March 31, 2011
Meaning, Context and Representation

Ed Holsinger
Department of Linguistics
University of Southern California



March 29, 2011
Don't rush the navigator: Audience design in language production is hard to establish, but easier to maintain

Jennifer M. Roche
Department of Psychology
University of Memphis



March 16, 2011
Anticipation, local coherences, and the self-organization of cognitive structure in sentence processing

Anuenue Kukona
Department of Psychology
University of Connecticut



February 15, 2011
Game Theoretic Pragmatics

Gerhard Jaeger
Department of Linguistics
University of Tuebingen


2010


May 24, 2010
Uncovering the Mechanisms of Audiovisual Speech Perception: Architecture, Decision Rule, and Capacity

Nicholas Altieri
Psychological and Brain Sciences
Indiana University



April 21, 2010
Narrative Combinatorics: Roleshifting Versus "Aspect" in ASL Grammar

Frank Bechter



April 19, 2010
Learning Biases and the Emergence of Typological Universals of Syntax

Jennifer Culbertson
Cognitive Science Department
Johns Hopkins University



April 14, 2010
Phonological Information Integration in Speech Perception

Noah H. Silbert
Psychological and Brain Sciences
Indiana University



April 12, 2010
Predicting and Explaining Babies

LouAnn Gerken
Professor of Psychology and Linguistics
University of Arizona



March 15, 2010
From Sounds to Words: Bayesian Modeling of Early Language Acquisition

Sharon Goldwater
School of Informatics
University of Edinburgh



February 22, 2010
Implicit Learning in the Language Production System is Revealed in Speech Errors

Gary Dell
Psychology
University of Illinois at Urbana-Champaign



January 20, 2010
Fast, Smart and Out of Control

Jesse Snedeker
Department of Psychology
Harvard University


2009


December 7, 2009
The Role of Phonetic Detail, Auditory Processing and Language Experience in the Perception of Assimilated Speech

Meghan Clayards
Centre for Research on Language Mind and Brain
McGill University



October 19, 2009
The Irrelevance of Hierarchical Structure to Sentence Processing

Stefan Frank
Postdoc, Institute for Language, Logic and Computation
University of Amsterdam



September 14, 2009
The Number of Meanings of English Number Words

Chris Kennedy
Department of Linguistics
University of Chicago



June 26, 2009
Self-Applicable Probabilistic Inference Without Interpretive Overhead

Oleg Kiselyov (FNMOC) and Chung-chieh Shan (Rutgers)



June 4, 2009
Learning to Learn, Simplicity, and Sources of Bias in Language Learning

Amy Perfors
University of Adelaide



April 15, 2009
Learning a Talker's Speech

Tanya Kraljic
Center for Research in Language
UC San Diego



April 8, 2009
The Phonetic Traces of Lexical Access

Matt Goldrick
Department of Linguistics
Northwestern University



February 20, 2009
Discourse-Driven Expectations in Sentence Processing

Hannah Rohde
Department of Linguistics
Northwestern University



February 12, 2009
Big Changes in Object Recognition Between 18 and 24 Months: Words, Categories and Action

Linda Smith
Indiana University


2008


December 4, 2008
What Do Words Do?

Gary Lupyan
University of Pennsylvania



November 11, 2008
What Were They Thinking? Finding and Extracting Opinions in the News

Claire Cardie
Cornell University



October 16, 2008
Production of Ungrammatical Utterances: The Case of Resumptive Pronouns

Ash Asudeh
Institute of Cognitive Science & School of Linguistics and Language Studies
Carleton University



October 16, 2008
The Phonetics of Phonological Quantity in Inari Saami

Ida Toivonen
School of Linguistics and Applied Language Studies
Carleton University



October 6, 2008
Towards Discourse Meaning: Complexity of Dependencies at the Discourse Level and at the Sentence Level

Aravind Joshi
Computer Science
University of Pennsylvania



April 18, 2008
Bridging the Gap between Syntax and the Lexicon: Computational Models of Acquiring Multiword Lexemes

Suzanne Stevenson
Computer Science
University of Toronto



April 9, 2008
Inducing Meaning from Text

Dan Jurafsky
Linguistics
Stanford University



March 20, 2008
The Parallel Architecture and its Role in Cognitive Science

Ray Jackendoff
Center for Cognitive Studies
Tufts University


2007


December 4, 2007
Disfluencies in Dialogue: Attention, Structure and Function

Hannele Nicholson
Linguistics
Cornell University



September 25, 2007
Determinants of parsing complexity: A computational and empirical investigation

Shravan Vasishth
Linguistics
University of Potsdam



September 24, 2007
The Event-Related Optical Signal (Eros): A New Neuroimaging Tool for Language Processing Research

Susan Garnsey
Psychology
University of Illinois at Urbana-Champaign



May 14, 2007

Lisa Pearl
Linguistics
University of Maryland



May 2, 2007
Encoding and Retrieving Syntax with Prosody

Michael Wagner
Linguistics
Cornell University



April 26, 2007
Probabilistic Models of Adaptation in Human Parsing

Frank Keller
HCRC
University of Edinburgh



April 18, 2007
Expectations, locality, and competition in syntactic comprehension

Roger Levy
Linguistics
UCSD



April 12, 2007

Philip Hofmeister
Linguistics
Stanford University



April 6, 2007
A Cognitive Substrate for Human-Level Intelligence

Nick Cassimatis
Computer Science
RPI



February 21, 2007
Linguistic Knowledge is Probabilistic: Evidence from Pronunciation

Suzanne Gahl
University of Chicago


2003–2004


November 3, 2003
Statistical Learning: What Goes In, and What Comes Out

Jenny Saffran
Department of Psychology
University of Wisconsin Madison


2002–2003


May 28, 2003
From Ears to Categories: Intermediate Steps in Speech Recognition.

John Kingston
Department of Linguistics
University of Massachusetts



May 5, 2003
The Mapping of Sound Structure to the Lexicon: Evidence from Normal Subjects and Aphasic patients.

Sheila Blumstein
Department of Cognitive and Linguistic Sciences
Brown University



March 19, 2003
Interpreting and Anticipating Reference in Discourse.

Elsi Kaiser
Department of Linguistics
University of Pennsylvania



March 17, 2003
The Horror: Speech Errors and Phonological Production Models.

Harlan Harris
Department of Linguistics
University of Illinois, Urbana-Champaign



February 7, 2003
Relating Attention to Intention of Information Structure

Craige Roberts
Department of Linguistics
Ohio State University



January 31, 2003
Presupposition: The Interaction of Conventional and Conversational Implicature.

Craige Roberts
Department of Linguistics
Ohio State University



January 28, 2003
Information Structure in Discourse: A Basic Pragmatic Framework.

Craige Roberts
Department of Linguistics
Ohio State University



September 25, 2002
Plasticity and Nativism: Towards a Resolution of an Apparent Paradox.

Gary Marcus
Department of Psychology
New York University


2001–2002


June 18, 2002
How Do Readers Compute Word Meanings? Insights From the Triangle Model.

Mike Harm
Carnegie-Mellon University



May 21, 2002
What Language Processing Tells Us About Cognitive Science.

Tom Bever
Department of Linguistics
University of Arizona



May 20, 2002
American Landscape Painting: Aesthetics, The Golden Mean and Depth Perception.

Tom Bever
Department of Linguistics
University of Arizona



April 22, 2002
Resource Logic

Ash Asudeh
Department of Linguistics
Stanford University



April 19, 2002
It's Pat - Sexing Faces Using Only Red and Green.

Michael Tarr
Cognitive and Linguistic Sciences
Brown University



April 11, 2002
To Sign or Not To Sign: Studies of Deaf Cognition in British Signers.

Matt Dye
Centre for Deaf Studies
University of Bristol, United Kingdom



April 1, 2002
Possessives in Context

Gianluca Storto
Department of Linguistics
University of California at Los Angeles



March 28, 2002
Symbolically Speaking.

Franklin Chang
Department of Psychology
University of Illinois at Urbana-Champaign



March 25, 2002
Understanding Intonational Phrasing

Duane Watson
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology



March 4, 2002
Another Look at Accented Pronouns: Evidence from Eye-tracking

Jennifer Venditti
Department of Linguistics
Ohio State University



February 26, 2002
The Role of Distributional Information in Speech Production: The Case of Subject-Verb Agreement.

Todd Haskell
Department of Psychology
University of Southern California



January 26, 2002
Lexical Access and Serial Order in Language Production: A Test of Freud's Continuity Thesis.

Gary S. Dell
Beckman Institute
University of Illinois, Urbana-Champaign



November 7, 2001
Constraint Satisfaction Processes in Language Production

Maryellen MacDonald
Department of Psychology
University of Wisconsin, Madison



October 31, 2001
Clock Talk

J. Kathryn Bock
Beckman Institute
University of Illinois, Urbana-Champaign


2001


April 25, 2001
Understanding Spoken Words: Activation, Competition and Temporary Memory in Spoken Word Perception.

Paul Luce
Department of Psychology
State University of New York, Buffalo



April 2, 2001
Optimality in Linguistic Cognition

Paul Smolensky
Department of Cognitive Science
Johns Hopkins University


1999–2000


April 25, 2000
He vs. She: The Use of Gender in On-line Pronoun Comprehension

Jennifer Arnold
Department of Psychology
University of Pennsylvania



April 13, 2000
The Processing of Temporal Relations in Discourse

Michael Walsh Dickey
Department of Linguistics
Northwestern University



March 24, 2000
Doing OT in a Straitjacket

Jason Eisner
Department of Computer Science
University of Rochester



March 1, 2000
Very Early Parameter Setting in the Computational System of Language, Variability in Development Across Languages, Maturation versus Learning, Impaired Development, and the Potential for a Genetics of Language

Kenneth N. Wexler
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology



January 17, 2000
What Gesture Can Tell Us About the Process of Verbalization of Spatial Information

Kita Sotaro
Max Planck Institute for Psycholinguistics



September 17, 1999
Two Ideas About Timing in Hearing and Speech.

David Poeppel
Department of Linguistics
University of Maryland - College Park


1998–1999


May 3, 1999
Burnt and Splang: Some Issues in Morphological Learning Theory

Bruce P. Hayes
Department of Linguistics
University of California at Los Angeles



April 14, 1999
The Navajo Prolongative and Lexical Structure

Carlota S. Smith
Department of Linguistics
University of Texas - Austin



March 7, 1999
Surface Cues for Pragmatic Inferences as Motivation for the Evolution of Surface Syntax

Gert Webelhuth
Department of Linguistics
University of North Carolina - Chapel Hill



February 25, 1999
Judgment Types, Causatives, and S-Selection

John W. Moore
Department of Linguistics
University of California at San Diego



February 23, 1999
Modality-free Phonology

Harry van der Hulst
Department of Linguistics
Leiden University



December 18, 1998
Articulatory Correlates of Ambisyllabicity in English Glides and Liquids

Bryan Gick
Haskins Laboratories
University of Connecticut


1997


September 19, 1997
Necessity, A Priority, and What Is Said

Jason Stanley
Department of Philosophy
Cornell University


1996–1997


April 25, 1997
Contact-induced Language Change and Contact-language Genesis

Sarah (Sally) Thomason
Program in Linguistics
University of Michigan



April 22, 1997
Glides, Vowels, and Ghost Consonants in Argentinian Spanish

Ellen M Kaisse
Department of Linguistics
University of Washington



March 28, 1997
When a Dog is a Cat and a Rug is a Fug: Picture Naming Errors in Aphasic and Non-aphasic Speakers

Myrna Schwartz
Moss Rehabilitation Research Institute



November 22, 1996
Modeling Collaboration for Human-computer Communication

Barbara J Grosz
Department of Computer Science
Harvard University



October 30, 1996
What Infants Remember About Utterances They Hear

Peter W Jusczyk
Department of Psychology
Johns Hopkins University



October 18, 1996
The What and Why of Compositionality

Zoltan Szabo
Department of Philosophy
Cornell University



September 27, 1996
States, Events, Time, Tense and Other Monsters

Graham Katz
Graduiertenkolleg Integriertes Linguistik-Studium
University of Tuebingen


1995–1996


April 26, 1996
Symbols and Simple Recurrent Networks in Language and Cognition

Gary F Marcus
Department of Psychology
University of Massachusetts - Amherst



April 19, 1996
Structural Repetition as Implicit Learning

J Kathryn Bock
Department of Psychology
University of Illinois - Urbana-Champaign



March 29, 1996
A Probabilistic Model of Lexical and Syntactic Access and Disambiguation

Dan Jurafsky
Department of Linguistics
University of Colorado - Boulder



December 8, 1995
The Problem with Attitudes

Jennifer Saul
Department of Philosophy
University of Sheffield



November 17, 1995
Analysis, Synonymy, and Sense

Mark E. Richard
Department of Philosophy
Tufts University



September 15, 1995
Theoretical Issues in Syntax

Yuki Kuroda
Department of Linguistics
University of California at San Diego


1994–1995


May 26, 1995
The Past, Present and Future in Language Production

Gary S Dell
Beckman Institute, University of Illinois - Urbana-Champaign



April 28, 1995
The Phrase Structure of Quantifier Scope

Tim Stowell
Department of Linguistics
University of California at Los Angeles



April 12, 1995
Birds, Bees, and Semantic Theory

David Dowty
Department of Linguistics
Ohio State University



March 3, 1995
Case in Human Grammar

Itziar Laka
Department of Linguistics
University of Rochester



February 10, 1995
Verbal Plurality and Conjunction

Peter Lasersohn
Department of Linguistics
University of Rochester



February 3, 1995
A Minimal Theory of Adverbial Quantification

Kai von Fintel
Department of Linguistics
Massachusetts Institute of Technology



December 16, 1994
Episodic -ee in English: An Argument That Thematic Relations Can Actively Constrain New Word Formation

Chris Barker
Department of Psychology
University of Rochester



December 9, 1994
The Marked Effect of Number on the Production of Subject-verb Agreement

Kathy Eberhard
Department of Psychology
University of Rochester



December 2, 1994
The Many Meanings of Demonstratives

David Braun
Department of Philosophy
University of Rochester



November 4, 1994
Using Eye-movements to Study Spoken Language Comprehension in Visual Contexts

Michael K. Tanenhaus
Department of Psychology
University of Rochester



October 25, 1994
What Are Thematic Roles?

Greg Carlson
Department of Linguistics
University of Rochester



October 21, 1994
The TRAINS Project

James F Allen
Department of Computer Science
University of Rochester



October 14, 1994
Creolization and Some Thoughts About Learning

Elissa L Newport
Department of Psychology
University of Rochester



September 30, 1994
Wh-Questions and Related Constructions in ASL

Karen Petronio
Department of Psychology
University of Rochester



September 23, 1994
Distributional Intimations of Grammatical Reclassification

Whitney Tabor
Department of Psychology
University of Rochester