University of Rochester

Colloquia Series

2023


September 29, 2023—12:30 p.m.
The trees and the roots: The role of syntacticians in language documentation

Lattimore 201

George Aaron Broadwell
Elling Eide Professor of Anthropology and Chair, Department of Linguistics
University of Florida

Many linguists who have been trained primarily in syntax eventually find themselves as part of a larger language documentation team, where a much wider range of linguistic (and non-linguistic) skills are needed. This talk discusses the bigger ecology of language work and suggests the kinds of teamwork that will lead to more robust, long-lasting, and useful scholarship.


2022


April 8, 2022—12:00 p.m.
TBA

TBA

Laurel Perkins
University of California Los Angeles



April 1, 2022—9:30 a.m.
TBA

Via Zoom (registration required)

Alice Mitchell
University of Cologne




March 18, 2022—12:00 p.m.
Lubukusu object marking: at the interface of pragmatics and syntax

Lattimore 513

Michael Diercks
Pomona College

This talk describes a complex system of object marking (i.e. object clitics) in Bukusu that interacts with pragmatics/discourse semantics. We show that object markers can co-occur with (i.e. "double") in situ lexical objects, but that this possibility is linked with 1) a discourse-given interpretation of the object, 2) the presence of a focused element inside the verb phrase, and 3) an emphatic interpretation of that focused element. That is, there appears to be not only a focus/givenness component to OM-doubling but also an additional layer of emphasis (we show that both mirative readings and verum readings are possible). We propose that Bukusu (doubling) object markers arise via Agree relations generated by phi-features on functional projections at the edge of vP, which include a focus operator and an operator contributing a conventional implicature that generates the emphatic reading (Cruschina 2021). Thus, rather than object marker (OM) doubling being driven by properties such as case or specificity, or being linked with object shift, as has been claimed for many clitic-doubling languages, the closest empirical correlate of Bukusu OM-doubling appears to be the semantics of focus and givenness in (for example) English intonation. Our analysis incorporates features of givenness/focus into a probe-goal system to account for intricate interactions of locality of Agree with interpretive effects of doubling.



March 4, 2022—12:00 p.m.
Morphological markedness and semantic interpretation in the nominal domain

Lattimore 513

Mary Moroney
University of Rochester

Cross-linguistic differences in morphosyntactic realization of meaning motivate distinct semantic representations at the lexical and compositional level. It remains to be determined which lexical items instantiate which differences in interpretation and how these differences can be integrated into a compositional and cross-linguistically coherent system. This presentation addresses the representations of nouns, number, and definiteness based on fieldwork data from Shan, a Southwestern Tai language of Myanmar.

Shan presents a case where plurality and definiteness are not marked on the noun using plural morphology or a determiner. Overt marking utilizes (numeral-)classifier expressions and demonstratives. By examining and comparing the interpretations that are available for languages that are typologically different in terms of their morphology, it is possible to see where cross-linguistic consistency in lexical semantics and composition should largely be maintained, as I argue for nouns, or augmented, as with classifiers. 



February 22, 2022—12:30 p.m.
Change of state: from the BECOME operator to the mereotopology of events

Lattimore 513

Louise McNally
Universitat Pompeu Fabra

How can and should we model the semantics of change of state verbs? When I first read Dowty (1979), I thought nothing could be more straightforward, more precise, and therefore more appropriate than his truth-conditionally defined BECOME operator. Over the years, however, I have seen that the semantics of change of state verbs is more complex than I thought. In particular, I have come to appreciate the subtle but important difference between analyses whose central focus is to capture truth conditions associated with verbs vs. those whose goal is to capture the mereotopological properties of events described by verbs (see, e.g., Casati and Varzi 1999 on mereotopology applied to language in general; see Piñón 1997 for one of the few explicit applications of mereotopology in verb semantics).

In this talk, I explain what I have come to understand as the main differences between the simply truth-conditional and the mereotopological ways of thinking about change of state verbs. In the latter category I would put, for example, the work of Pustejovsky (1991) and Williams (2015), although they do not themselves use the term “mereotopology.” The fact that the two approaches are not incompatible in principle, and that mereotopology has a less established tradition in semantic theory, has obscured these differences. I will discuss how adopting a mereotopological perspective has helped me think in new ways about verb semantics, illustrating with examples from Marín and McNally (2011) and McNally and Spalek (to appear). Finally, I will make some provocative suggestions concerning the relation between work in verb syntax and these two perspectives on verb meaning. 


2021


December 3, 2021—12:00 p.m.
TBA

Lattimore 513

Ailis Cournane
New York University



November 19, 2021—12:00 p.m.
Learning speaker- and addressee-centered demonstratives in Ticuna

Lattimore 513

Amalia Skilton
Cornell University

Children acquiring English, Turkish, and Mandarin produce demonstrative words, such as this/that and here/there, very early in development – but do not display adult-like use or comprehension of the items until very late (Clark & Sengul 1978, Tanz 1980, Kuntay & Ozyurek 2006).  Children’s late mastery of demonstratives is typically attributed to their cognitive bias toward egocentrism, predicting that addressee-proximal demonstratives (that near you) will pose an even greater challenge for learning than the speaker-proximal (this near me) and speaker-distal (that far from me) demonstratives of English.

To test this prediction, I investigate the learning of addressee-proximal vs. speaker-centered (proximal and distal) demonstratives by 45 children, aged 1;0 to 4;11, acquiring Ticuna (isolate; Brazil, Colombia, Peru). Within this sample, no age group of children displayed adult-like use of the Ticuna addressee-proximal demonstrative. One- and two-year-olds did not produce the addressee-proximal at all, instead relying exclusively on speaker-centered demonstratives. Three- and four-year-olds did produce the addressee-proximal, but their production remained non-adult-like: they used the addressee-proximal less than adults, and speaker-centered items more. The results support egocentrism as an explanation for the late mastery of demonstratives, and indicate that this cognitive bias can inhibit the learning of even extremely high-frequency words.




October 15, 2021—12:00 p.m.
Data management for linguists: maintaining organization, collaborations, and accessibility for yourself and the community

Lattimore 513

Kate Lindsey
Boston University

In this talk, I will discuss my corpus of the Ende language and other Pahoturi River languages of Papua New Guinea that I created based on the fieldwork that I did for my dissertation (Lindsey 2019). I will reveal how I developed and maintained the three important criteria for evaluating archival collections (accessibility, quality, quantity) and how this has yielded a broad research program with collaborations in sociolinguistics, typology, and linguistic structure.




April 30, 2021—12:00 p.m.
TBA

Via Zoom (registration required)

Jon Ander Mendia
Cornell University




April 23, 2021—12:00 p.m.
TBA

Via Zoom (registration required)

Felix Ameka
Leiden University




April 2, 2021—0:00 a.m.
TBA

Via Zoom (registration required)

Caroline Heycock
University of Edinburgh




February 12, 2021—12:00 p.m.
How to Become a Direct/Inverse Language

Via Zoom (registration required)

Will Oxford
University of Manitoba

In a “direct/inverse” alignment system, the agreement morphology that indexes a particular nominal is determined by the nominal’s rank on the person hierarchy rather than by its grammatical function. Algonquian languages are often seen as the prototypical example of such a system, but from a diachronic perspective, the Algonquian direct/inverse system is not particularly old: internal and external evidence both point to a reconstructed ancestor in which the agreement morphology shows a simple nominative/accusative alignment pattern. So where did the direct/inverse pattern come from, and how did it quickly gain such a pervasive role in the agreement system? In this talk, I will outline the answers to these questions and argue that they lead us to a simple understanding of direct/inverse alignment in Algonquian: inverse marking appears whenever two adjacent agreement slots are linked to the same argument, or, in terms of generative theory, whenever two probes agree with the same goal. This approach places inverse marking in the same family of “vanishing phi” phenomena as spurious clitic forms in Romance and disappearing agreement markers in Bantu.




February 5, 2021—2:00 p.m.
Relating universal quantifiers and information structure in Besemah

Via Zoom (registration required)

Bradley McDonnell
University of Hawai’i at Manoa

This presentation describes universal quantification in Besemah, a little-known Malayic language of southwest Sumatra, and how the syntactic position of the quantifier relates to grammatical relations and information structure. Given previous descriptions of the relationship between quantifiers and grammatical relations, especially in western Austronesian languages, Besemah presents a unique system of universal quantification wherein adverbial universal quantifiers place severe restrictions on which arguments can be quantified. I argue that these restrictions are fundamentally different from those described as 'quantifier float' in other languages, but they are not incidental. Instead, I suggest that the adverbial universal quantifier also marks information structural properties of the clause in Besemah.


2020


October 30, 2020—12:00 p.m.
Natural language without semiosis

Zoom (email linguistics@rochester.edu for link)

Omer Preminger
University of Maryland

This talk aims to show that the atoms of linguistic composition are not Saussurean signs (viz. arbitrary pairings of form and meaning; Saussure 1916, Hjelmslev 1943).

Setting aside ideophones and cases of onomatopoeia, most modern approaches to linguistic theory take it as a given that the atoms of morphosyntactic composition – be they ‘words’ or morphemes – are form–meaning pairings (which can be associated with additional, sui generis syntactic features). I will argue that this is incorrect: architecturally speaking, natural-language expressions are entirely devoid of Saussurean signs (with the possible exception of monomorphemic utterances like “wow!”, “ugh”, and the like).

I will argue in favor of a grammatical architecture where atoms of linguistic composition are entirely abstract, and are not directly associated with form or with meaning. Instead, these atoms, once syntactically arranged, constitute the input to a set of mapping rules to form, and to a separate set of mapping rules to meaning. These mapping rules are many-to-one rules and, importantly, nothing forces the set of atoms that map onto a particular element of form to also map, as a set, onto a particular element (or elements) of meaning. In fact, the input sets to form and to meaning can stand in all manner of misalignment, including what I term proper partial overlap, an illustration of which is given in (1), and an example of which is given in (2):

  1. abstract demonstration of proper partial overlap:
    1. SYNTAX: [x, [y, z]]
    2. SEMANTICS:
      1. {x} → A
      2. {y, z} → B (descriptively, we are used to calling B an “idiom”)
    3. MORPHO-PHONOLOGY:
      1. {x, y} → R (descriptively, we are used to calling R a “suppletive fusional exponent”)
      2. {z} → S
  2. concrete example of proper partial overlap:
    1. SYNTAX: [PAST, [GO, OFF]]
    2. SEMANTICS:
      1. {PAST} → “before now”
      2. {GO, OFF} → “explode”
    3. MORPHO-PHONOLOGY:
      1. {PAST, GO} → /wɛnt/
      2. {OFF} → /ɑf/

The expression in (2) is composed of smaller parts, both in terms of its semantics (“before now”, “explode”), and in terms of its morpho-phonology (/wɛnt/, /ɑf/). It would therefore be incorrect to claim that (2), as a whole, constitutes an ‘arbitrary’ pairing of form & meaning. At the same time, there is nothing else in (2) that constitutes a pairing of form & meaning, either – only pairings of abstract syntactic nodes with meaning (2.b.i‑ii), and separate, incommensurate pairings of abstract syntactic nodes with form (2.c.i‑ii). Thus, (2) involves no Saussurean signs whatsoever.

I will show that empirically, cases of proper partial overlap abound, as do other types of cases predicted by the proposed architecture. Lastly, I will argue that even those contemporary linguistic frameworks that distance themselves from outright Saussureanism, such as Distributed Morphology (Halle & Marantz 1993, 1994) and Nanosyntax (Starke 2009, Caha 2009, 2019), retain certain Saussurean vestiges that render them less explanatory than the current proposal.



October 16, 2020—12:00 p.m.
The lexical and compositional semantics of distributivity

Zoom (email linguistics@rochester.edu for link)

Lelia Glass
Georgia Tech

Some predicates are distributive (true of each member of a plural subject: if two people smile, they each do). Others are nondistributive (if two people meet, they do so jointly rather than individually), or go both ways: if two people open a window, perhaps they each do so (distributive), or perhaps they do so jointly but not individually (nondistributive).

This paper takes up the rarely-explored lexical semantics question of which predicates are understood in which way(s) and why, presenting quantitative evidence for predictions about how certain features of an event shape the inferences drawn from the predicate describing it. Causative predicates (open a window), and predicates built from transitive verbs more generally, are shown to favor a nondistributive interpretation, whereas experiencer-subject predicates (love a movie) and those built from intransitive verbs (smile) are mostly distributive. Turning to the longstanding compositional semantics question about how distributivity should be represented semantically, any such theory ends up leaving much of the work to lexical/world knowledge of the sort that this paper makes explicit.


2019


December 13, 2019—2:00 p.m.
The pragmatics of (non-)exhaustivity in questions

Lattimore 513

Morgan Moyer
Rutgers University

The status of the non-exhaustive reading of questions has sparked much debate among researchers studying the semantics of questions. On the one hand, it appears constrained by the linguistic form of the question, and yet it appears independently licensed by non-exhaustive discourse goals. In this talk, I present a series of studies that systematically investigate the factors giving rise to non-exhaustive readings of embedded questions.

In the first experiment, I explore the linguistic constraints on non-exhaustivity, fully-crossing several surface-level cues discussed in the literature to explore how necessary they are to the reading. In the second experiment, I pit linguistic form against discourse goals and find that while form matters, both exhaustive and non-exhaustive readings are also modulated by discourse goals.

Finally, I present the results of a corpus study using the British National Corpus, investigating all occurrences of root and embedded questions, to better understand the force of these cues. We coded questions according to the relevant set of factors identified in Experiment 1 and quantified the link between linguistic cues and interpretation by asking: given an assertion of a particular know-wh question form, what is the most likely/acceptable answer (exhaustive or non-exhaustive) that a hearer could have given? I found that while the corpus data might validate some of the intuitions in the literature about question form, the data suggest that participants were not actively recruiting the structure available from their experience.

Combined, the results of these studies suggest interpreting (non-)exhaustivity is not a matter of a coarse-grained distinction between semantics versus pragmatics, at least as traditionally construed. Rather, it appears that interpretation arises from a hearer calculating how best to resolve the speaker’s missing information given what they can infer about the speaker's goal, and how the speaker posed the question in a given context.



April 12, 2019—2:00 p.m.
A Theory of Kinds for Generics?

Dewey 2-110E

Bernhard Nickel
Harvard University

See the event poster for more information.



February 22, 2019—12:30 p.m.
Exploring crosslinguistic variation in the expression of grammatical categories: an Amazonian case study

Humanities Center Conference Room D, Rush Rhees Library

Adam Singerman
University of Chicago

This talk presents results from an ongoing research project to document and analyze the grammar of Tuparí, an understudied Amazonian language spoken in Brazil by approximately 350 people. I focus in this talk on the language's system of negation and its relationship to other clausal phenomena, including TAME and non-finite embedding.

Negation in Tuparí is an exclusively nominal category: verbs must enter into a nominalized form to accept the negator -'om and must undergo a subsequent process of reverbalization so as to combine with tense and evidential morphology. These morphological processes leave -'om in a low position in the clause, buried underneath multiple levels of category-changing affixation. In keeping with the low structural position of -'om, the same negative strategy known from finite matrix clauses appears in non-finite embedded constructions as well.

Tuparí demonstrates that negative phrases exhibit more crosslinguistic variation than standardly assumed: they may appear in either the nominal or verbal Extended Projections. This finding provides support for the idea that nominal and verbal syntactic domains can parallel one another in complexity and articulation.



February 18, 2019—10:30 a.m.
TBA

Humanities Center Conference Room D, Rush Rhees Library

Dustin Chacón
University of Minnesota

Syntactic theory and psycholinguistics share the goal of describing how grammatical form is represented in the mind. However, these disciplines have progressed independently. In this talk, I show that sentence processing data can be used to productively constrain syntactic theory, when seen through the lens of a clear linking hypothesis. I examine filler-gap (movement/A') dependencies, which have a well-described profile in sentence processing and appear to be guided by syntactic principles (Stowe 1986; Phillips 2006; Yoshida, Kazanina, Pablos, Sturt 2014). In (3), the filler dependency appears to resolve with a "resumptive" pronoun inside of a syntactic island, which is surprising since such dependencies typically resolve with a gap. I argue that resumption is perceived to be licensed when typical filler-gap dependency processing mechanisms falter, which then enables comprehenders to violate syntactic constraints to build a coherent interpretation (Chacón 2015, 2018, under review). Second, I examine filler-gap dependencies that cross into adjuncts, (4). Adjuncts are typically thought to be islands, (2; Huang 1982; Chomsky 1986; Uriagereka 1999), but extraction from some non-finite adjuncts is perceived to be better than from others (Truswell 2007; 2011). In a series of studies, I show that these configurations do not show the same processing profile as well-formed filler-gap dependencies (Kohrt, Sorensen, Chacón 2018; Kohrt, Sorensen, O'Neill, Chacón 2019; Kohrt & Chacón 2019). I argue that these sentences are not syntactically well-formed, but should instead be analyzed as 'repair' of an ungrammatical sentence motivated by interpretation, similar to resumption. On my account, therefore, we do not need to complicate the theory of islands to permit resumption or adjunct extraction.
Moreover, these results suggest that semantic information may intervene to rescue a syntactically ill-formed sentence in sentence processing (e.g., Kim & Osterhout 2005), and in the informal acceptability judgments that syntacticians use for data collection.

(1) This is the apple that Ernie ate ___ .

(2) *This is the apple that Chris ran [Adj while Ernie ate ___ . ]

(3) ?This is the cat that Chris said [NP its owner] is a linguist.

(4) ?The linguist bought the apple that the cat laid around [Adj eating __ ] 



February 15, 2019—12:30 p.m.
A Syntactic Side of Word Formation

TBA

Asia Pietraszko
University of Connecticut

The study of word formation has addressed two questions. First, we ask about the nature of word building, i.e. about the mechanism(s) involved in putting two morphemes together. The second question concerns the choice between synthesis versus periphrasis: how is it determined whether two morphemes form a single word or two separate words, in cases when such an alternation is possible? In current Minimalist/Distributed Morphology models, the two aspects of word formation have been unified as two sides of the same coin. Specifically, synthesis is viewed as successful application of a word building operation, while periphrasis arises due to the absence or a failure of such an operation. Despite the conceptual appeal of this unified theory, I will argue in this talk that it is incorrect. Periphrasis cannot be seen simply as the absence of word building. The argumentation is based on a crosslinguistic study of the expression of V and T, which may be expressed synthetically or periphrastically (in English, periphrastic expressions include compound tenses and do-support). I present the following three arguments against the hypothesis that periphrasis is a failure of word building: i) successful word building and periphrasis can cooccur, ii) the failure of word building need not result in periphrasis, and iii) the units created by periphrasis and by word building mechanisms need not overlap. I further argue that periphrasis is a syntactic phenomenon, triggered by featural complexity of clausal syntax. This view derives crosslinguistic generalizations about periphrasis triggers — generalizations missed in the unified theory. The conclusion that periphrasis is independent of word building allows us to reconcile the evidence for its syntactic nature with different approaches to word building. It is compatible with word building mechanisms that apply in syntax (e.g. Head Movement), postsyntax (e.g. Lowering) or in a designated computational module, such as the Lexicon.



February 11, 2019—10:30 a.m.
From syntax to postsyntax and back again

Humanities Center Conference Room D, Rush Rhees Library

Martina Martinovic
University of Florida

A fairly widely adopted view of the syntax-postsyntax (PF) interface is that narrow syntactic processes precede any PF processes (Spell-out), meaning that, once a particular domain (commonly called a phase) is spelled out, it is no longer accessible to syntax (Chomsky 2000, 2001, 2004, etc.). This talk presents ongoing research on the interaction between these two modules of the grammar, and proposes that the boundary between them is much more permeable than traditionally assumed. Specifically, I argue that syntax and PF (postsyntax) can be interleaved in such a way that a syntactic phase first undergoes Spell-out, and then participates in further narrow syntactic computation. I provide two pieces of evidence for this claim from the Niger-Congo language Wolof. The first addresses a phenomenon in which elements that are separated in the final structure by intervening syntactic material nonetheless undergo vowel harmony (Ultra Long-distance Vowel Harmony; Sy 2005). I show that at the moment of Spell-out the harmonizing elements are in a local configuration, only to be separated by syntactic movement in a later step in the derivation, resulting in a surface opacity effect. The second argument comes from the behavior of the past tense morpheme, which is in one configuration affixed onto the verb and carried along with it up the clausal spine, and in another stranded by the moving verb, exhibiting a Mirror Principle violation. I show that the past tense morpheme is affixed onto the verb in postsyntax (Marantz 1988, Embick & Noyer 2001), and that the syntax/postsyntax interleaving explains its variable position. The architecture of the grammar in which syntax and postsyntax interact in the way proposed in this talk predicts precisely these types of surface opacity effects and removes the burden of accounting for them from narrow syntax.
This spares us from positing idiosyncratic syntactic operations to account for anomalous phenomena that are in fact the domain of morphology or phonology, and allows us to maintain a view of syntax as cross-linguistically relatively uniform.



February 4, 2019—10:30 a.m.
Negation in Finno-Ugric verb clusters: Evidence for post-syntactic operations

Humanities Center Conference Room D, Rush Rhees Library

Martin Salzmann
University of Leipzig

In my talk I will discuss an intricate syntax-semantics mismatch in the verb cluster of the Finno-Ugric OV-languages Mari and Udmurt and explore its implications for the architecture of grammar. What is remarkable about these languages is that the negative auxiliary that is used to express sentential negation does not occur in the position where one would expect it to given its syntactic and semantic properties: Despite the fact that it has scope over the other verbs in the clause and governs the form of the dependent verb, it systematically occurs in second to last position in the verb cluster, a kind of displacement similar to displaced morphology in German varieties.

I will propose that the surface position of negation arises via a post-syntactic operation that lowers the negative auxiliary onto the structurally closest verb. This correctly predicts that the negative auxiliary forms an (almost) impenetrable unit with the dependent verb and that displacement does not have any semantic effects. In addition, Lowering straightforwardly captures the distribution of inflectional morphology (which attaches to the negation rather than the lexical verb) and the special clitic placement possibilities in negated sentences (unlike in positive sentences, clitics can occur in both cluster-peripheral as well as in cluster-internal position).

In the last part of my talk, I will compare the post-syntactic approach with a lexicalist alternative and conclude with a more general discussion about architectural challenges for linguistic theory.



February 1, 2019—12:30 p.m.
Sociolinguistic approaches to the study of multilingualism and language shift

Humanities Center Conference Room D, Rush Rhees Library

Maya Ravindranath Abtahian
University of Rochester

The rate of language endangerment worldwide is rapid, with linguists currently estimating that 50–90% of the world’s languages will be lost in the coming decades (Crystal 2000, Krauss 1992). Moreover, we know that the majority of the world’s population speaks a minority of the world’s languages, with 94% of the world’s languages being spoken by only 6% of the world’s people (Lewis, Simons and Fennig 2013). Correspondingly, the language endangerment literature to date has mostly focused on small language communities, where the language is already clearly moribund with few children speakers, and the investigations are more often locally-oriented, qualitative rather than quantitative, and ethnographic, with a primary focus on the description of languages that may shortly be lost. In this talk I discuss and compare different approaches to the study of language shift and endangerment from a sociolinguistic perspective, focusing on two communities that do not fall into this category. In one (Garifuna, Belize), the speaker population is small but the language still has many fluent speakers, and in the other (Indonesia) the language communities under investigation have speaker populations in the tens of millions. In both cases I argue that these languages are potentially endangered, and that these are the types of communities that we should be investigating in order to better understand the process of language shift.

Following recent trends in variationist sociolinguistics, I argue for an approach to the study of language shift that uses both quantitative and qualitative evidence to gain insight into the progression of language shift and the changes that accompany shift. On the one hand, a “big data” approach to the examination of language shift allows us to see that language shift at the national level can be a communal change that happens in less than a generation. From this perspective we gain insight into the demographic factors that correlate with shift toward the dominant language, including level of urbanization and education, as well as some social factors that are more surprising. At the root of community-level decisions about language choice, however, are the choices that individuals and families make over the course of their own lives, and most notably the central problem of intergenerational transmission (Fishman 1991). At a local level language shift proceeds generationally, and from this perspective we can gain insight into the characteristics of the pivot generation of speakers who push the shift forward. Using these approaches in tandem allows us to “examine what may result from combinations of...how individuals change or do not change during their lives [and] how communities change or do not change over time” (Labov 1994:83).



January 25, 2019—2:00 p.m.
Capturing Linguistic Diversity: Grammatical Tone in Gyeli

Dewey 2-110E

Nadine Grimm
University of Rochester

The human capacity for language enables us to create elaborate multimodal systems of communication. The most striking feature of human language is its diversity of form and meaning on every level of communication (Levinson 2014). This diversity, however, is threatened by an ongoing mass extinction of languages and cultures, depriving us of the opportunity to investigate the full extent of what the human mind is capable of and to build a theory of language that takes all its diverse forms into account.

Couched in a documentary and descriptive framework, I contribute to the understanding of linguistic diversity by investigating a vastly under-studied field in linguistics: grammatical tone. Tone, i.e. pitch modulation, is a feature present in 60-70% of the languages of the world (Yip 2002) that comes in different forms and encodes different types of meaning, for instance, distinguishing lexical meanings or grammatical functions. Yet, most theories of grammar pay scant attention to tone due to a predominant bias towards studying Indo-European languages that lack tonal systems. Further, most literature on tone almost exclusively concentrates on the phonetics and phonology of tone (Goldsmith 1990, Gussenhoven 2004).

Based on empirical data from fieldwork in Cameroon on Gyeli, an endangered Bantu language spoken by "Pygmy" hunter-gatherers, I show the intricate interplay between a tonal system and its grammatical functions. Tone is the primary means in Gyeli to encode a variety of grammatical functions, including seven grammatical tense-mood distinctions and the distinction between realis and irrealis categories, marking syntactic categories such as an object following a verb, forming noun compounds, and expressing deictic distance. I discuss how understanding the range of functions grammatical tone can encode, and its interfaces with other parts of the grammar in Gyeli, contributes to developing a typology of grammatical tone systems across languages.


2018


November 9, 2018—3:30 p.m.
On Root Suppletion

Lattimore 513 at 3:30 p.m. with reception to follow

Daniel Siddiqi
Carleton University

This talk aims to discuss the role of “root suppletion” in contemporary morphological theory. The debate around root suppletion is largely about whether it even exists. This question has proven resistant to being resolved as a simple empirical question. Rather, it has been approached as a taxonomical question: what criteria do we have for considering a particular stem alternation to be suppletive? Indeed, this is a central debate in the 2014 issue of Theoretical Linguistics, best exemplified in an exchange between Harley (2014a,b) and Borer (2014). We summarize here Harley (2014a)’s argument from root suppletion, Borer (2014)’s proposal of criteria for assessing the existence of root suppletion, and Harley’s (2014b) response.

In this talk, we contribute to this debate by arguing that the search for root suppletion is better defined as “the search for counter-evidence to the L-node Hypothesis and the Early Root Insertion Hypothesis.” The L-node Hypothesis is a prominent hypothesis in Distributed Morphology that claims that Roots are not individuated in the syntax (see for example Marantz 1996; Harley 1995; Harley & Noyer 1999, 2000). The Early Root Insertion Hypothesis is a competing hypothesis in DM and similar models (such as Exo-Skeletal Syntax) that claims that Roots are individuated phonologically (see for example Embick 2000 et seq., Embick & Noyer 2007, Borer 2013). Counter-evidence to these two hypotheses is taken to support the Root Competition Hypothesis, in which Roots are individuated through some other means and participate in competition for Vocabulary Insertion (see for example Siddiqi 2009, Chung 2009, Harley 2014, Haugen & Siddiqi 2013, 2016). Once we articulate the search in these terms, Borer’s (2014) criteria can be re-articulated such that they clearly show that the Uto-Aztecan data presented in Harley (2014) and Haugen & Siddiqi (2013) do not meet the criteria for falsifying these hypotheses.

Given the above, we present two sets of data from Ojibwe that appear to be root suppletion. We argue that one set of data fails to meet these criteria, thus illustrating the importance of the criteria. Meanwhile, another set clearly and unambiguously meets the criteria, providing the sought-after counter-evidence. In this context, the question of sample size becomes relevant (as Borer 2014 argues). How many examples of this type of root suppletion constitute enough to falsify? We argue that this is a metatheoretical question that needs to be evaluated against concerns of elegance, economy, and parsimony. Finally, we argue that the types of root suppletion presented in Harley (2014a,b) serve an important function despite failing to falsify competing hypotheses: they are evidence confirming crucial predictions of the Root Competition Hypothesis.



October 12, 2018—2:00 p.m.
Legal Interpretation by Big Data

Lattimore 513 at 2 p.m.

Lawrence M. Solan
Brooklyn Law School

Over the past eight years, legal analysts have begun using linguistic corpora in statutory and constitutional interpretation. Judges often presume that the legislature intends the words in laws to be understood in their ordinary sense. Similarly, originalists in constitutional interpretation search for what they call "original public meaning" of terms in the Constitution. By this they mean the meaning that an educated person, living at that time, would typically assign to the term. A typical illustration is that "domestic violence" probably meant "armed insurrection" rather than "spousal abuse" at the time, and therefore the Constitution should be understood accordingly.

Using the BYU corpora COCA (Corpus of Contemporary American English) and COHA (Corpus of Historical American English), as well as a new corpus of 18th-century American English, analysts have been drawing inferences about legislative intent based on the relative frequency of one sense over another in the relevant corpus.

This practice has generated concern among some writers, and I am among them. Linguist Tammy Gales and I have been attempting to establish criteria for when the use of corpus analysis is most efficacious. Among them are appropriate reliance on the legal decision that ordinary meaning should prevail; an understanding of what makes ordinary meaning ordinary; knowing what search to conduct; avoiding excessive inferences from the absence of a sense in the relevant corpus; and choosing the right corpus. We have also begun to expand corpus analysis beyond reliance on relative frequency to make use of collocate analysis, among other corpus linguistic tools.
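The frequency-and-collocates method can be sketched in a few lines: count which words co-occur near a target term within a fixed window. A toy Python illustration (the mini-corpus and window size are invented for this sketch; actual studies query corpora such as COCA and COHA through their search interfaces):

```python
from collections import Counter

def collocates(tokens, target, window=2):
    """Count words appearing within `window` tokens of each occurrence of `target`."""
    counts = Counter()
    for i, w in enumerate(tokens):
        if w == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            # Count every token in the window except the target occurrence itself.
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Invented mini-corpus for illustration only.
text = ("the militia shall suppress domestic violence and insurrection "
        "against the state and protect against domestic violence").split()
print(collocates(text, "violence").most_common(3))
```

Running the sketch shows "domestic" as the dominant collocate of "violence" in this tiny sample; at scale, such collocate profiles are what distinguish one sense of a term from another.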

Others have argued that for contemporary laws, experimental survey studies may be more efficacious than corpus analysis. The presentation discusses this and other potential limitations of corpus methodology in legal interpretation.



February 16, 2018—9:30 a.m.
Syntactic projection and distribution

Rush Rhees Library, Humanities Conference Room D

James Blevins
University of Cambridge

In the modern period, there has been a tension between two ideas about the locus of syntactic relations. In the earliest syntactic models, syntactic relations were taken to hold between realized syntactic dependents. In subsequent elaborations of this tradition, a variety of relations were shifted onto abstract elements of the argument structures associated with verbs and other predicates. Work over the past half century has shown the descriptive value of notions of participant roles and grammatical relations for the analysis of phenomena such as valence alternations. However, there have also been overextensions of these notions in analyses of patterns, such as those involving adjuncts, where the application of abstract models of argument structure and argument selection has been less fruitful. In part, the limitations of these models reflect the fact that they lack distributional information, which can be associated with dependents, and to which speakers are known to be sensitive. This talk outlines the basis for a contemporary synthesis, which aims to consolidate the insights of models based on dependents and argument structures, and suggests how some of these insights can be carried forward into the emerging big data era.



February 11, 2018—11:45 p.m.
The Proper Approach to Definite Articles

Rush Rhees Library, Humanities Conference Room D

Ora Matushansky
SFL (CNRS/University Paris-8)/UiL OTS/Utrecht University

Definite articles in proper names (the so-called proprial articles) as in (1) have been treated either as expletive (Longobardi 1994, etc.) or as regular (Sloat 1969, Anderson 2003 et seq., Matushansky 2008, etc.). In this talk I reconcile the two views by arguing that all definite articles are merely a formal reflection of a semantic definiteness feature located elsewhere in the structure.

(1) a. the Hudson, the Bronx, the Netherlands, the Empire State Building
    b. la France, les Etats-Unis, (*le) Rochester, la Tour Eiffel

Initial evidence for this proposal comes from the distribution of proprial articles. First of all, in German the definite article only appears with proper names that are specified for some phi-feature, such as number or gender; the same is true in other languages. Similarly, in Romanian proprial articles can be overt depending on case, hence the semantics does not come into the picture. Secondly, I will discuss non-restrictively modified proper names (2) and argue that their compositional semantics requires an iota operator below the modifying adjective. Thirdly, the proprial article may be overt depending on whether the denotation is a location vs. an object (e.g., in French).

(2) the inimitable Stravinsky

Extending the analysis to common NPs, several welcome consequences follow. The phenomena of definiteness spreading in Semitic (3) and double definiteness in Scandinavian (4) receive straightforward explanations. The variable realization of the definite article in kind names (5), (6) can be treated as a formal rather than semantic parameter (pace Chierchia 1998). Definite articles with pronouns (7) and singleton-reference NPs (8) are more naturally explained, as are cases of pronominal determiners (9).

(3) ha-baxura ha-intelligentit (Hebrew)
    DEF-girl DEF-intelligent
    'the intelligent girl'

(4) den hungriga mus.en (Swedish)
    DEF hungry mouse.DEF
    'the hungry mouse'

(5) a. rice, beans (English)
    b. the impossible, the rich
(6) *(le) riz, *(les) haricots (French)

(7) Ka kite au i a ia. (Maori)
    T/A see 1SG DO DEF 3SG
    'I saw him.'

(8) the best answer, the only solution, the first proposal, the king

(9) we linguists, you guys

I will then argue that this proposal, coupled with the view of definiteness as a formal feature, provides further insights into the syntax of phi-features in general (cf. Sauerland's work).

Selected references

Anderson, John M. 2003. On the structure of names. Folia Linguistica 37, pp. 347-398.

Chierchia, Gennaro. 1998. Reference to kinds across languages. Natural Language Semantics 6, pp. 339-405.

Longobardi, Giuseppe. 1994. Reference and proper names. Linguistic Inquiry 25, pp. 609-665.

Matushansky, Ora. 2008. On the linguistic complexity of proper names. Linguistics and Philosophy 31, pp. 573-627.

Sloat, Clarence. 1969. Proper nouns in English. Language 45, pp. 26-30.



February 9, 2018—9:30 a.m.
Retrieving antecedents with the grammar: c-command, D-type pronouns and Phi-features

Rush Rhees Library, Humanities Conference Room D

Keir Moulton
Simon Fraser University

Since Reinhart 1976, it has often been claimed that bound variable pronouns are subject to a c-command requirement. This is not obviously the case, however (see Bach and Partee, 1980, et seq.), and was challenged recently by Barker 2012, who argued that bound pronouns must merely fall in the semantic scope of a binding quantifier, a configuration that does not always implicate syntactic c-command. In the processing literature, recent results have been advanced in support of c-command (Kush et al. 2015, Cunnings et al. 2015). However, none of these studies separates semantic scope from c-command. In this talk I will report the results of Moulton and Han (to appear) which show that when we put both c-commanding and non-c-commanding quantifiers on equal footing in their ability to scope over a pronoun, there is nonetheless a processing difference between the two. The results establish that c-command, not scope alone, is relevant for the processing of bound variables. In particular, the experimental findings show that co-varying but non-c-commanded pronouns (often called E-type or D-type pronouns) are processed without difficulty but do not exhibit gender mismatch effects (GMMEs) as c-commanded pronouns do. I propose an account of these results, along with other experimental findings, that combines an idea about the grammar (that variable binding encodes phi-features in a way suggested by Sudo 2012) and a well-motivated assumption about the processor (that antecedent retrieval relies on a content addressable memory, Lewis et al. 2006).



February 1, 2018—12:30 p.m.
Enriched Meanings

Rush Rhees Library, Humanities Conference Room D

Ash Asudeh
University of Oxford & Carleton University

Language can be understood abstractly as a mapping between form and meaning. Linguistic theory uses structures (basic elements and relations between them) to represent form (phonology), meaning (semantics) and the structures mediating them (syntax). Mappings between different linguistic representations (interfaces) can therefore be understood as mappings between structures. Category theory is a branch of mathematics that is concerned with such structural mappings, and is therefore well-placed to help us understand how linguistic structures may, when necessary, be mapped to more complex structures. In this talk, I will present a research program developed together with Dr. Gianluca Giorgolo, in which we apply category theory and, in particular, the concept of monads, to problems in semantics and pragmatics. We call this research program "Enriched Meanings", as the monadic formalism allows us to enrich standard semantic interpretation in order to capture certain complex phenomena. I will present our work on conventional implicatures, conjunction fallacies, and substitution puzzles, paying particular attention to the last case and focusing on the intuitions behind the approach, rather than on the formal details.
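As a rough illustration of the monadic idea (this is not Asudeh and Giorgolo's actual formalism; names and the example are invented): a writer-style monad pairs an at-issue value with a log of side content, which is one way conventional implicatures have been modeled. A hypothetical Python sketch:

```python
# A minimal "writer monad": a meaning paired with a (possibly empty) tuple
# of side content, in the spirit of monadic treatments of conventional
# implicature. unit/bind are the standard monad operations.

def unit(x):
    return (x, ())                  # plain value, no side content

def bind(m, f):
    x, log1 = m
    y, log2 = f(x)
    return (y, log1 + log2)         # compose values, accumulate side content

def appositive(x, ci):
    """Inject a value together with a conventional implicature about it."""
    return (x, (ci,))

# "Kim, a linguist, smiled": the appositive contributes side content that
# survives composition with the at-issue predicate.
kim = appositive("Kim", "Kim is a linguist")
meaning = bind(kim, lambda x: unit(f"smiled({x})"))
print(meaning)  # ('smiled(Kim)', ('Kim is a linguist',))
```

The point of the enrichment is that ordinary meanings (those built with `unit` alone) compose as usual, while the extra structure is only paid for where a phenomenon like an appositive demands it.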



January 28, 2018—11:45 a.m.
On the interaction between syntax and postsyntax in Uzbek (non-)verbal predicate formation

Rush Rhees Library, Humanities Conference Room D

Vera Gribanova
Department of Linguistics, Stanford University

In this talk, I present novel evidence from Uzbek verbal and non-verbal predicate formation that bears on the nature of the interaction of several distinct word formation processes: these include inversion/infixation, phonological support for stranded affixes, and a form of morphological merger. I first demonstrate that despite initial surface similarities, verbal and non-verbal predicates in Uzbek employ distinct word formation strategies along several parameters. I show that the formation of both verbal and non-verbal predicates can be analyzed in terms of the interaction between several syntactic and post-syntactic mechanisms, including head movement, Local Dislocation (Embick and Noyer 2001), merger under adjacency (Bobaljik 1994), and a phonological support mechanism akin to English do-support.

The theoretical landscape represented by Minimalism and Distributed Morphology leads us to expect that differences in the application of these operations in verbal vs. non-verbal predication should stem at least in part from differences in the underlying syntactic structures of these constructions. In the last part of the talk, I demonstrate that it is possible to derive the distinct verbal vs. non-verbal predication strategies from a basic organizing principle of the syntax of Uzbek, namely the availability of syntactic head raising for verbal, but not non-verbal, predicates. Evidence in favor of this syntactic claim is drawn from novel paradigms involving verb-stranding ellipsis in Uzbek.

The presentation is based on work in progress, and forms the foundation for a planned micro-comparative study of the syntax and postsyntax of predicate formation in the Central Asian Turkic languages.

References

[1] Bobaljik, Jonathan. 1994. What does adjacency do? MITWPL 22:1.
[2] Embick, David and Rolf Noyer. 2001. Movement operations after syntax. Linguistic Inquiry 32(4): 555-595.
[3] Harley, Heidi. 2013. Getting morphemes in order: Merger, Affixation and Head-movement. Diagnosing Syntax, ed. L. Cheng & N. Corver. OUP.



January 25, 2018—12:30 p.m.
Regionalizing race: Exploring Sound Change and Racial Identity

Rush Rhees Library, Humanities Conference Room D

Sharese King
Department of Linguistics, Stanford University

Linguists have problematized the presentation of African American English (AAE) as a uniform variety (Wolfram 2007; Yaeger-Dror & Thomas 2010). Amid growing evidence of regional variation, linguists have cautioned against the homogenization of African Americans' linguistic practices and identities (Wolfram 2007; Childs 2005). Despite advances in our understanding of how the dialect varies, there is a dearth of research focusing on why African Americans' speech varies. My work advances the discussion by examining social and linguistic diversity across African Americans' speech. I examine how linguistic heterogeneity can arise from differences in identity constructions, which are informed by social changes in the community.

In this talk, I draw upon data from my dissertation, an ethnography of Rochester, New York. In order to study social and linguistic diversity in Rochester, I identify example personae particular to that social landscape (Eckert 2012). Specifically, I ask how sound change is enacted through local personae like the mobile black professional and the hood kid. Results indicate that in comparison to other community speakers, mobile black professionals produce significantly lower TRAP tokens and hood kids produce significantly backer BOUGHT tokens. The findings demonstrate that African American language and identity are not monolithic and encourage linguists to reconsider how we define African American English.


2017


December 6, 2017—4:00 p.m.
Nonlinear EEG decomposition reveals distinct temporal processes for speech and music

Mel 301B

Nathaniel Zuk & Shaorong Yan
University of Rochester

Please RSVP here: https://goo.gl/forms/uz55t9eu6OlH5R932

Speech and music share similar temporal attributes such as rhythm and phrasing.  Yet we perceive these attributes differently, and it is still unclear how temporal processing in the brain contributes to the differences in perception for speech and music.  Electroencephalography (EEG) contains sufficient temporal precision to study the neural encoding of these types of sounds in humans, but there are still technical hurdles to overcome with this technique.  One key issue is that typical research using simplistic and repetitive stimuli may show results that are hard to extend to a naturalistic context.

Our lab has shown that the neural processing of natural, continuous speech can be studied by using linear models to reconstruct the speech envelope from the recorded EEG.  These models capture correlations between the EEG and the speech stimulus associated with the neural encoding of amplitude fluctuations in the stimulus.  However, when we try to extend this analytical method to musical stimuli, these models fail to reconstruct the envelopes of music.  The failures cannot be explained by envelope statistics, suggesting that the processes responsible for temporally encoding music are obscured in the raw EEG signal.

Here, we demonstrate a modification of our previous method that allows us to study the neural encoding of music with EEG in a naturalistic context.  We decompose the EEG signal into the time-varying amplitude and time-varying phase for delta, theta, alpha, and beta frequency bands.  We then use the decomposed EEG signal to reconstruct the envelope of the stimulus.  We find that delta phase contributes significantly to the reconstruction of the envelopes for speech, while alpha and beta amplitudes contribute significantly to the reconstruction of the envelopes of rock music.  This method fails to reconstruct the envelopes of classical music, potentially due to the differences in rhythmic regularity for the rock and classical music stimuli.  We think that our result points to distinct temporal processes in the brain for encoding speech and music.
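The decomposition step can be illustrated with a toy example: band-limit a signal in the frequency domain and read off instantaneous amplitude and phase from the analytic signal. This is only a schematic Python sketch (the naive DFT, toy cosine, sampling rate, and band edges are invented for illustration; the actual analyses use real EEG recordings and efficient filtering):

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def band_amplitude_phase(x, fs, lo, hi):
    """Band-limit x to [lo, hi] Hz, then return the instantaneous amplitude
    and phase of the analytic signal (frequency-domain Hilbert transform)."""
    n = len(x)
    X = dft(x)
    Y = [0j] * n
    for k in range(1, n // 2):        # keep positive frequencies only
        f = k * fs / n
        if lo <= f <= hi:
            Y[k] = 2 * X[k]           # double to form the analytic signal
    z = idft(Y)
    return [abs(v) for v in z], [cmath.phase(v) for v in z]

# A 2 Hz "delta band" cosine sampled at 32 Hz: its band-limited amplitude
# should be ~1 throughout, with phase advancing at 2 cycles per second.
fs, n = 32, 64
x = [math.cos(2 * math.pi * 2 * t / fs) for t in range(n)]
amp, ph = band_amplitude_phase(x, fs, 1, 4)
print(round(amp[10], 3))  # ≈ 1.0
```

In the study, time-varying amplitude and phase series like these (per band) serve as the regressors from which the stimulus envelope is reconstructed.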



November 17, 2017—2:00 p.m.
The Shona Mbira tradition

Lattimore 513

Glenn West
ESM, University of Rochester

Glenn West, an ethnomusicologist and director of the Mbira Ensemble at Eastman will give a lecture and demonstration of the mbira instrument and traditional Shona musical practices. The mbira has been played by the Shona people of Zimbabwe for thousands of years and has deep roots in the culture. West has studied the mbira in Zimbabwe, and will discuss the relationship between the instrument and the Shona culture and language.

https://www.esm.rochester.edu/faculty/west_glenn/



November 7, 2017—4:00 p.m.
Representing concepts as probabilistic programs and learning them from data

Meliora 301B

Matt Overlan, Frank Mollica
BCS Graduate Students
University of Rochester

Matt will be talking about probabilistic program induction as a framework for modeling human concept learning. Probabilistic programs are those that use stochastic functions, and program induction means learning a latent or hidden program from data. He will discuss the advantages and disadvantages of probabilistic programs as compared to other kinds of representations, and he'll show several applications of their use in the literature. Lastly, Matt will show results from a visual concept learning experiment that we ran with adults on Mechanical Turk, and compare those results to the predictions of our program induction model.

Frank will discuss how probabilistic programs have been used to uncover new insights about linguistic representations (e.g., phonotactics, argument structure), investigate productivity and reuse in language, and model children's early word use.
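The core inductive move can be sketched concretely: enumerate a small space of candidate "programs," weight each by a simplicity prior, and score it against observed labeled examples. A toy Python illustration (the hypothesis space, prior, noise level, and data are all invented for this sketch, not taken from the speakers' models):

```python
import math

# Hypothesis space: tiny "programs" over (color, shape) objects, each with a
# description length used for a simplicity prior.
HYPOTHESES = {
    "red":           (lambda o: o[0] == "red", 1),
    "circle":        (lambda o: o[1] == "circle", 1),
    "red-and-circle": (lambda o: o[0] == "red" and o[1] == "circle", 2),
    "red-or-circle": (lambda o: o[0] == "red" or o[1] == "circle", 2),
}

def posterior(data, noise=0.05):
    """data: list of ((color, shape), label). Prior proportional to 2^-length;
    the likelihood allows each label to be noisy with probability `noise`."""
    scores = {}
    for name, (prog, length) in HYPOTHESES.items():
        logp = -length * math.log(2)                 # simplicity prior
        for obj, label in data:
            logp += math.log(1 - noise if prog(obj) == label else noise)
        scores[name] = logp
    z = max(scores.values())                         # normalize in log space
    total = sum(math.exp(s - z) for s in scores.values())
    return {k: math.exp(s - z) / total for k, s in scores.items()}

data = [(("red", "circle"), True), (("red", "square"), True),
        (("blue", "circle"), False)]
post = posterior(data)
print(max(post, key=post.get))  # prints "red"
```

With only three examples, the data already favor the simple "red" program over conjunctions and disjunctions, illustrating how the prior and likelihood trade off in program induction.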

We’ll be having the second installment of the CLS Interdisciplinary talk series next week (November 7 at 4 p.m. in Meliora 301B). Matt Overlan (BCS, advisor: Robbie Jacobs) will be giving the short informal talk about his research, and Frank Mollica (BCS, advisor: Steve Piantadosi) will be leading the discussion. Following the talk we will be convening at the Tap and Mallet.

If you are planning on attending, please RSVP via this form: https://goo.gl/forms/3QDNa1HztoHgt5DP2



November 3, 2017—3:30 p.m.
Prosodic Recursion and Syntactic Cyclicity inside the Word

Lattimore 513

Peter Guekguezian
Department of Linguistics, University of Rochester



October 27, 2017—3:30 p.m.
Return to Richard

Lattimore 513

Ash Asudeh
University of Oxford, Centre for Linguistics



October 11, 2017—4:00 p.m.
New Interdisciplinary CLS Talk Series: Predictive Eye-movements and the Roles of Visual and Linguistic Stimuli

301B Meliora Hall

Woon Ju Park, Chigusa Kurumada
Postdoc with Duje Tadin; and Assistant Professor
Brain and Cognitive Sciences, University of Rochester

In this meeting, Woon Ju Park (BCS, CVS) will tell us about her dissertation research on predictive eye-movements in a visual perception task. She tested a population with Autism Spectrum Disorder and a control group of neurotypical adults, and found that the groups showed different saccadic eye-movements within a trial and also differed in their learning behaviors over the course of the experiment. Her results provide particularly interesting food for thought with regard to general questions about:

  • how do we make predictive eye-movements based on visual and auditory stimuli?
  • how do we accumulate relevant statistical information to improve our prediction accuracy?

Following Woon Ju's research presentation, we are going to have an open discussion about how we can address these questions from an interdisciplinary perspective. Chigusa Kurumada (BCS, CLS) will provide a brief overview of predictive (anticipatory) eye-movements as typically discussed in psycholinguistic research and moderate a discussion about how vision research can inform language research and vice versa.

RSVPs are requested for determining room size / snacks (to apogue@ur.rochester.edu). An informal social gathering will follow at Swiftwater Brewery (378 Mt Hope Ave).



September 11, 2017—3:30 p.m.
Hitting a point and wiping a region: The argument realization of manner verbs

Lattimore 513

Beth Levin
Department of Linguistics, Stanford University

As Fillmore and others have observed, verbs with similar meanings often show characteristic argument realization patterns, that is, shared patterns of morphosyntactic distribution. This observation has suggested that these patterns follow from common facets of meaning (Fillmore 1971, Levin & Rappaport Hovav 1995, Pinker 1989), attributed largely to the verb's 'root'. This proposal is challenged by observations that verbs actually are found in a wide variety of syntactic contexts, suggesting that they can simply be inserted into any syntactic context and their roots do not have a 'say' in the matter. On this approach, unacceptable root-syntactic context combinations are ruled out due to an incompatibility between the two (Acedo-Matellan & Mateu 2013, Borer 2003, Goldberg 1995, Hoekstra 1992, Mateu & Acedo-Matellan 2012). Such incompatibilities are often explained by appeal to real world knowledge, but details remain to be fleshed out.

I revisit this challenge in the context of recent work on the semantic underpinnings of argument realization. I acknowledge that the empirical landscape is more complex than studies of argument realization in the '90s assume, but I show that verbs nevertheless display significant semantic class specific distributional patterns. I take these patterns as a reason to still pursue an account where the verb's root contributes to determining its argument realization options.

First, I review the well-known, systematic asymmetries that involve what have been called manner vs. result verbs, exemplified by hit and break, respectively (Rappaport Hovav & Levin 2010). Then, I turn to less well-known, but equally systematic asymmetries between two types of manner verbs represented by the verbs hit and wipe. The break/hit asymmetries have been used to support the proposal that roots come with a grammatically relevant ontological type. I further argue that some manner roots select for an 'argument' (cf. the 'constant participant' of Levin 1999), and that hit and wipe impose different demands on such an argument. Informally, wipe requires it to be a 'region' and hit a 'point'.

I propose that the distribution of roots and, hence, verbs, across syntactic contexts is determined by a cluster of interacting factors, including the ontological type of the root. The diversity of syntactic contexts that many verbs are found in can largely be attributed to the expression of three major types of events of scalar change (Hay, Kennedy & Levin 1999). Further, as suggested in RH&L (2010), the argument that a scalar change is predicated of must be realized as an object. As RH&L discuss, this requirement is the source of distributional differences between break vs. hit/wipe. I argue that further distributional differences reflect the nature of the scalar change involved, especially among the hit/wipe verbs. I trace the differential syntactic behavior of wipe and hit to the distinct types of 'argument' their roots require, which in turn results in wipe, but not hit, having an object which is a potential incremental theme. Finally, I consider which facets of world knowledge might further constrain the attested argument realization options from among those that the more 'grammatical' c



May 9, 2017—10:30 a.m.
Probabilistic prosody: Context effects and perceptual recovery of (supra)segmental linguistic information

301B Meliora Hall

Laura Dilley, Ph.D.
Associate Professor
Michigan State University

It has been proposed that the brain is a complex prediction engine which attempts to minimize prediction error through adaptive recapitulation of a signal source and comparison with incoming sensory information. Context effects are well-known in perception, but context effects due to prosody, i.e., rhythm, pitch, and timing, are relatively under-studied. In this talk I discuss how context prosody provides a strikingly robust basis for prediction of linguistic content, structure, and use in sometimes surprising ways. It is argued that predictions enabled by context prosody are crucial to understanding the speech chain from speaker to listener. Moreover, it is argued that examination of individual differences in sensitivity to context prosody can provide a window into mechanisms for language perception, including the extent to which mechanisms may be domain-specific, i.e., uniquely dedicated to processing language, as opposed to domain-general. The speech signal is often highly ambiguous and underdetermined with respect to phonetic and lexical content and structure, and context prosody imposed by the speaker is argued to be a critical piece to the puzzle for understanding how listeners develop accurate neural predictions about a speaker's intended message.



April 7, 2017—8:15 a.m.
Symposium on American Indian Languages (SAIL)

Room 1829 & Alumni Room located in the Student Alumni Union (SAU/004) building, Rochester Institute of Technology

The Symposium on American Indian Languages (SAIL) is dedicated to discussion of the documentation, conservation and revitalization of the native languages of the Americas.

SAIL also provides a forum for the exchange of scholarly research on descriptive and/or theoretical linguistics focusing on American Indian languages.

SAIL brings together scholars, members of the indigenous communities, native speakers, educators and language activists who are interested in sharing experiences and best practices on topics related to language documentation, conservation and revitalization.

Building on RIT's rich history of educational outreach to Native American communities, SAIL welcomes the active participation of indigenous communities, native language speakers, and those interested in revitalization and preservation of their heritage languages and cultures.

The theme for this year's SAIL is “Language Revitalization Strategies in the Americas: Challenges, Success and Pitfalls”.

Visit the SAIL website for more details.



April 5, 2017—9:00 a.m.
The grammaticalization of the iterative marker -ná- in Navajo

Lattimore 513

Jalon Begay
Navajo Language Program & Department of Linguistics University of New Mexico

The complex and puzzling nature of the Athabaskan verb has challenged and fascinated scholars for more than a century. The verbal morphology has been described as having unpredictable inflectional and derivational prefixes that are motivated by 'nonlocalized' dependencies within a larger templatic composition (Rice 2000: 1, 9). When observed synchronically, the unpredictability and irregularity cannot be described as a linear concatenation of affixes to a verb stem (root + aspectual suffixes). What we find is a wide and varied range of fixed, discontinuous sets of prefixal strings that combine with optional prefixes, which seem to be inserted as required. From a synchronic perspective, these elements basically undermine any fruitful analyses that stipulate syntactic derivation and grammatical or semantic scope (cf. Mithun 2000: 236). This paper attempts to reconcile the synchronic facts of the templatic morphology with language change processes that are well known in grammaticalization theory (see, e.g., Hopper & Traugott 2003[1993]; Lehmann 2015). In particular, the analysis focuses on the Navajo iterative marker -ná-. Navajo is known for the complex allomorphy and homophony found among its inflectional and derivational prefixes. It is often assumed and noted that such morphemes are unrelated, coincidences arrived at via phonological processes (Kari 1989). However, on closer inspection, one finds many examples of semantic extension and radially structured categories (e.g. Lakoff 1987; Panther and Thornburg 2001), as shown in (1).

(1)

a. Iterative aspect
T??? ??kw??b?n? gohw??h n?shdlįį́ h́
. t??? ??kw??-b?n? gohw??h n?-?-sh-d-lįį́h́
PART every-morning coffee ITER-3OBJ-1SUBJ-VL-drink.USIT
'I (usually) drink coffee every morning.'

b. Semeliterative aspect
N??sh?dl??zh.
n?-?-si-sh-?-dl??zh
SEM-3OBJ-ASP-1SUBJ-VL-paint.PERF
'I painted it again.' (or 'I repainted it.')

c. Reversionary aspect
Hooghandi n?dz?.
hooghan-di n?-?-d-y?.
home-ENC REV-3SUBJ-VL-go.PERF
'S/he returned home.' (or 'S/he came back home.')

Since the iterative marker ostensibly overlaps with other aspectual categories and lexical classes, I argue that -ná- is exemplary of grammaticalization pathways (cf. Heine et al. 1991) and of what has been termed 'synchronic' grammaticalization (Robert 2004; cf. Craig 1991, for polygrammaticalization). Namely, I show the postpositional sources for the aspectual phenomena in (1), e.g. -naa (~ naa- ~ na- ~ ne- ~ ni- ~ n-) 'around, in the surrounding' and/or -n? (~ n?- ~ n?- ~ n?- ~ ń-) 'around, encircling'.

This study also proposes that many of the polysemous and homophonous forms of the Navajo verb complex can be accounted for by seeking out all probable sources and divergences (cf. Gaeta 2010). Usually, most apparently homophonous morphemes can be traced back to monosyllabic nouns, verbal stems, or 'preverbal elements.' Lastly, the pathways can extend over several layers of grammaticalization processes. Therefore, a lexical source and its perceptible derivatives commonly co-exist within a single linguistic period.



February 28, 2017—2:00 p.m.

301B Meliora Hall

Andres Buxo-Lugo
PhD student
University of Illinois at Urbana-Champaign



February 23, 2017—1:00 p.m.

301B Meliora Hall

Eleanor Chodroff
Department of Cognitive Science, Johns Hopkins University



February 2, 2017—3:30 p.m.
Mixed Effects Model Tutorial

301B Meliora Hall

Amelia Kimball
Department of Linguistics, University of Illinois at Urbana Champaign

Mixed effects models are widespread in language science because they allow researchers to incorporate participant and item effects into their regression. These models can be robust, useful and statistically valid when used appropriately. However, a mixed effects regression is implemented with an algorithm, which may not converge on a solution. When convergence fails, researchers may be forced to abandon a model that matches their theoretical assumptions in favor of a model that converges. We argue that the current state of the art of simplifying models in response to convergence errors is not based in good statistical practice, and show that this may lead to incorrect conclusions. We propose implementing mixed effects models in a Bayesian framework. We give examples of two studies in which the maximal mixed effects models justified by the design do not converge, but fully specified Bayesian models with weakly informative constraints do converge. We conclude that a Bayesian framework offers a practical--and, critically, a statistically valid--solution to the problem of convergence errors.
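The stabilizing role of prior information that the abstract appeals to can be illustrated with a toy partial-pooling calculation (a sketch with invented reaction-time data and assumed variance values, not the authors' models): each participant's estimate is a precision-weighted blend of their own mean and the grand mean, so a participant with very little data is shrunk toward the group instead of yielding the kind of degenerate variance estimate that makes frequentist mixed models fail to converge.

```python
import statistics

# Toy data: reaction times (ms) per participant; participant C has only one
# trial, which is exactly where unpooled estimates become unstable.
data = {
    "A": [510, 495, 520, 505],
    "B": [470, 480, 475],
    "C": [650],
}

grand_mean = statistics.mean(x for xs in data.values() for x in xs)
between_var = 400.0   # assumed variance of participant means (weak prior info)
within_var = 900.0    # assumed trial-level noise variance

def posterior_mean(xs):
    """Normal-normal partial pooling: precision-weighted blend of the
    participant's own mean and the grand mean."""
    n = len(xs)
    prec_prior = 1.0 / between_var
    prec_data = n / within_var
    w = prec_data / (prec_data + prec_prior)
    return w * statistics.mean(xs) + (1 - w) * grand_mean

for participant, xs in data.items():
    print(participant, round(statistics.mean(xs), 1), "->",
          round(posterior_mean(xs), 1))
```

With a single trial, participant C's raw mean of 650 ms is pulled about two-thirds of the way back toward the grand mean; a fully Bayesian model performs this kind of shrinkage jointly for all variance components, which is why weakly informative priors can sidestep the convergence problem.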



February 2, 2017—1:45 p.m.
Categorical vs. Episodic Memory for Pitch Accents in American English

301B Meliora Hall

Amelia Kimball
Department of Linguistics, University of Illinois at Urbana Champaign

Phonological accounts of speech perception postulate that listeners map variable instances of speech to categorical features and remember only those categories. Other research maintains that listeners perceive and remember subcategorical phonetic detail. Our study probes memory to investigate the reality of categorical encoding for prosody—when listeners hear a pitch accent, what do they remember? Two types of prosodic variation are tested: phonological variation (presence vs. absence of a pitch accent), and variation in phonetic cues to pitch accent (F0 peak, word duration). We report results from six experiments that test memory for phonological pitch accent vs. phonetic cues. Our results suggest that listeners encode both categorical distinctions and phonetic detail in memory, but categorical distinctions are more reliably retrieved than cues in later tests of episodic memory. They also show that listeners may vary in the degree to which they remember prosodic detail.



January 23, 2017—12:00 p.m.
Models of retrieval in sentence comprehension: A computational evaluation using Bayesian hierarchical modeling

301B Meliora Hall

Bruno Nicenboim
PhD Graduate Student
Department of Linguistics, University of Potsdam, Germany

Research on similarity-based interference has provided extensive evidence that the formation of dependencies between non-adjacent words relies on a cue-based retrieval mechanism. There are two different models that can account for one of the main predictions of interference, i.e., a slowdown at a retrieval site, when several items share a feature associated with a retrieval cue: Lewis and Vasishth's (2005) activation-based model and McElree's (2000) direct access model. Even though these two models have been used almost interchangeably, they are based on different assumptions and predict differences in the relationship between reading times and response accuracy. The activation-based model follows the assumptions of the ACT-R framework, and its retrieval process behaves as a lognormal race between accumulators of evidence with a single variance. Under this model, accuracy of the retrieval is determined by the winner of the race and retrieval time by its rate of accumulation. In contrast, the direct access model assumes a model of memory where only the probability of retrieval can be affected, while the retrieval time is constant; in this model, differences in latencies are a by-product of the possibility of backtracking and repairing incorrect retrievals. We implemented both models in a Bayesian hierarchical framework in order to evaluate them and compare them. We show that some aspects of the data are better fit under the direct access model than under the activation-based model. We suggest that this finding does not rule out the possibility that retrieval may be behaving as a race model with assumptions that follow less closely the ones from the ACT-R framework. 
We show that by introducing a modification of the activation model, i.e., by assuming that the accumulation of evidence for retrieval of incorrect items is not only slower but also noisier (i.e., different variances for the correct and incorrect items), the model can provide a fit as good as that of the direct access model.
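The contrast can be illustrated with a toy simulation of a lognormal race (a sketch under invented parameters, not the authors' implementation): each accumulator draws a finishing time, the faster one determines which item is retrieved and when, and inflating only the incorrect accumulator's variance changes the relationship between latency and accuracy.

```python
import random
random.seed(1)

def avg(xs):
    return sum(xs) / len(xs) if xs else float("nan")

def race(mu_correct=-0.2, mu_incorrect=0.2, sigma_c=0.5, sigma_i=0.5,
         trials=10000):
    """Lognormal race: each accumulator's finishing time is LogNormal(mu, sigma);
    the faster accumulator wins, fixing both the response and its latency."""
    correct, incorrect = [], []
    for _ in range(trials):
        t_c = random.lognormvariate(mu_correct, sigma_c)
        t_i = random.lognormvariate(mu_incorrect, sigma_i)
        (correct if t_c < t_i else incorrect).append(min(t_c, t_i))
    return len(correct) / trials, avg(correct), avg(incorrect)

# Single-variance race, as in the ACT-R-style activation model
print(race())
# Modified race: incorrect items accumulate more noisily as well as more slowly
print(race(sigma_i=1.0))
```

Accuracy falls out of which accumulator wins more often, and latencies out of the winners' times, so the two model variants make different joint predictions for reading time and response accuracy.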


2016


December 9, 2016—3:30 p.m.

Lattimore 513

Abby Cohn
Professor, Linguistics
Cornell University



December 2, 2016—3:30 p.m.
The memory retrieval process in reflexive-antecedent dependency resolution

Lattimore 513

Zhong Chen
Assistant Professor, Department of Modern Languages & Cultures
RIT



October 6, 2016—11:00 a.m.
Singing in tone languages: from mystery to research question(s)

Meliora 366

D. Robert Ladd
Professor, Linguistics and English Language
The University of Edinburgh

Singing in tone languages, a perennial source of mystery to speakers of non-tonal languages, has been the subject of a good deal of research since the turn of the century. This research shows that the solution to respecting both the linguistic (tonal) and musical functions of pitch crucially involves text-setting constraints. Specifically, in most of the dozen or more Asian and African tone languages where the question has been studied, the most important principle in maintaining the intelligibility of song texts seems to be the avoidance of what we might (hijacking a term from music theory) call "contrary motion": musical pitch movement up or down from one syllable to the next should not be the opposite of the linguistically specified pitch direction. I will review some of the empirical evidence for the basic constraint from recent research, and will discuss differences between languages and musical genres in such things as how strictly the constraint is observed. I will also briefly consider two more general issues: (1) how tonal text-setting might be incorporated into a general theory that includes traditional European metrics, and (2) what (if anything) the avoidance of contrary motion tells us about the phonological essence of tonal contrasts.
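The "no contrary motion" constraint is easy to state procedurally. The sketch below (with a hypothetical two-tone coding and invented melodies, not data from the talk) counts syllable transitions where the melodic direction opposes the lexically specified tone direction:

```python
def sign(x):
    return (x > 0) - (x < 0)

def contrary_motion_violations(tone_pitches, melody_pitches):
    """Count syllable-to-syllable transitions where the melody moves in the
    direction opposite to the lexical tone (level steps impose no constraint)."""
    violations = 0
    for i in range(1, len(tone_pitches)):
        t = sign(tone_pitches[i] - tone_pitches[i - 1])
        m = sign(melody_pitches[i] - melody_pitches[i - 1])
        if t != 0 and m != 0 and t != m:
            violations += 1
    return violations

# Hypothetical 4-syllable text with tones H L H H (coded 1/0) set to two melodies
text = [1, 0, 1, 1]
good = [67, 64, 69, 69]   # falls then rises with the tones
bad = [60, 67, 64, 62]    # rises against a fall, falls against a rise
print(contrary_motion_violations(text, good),
      contrary_motion_violations(text, bad))
```

The first setting incurs no violations and the second incurs two; a strict genre would prefer the first, while a looser one might merely penalize the second.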



October 4, 2016—11:00 a.m.
Forming wh-questions in Shona: A comparative Bantu perspective

Lattimore 513

Jason Zentz
Postdoctoral Associate
Yale University

Bantu languages, which are spoken throughout most of sub-Saharan Africa, permit wh-questions to be constructed in multiple ways, including wh-in-situ, full wh-movement, and partial wh-movement. Shona, a Bantu language spoken by about 13 million people in Zimbabwe and Mozambique, allows all three of these types. In my dissertation, which I report on here, I conducted the first in-depth examination of Shona wh-questions, exploring the derivational relationships among these strategies.

Wh-in-situ questions have received a wide variety of treatments in the syntactic literature, ranging from covert or disguised movement to postsyntactic binding of the wh-phrase by a silent question operator. In Bantu languages, wh-in-situ questions are often taken to be derived via a non-movement relation (e.g., Carstens 2005 for Kilega, Diercks 2010 for Lubukusu, Muriungi 2003 for Kîîtharaka, Sabel 2000 for Kikuyu and Duala, Sabel & Zeller 2006 for Zulu, Schneider-Zioga 2007 for Kinande), but alternatives have rarely been considered. I demonstrate how movement-based analyses that have been proposed for wh-in-situ in non-Bantu languages make the wrong predictions for Shona wh-in-situ, which lacks word order permutation, extraction marking, island effects, and intervention effects. These properties provide support for the traditional Bantuist view that the relation between the pronunciation site of an in-situ wh-phrase and its scopal position in the left periphery is not movement; I claim that in Shona it is unselective binding.

Full wh-movement in Shona gives rise to questions that bear a certain similarity to English wh-questions. However, using a range of diagnostics including extraction marking, island effects, reconstruction effects, and the distribution of temporal modifiers, I argue that what appears to be full wh-movement in Shona actually has a cleft structure: the wh-phrase moves to become the head of a relative clause, which is selected by a copula in the matrix clause. Just as in wh-in-situ, an ex-situ wh-phrase is pronounced lower than its scopal position, and the relation between these two positions is established via unselective binding. Additional evidence for this proposal comes from the sensitivity of partial wh-movement to island boundaries below but not above the pronunciation site of the wh-phrase, a pattern that has been predicted by previous analyses (e.g., Abels 2012, Sabel 2000, Sabel & Zeller 2006) but for which empirical support has been lacking until now. I therefore unify full and partial wh-movement under a single analysis for cleft-based wh-ex-situ that involves a step of relativization (independently needed for relative clauses) and a step of unselective binding (independently needed for wh-in-situ).



September 30, 2016—3:30 p.m.
The perception/production link at the individual and community levels: focusing on sound change

Lattimore 513

Andries Coetzee
Associate Professor of Linguistics
University of Michigan

This presentation reviews current research being conducted in the Phonetics Lab of the University of Michigan. Our Lab's research program focuses on community-level variation in speech production and perception, and on how individual members of a community perform within the complex variable landscape of their speech community.

Since ongoing sound changes are characterized by variability, understanding the structure of variation, and in particular the relation between perception and production in individual members of a speech community, can shed light on how sound changes are initiated and how they progress through a speech community. Do perception and production norms change together, or are they partially independent such that change in the one can lead change in the other? If they change separately, which is more likely to change first? Are individuals who produce innovative forms also more likely to rely on the innovative cues in perception?

To investigate these questions, this presentation will focus on the results of a study on the ongoing process of tonogenesis in Afrikaans. In Afrikaans, the historical distinction between voiced and voiceless plosives is currently being replaced by a distinction between high and low tone on neighboring vowels. This presentation will show how this change is realized in the speech community, with particular focus on the relation between perception and production norms in individual members of the community. The presentation will end with a brief review of a currently ongoing study that uses eye-tracking technology and airflow measures to investigate the relationship between the perception and production of anticipatory nasalization in English ('sent' produced with a nasal vowel). The implications of these studies for theories about the cognitive representation of speech and theories of sound change will be considered.



May 11, 2016—12:00 p.m.
Does Predictability Affect Reference Form? It depends on the verb

Kresge Room, Meliora 269

Jennifer E. Arnold
Professor, Department of Psychology and Neuroscience
University of North Carolina, Chapel Hill

The structure of events appears to influence the way people talk about them. In some cases (see ex. 1), event roles have a much higher tendency to be mentioned again - that is, they are predictable. In emotion verbs like (1), Sandy is considered the expected cause of the scaring/fearing events, and is more likely to be mentioned again (Fukumura & van Gompel, 2010; Hartshorne et al., 2015; Kehler et al., 2008). In (2), Kathryn is the goal of the transfer event, and is expected to participate in the next event, thus making her more likely to be mentioned (Stevenson et al., 1994). Critically, these biases depend on the relation between the two clauses, where the implicit causality effects in (1) are supported by a causal continuation, and the goal bias in (2) is supported by a next-mention continuation.

1a. Sandy scared Kathryn because...
1b. Kathryn feared Sandy because...
2a. Sandy threw the ball to Kathryn. Then...
2b. Kathryn caught the ball from Sandy. Then...

A debated question is whether thematic role predictability affects the use of reduced referential expressions, like pronouns. Sentence-completion experiments have yielded conflicting data, with some authors arguing that pronouns are more common for predictable referents (Arnold, 2001) and others presenting data that thematic roles have no effect on pronoun use (Fukumura & van Gompel, 2010; Kehler et al., 2008).

I present the results of a series of studies, which examined this question in detail. We designed a novel story-telling task, in which participants heard a description of one panel, and provided an oral description of the second panel (see Fig. 1).

Participant hears:
"The butler gave a fur coat to the maid" OR "The maid received a fur coat from the butler." Response: {The butler / He...}

In experiments examining goal-source verbs, we found strong support for the hypothesis that thematic role does influence referential form. However, experiments examining emotion verbs presented mixed results. A corpus analysis suggests that these verb types may differ in the way they are used in discourse, affecting both the perceived predictability of discourse entities, and their relationship to discourse accessibility.

  • Arnold, J.E. (2001). The effect of thematic roles on pronoun use and frequency of reference continuation. Discourse Processes, 31(2), 137-162.
  • Fukumura, K. & van Gompel, R. P. G. (2010). Choosing anaphoric expressions. Journal of Memory and Language, 62, 52-66.
  • Hartshorne, J.K., O'Donnell, T.J., & Tenenbaum, J.B. (2015). The causes and consequences explicit in verbs. LCP, 30:6, 716-734.
  • Kehler, A., Kertz, L., Rohde, H. & Elman, J., (2008). Coherence and coreference revisited. Journal of Semantics, 25, 1-44.
  • Rosa, E. C., & Arnold, J. E. (under review). Predictability affects production: Thematic roles affect reference form selection. UNC Chapel Hill.
  • Stevenson, R., Crawley, R., & Kleinman, D. (1994). Thematic roles, focusing and the representation of events. LCP, 9, 519-548.


May 6, 2016—3:00 p.m.
TBA

513 Lattimore Hall

Zhong Chen
Assistant Professor
Department of Modern Languages, Rochester Institute of Technology



April 29, 2016—4:00 p.m.
What does syntax have to do with island effects?

513 Lattimore Hall

Rui Chaves
Associate Professor
Department of Linguistics, University at Buffalo - SUNY



April 22, 2016—11:00 a.m.
TBA

Meliora 366

Jill Warker
Department of Psychology, University of Scranton



April 8, 2016—3:00 p.m.
TBA

513 Lattimore Hall

Jim Wood
Postdoctoral Associate
Department of Linguistics, Yale University



March 15, 2016—9:30 a.m.
Coreference and the context of alternatives

Meliora 366

Hannah Rohde
University of Edinburgh

The study of pragmatics examines the mechanisms underlying speakers' ability to construct meaning in context and hearers' ability to infer meaning beyond what a speaker has explicitly said. These abilities are taken to depend both on the properties of what is said as well as on considerations of what isn't said. In this talk, I present a series of psycholinguistic studies that highlight how the context of alternatives provides knowledge that is brought to bear on one pragmatic phenomenon, coreference. The context of alternatives is shown to guide *how* speakers refer (probabilities over choice of referring expression), whereas coherence-driven cues regarding alternative meanings capture *who* speakers are likely to refer to (prior probabilities over choice of mention). Listeners in turn can be understood to combine these probabilities to estimate the likely referent of an ambiguous expression, as predicted by a Bayesian model of coreference. What is most intriguing about the data is the apparent independence of contributions from factors related to message meaning (implicit causality, coherence) and those related to message form (information structure). I also discuss work in two other coreference domains in which the context of alternatives is relevant: the assessment of production costs and the role of focus marking in evoking a set of alternatives.
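The Bayesian combination described above can be sketched numerically (the probabilities are invented for illustration, not drawn from the talk): the prior over next mention comes from meaning-driven cues, the likelihood of choosing a pronoun comes from form-driven cues, and their product gives the listener's interpretation.

```python
# Invented probabilities for a two-referent context (illustration only)
prior = {"Sandy": 0.7, "Kathryn": 0.3}       # who: meaning-driven next-mention bias
p_pronoun = {"Sandy": 0.9, "Kathryn": 0.4}   # how: form choice, information structure

def interpret(heard_pronoun=True):
    """Bayesian listener: P(referent | form) is proportional to
    P(form | referent) * P(referent)."""
    like = {r: p_pronoun[r] if heard_pronoun else 1 - p_pronoun[r] for r in prior}
    unnorm = {r: prior[r] * like[r] for r in prior}
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

print(interpret())        # hearing "she": belief over referents
print(interpret(False))   # hearing a full name instead
```

On these numbers, a pronoun is read as the highly mentionable Sandy, while a full name shifts belief toward Kathryn, showing how independent meaning-based and form-based factors combine in a single posterior.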



February 26, 2016—3:00 p.m.
Linguistic diversity and language contact: Amazonian perspectives

513 Lattimore Hall

Patience Epps
Associate Professor
Department of Linguistics, University of Texas at Austin



February 19, 2016—3:00 p.m.
Variant-centered variation and the 'like' conspiracy

513 Lattimore Hall

Aaron Dinkin
Assistant Professor
Department of Linguistics, University of Toronto


2015


December 3, 2015—12:30 p.m.
Learning syntax from millions of words

301B Meliora Hall

John Pate
Department of Linguistics, University at Buffalo

Grammar induction is the task of learning syntactic structures from strings of words, without observing those structures. Because children also do not observe syntactic structures, grammar induction systems provide a route to investigating the assumptions about grammatical form a child might make when learning syntax. However, previous grammar induction research has relied on expensive "batch" algorithms that re-analyze the entire dataset multiple times, and so has been limited to small datasets with only tens of thousands of word tokens. Such small data is likely too sparse to learn from word strings alone, so previous work has used word representations (such as part-of-speech tags) rather than words. The resulting systems are therefore of limited applicability to child language research.

In this talk, I will present a new streaming algorithm for Variational Bayesian Probabilistic Context Free Grammar inference that analyzes each sentence only once. This algorithm allows us to learn dependency syntax from the words (alone) of millions of word tokens and outperforms the batch algorithm both in absolute terms and when controlling for computational resources. Additionally, the results show that learning from part-of-speech tags leads to an objective function that is full of local optima that don't correspond to dependency syntax, but learning from words does not have this problem. These results improve the prospect of using grammar induction systems to understand the learning biases of syntax-acquiring children.



November 20, 2015—3:30 p.m.
Selection-Coordination Theory

513 Lattimore Hall

Sam Tilsen
Assistant Professor
Department of Linguistics, Cornell University

Phonological theories commonly analyze speech utterances as composed of hierarchically organized units, such as features/gestures, segments, moras, and syllables. Yet it is not well understood why this hierarchical organization is observed. This talk presents the selection-coordination theory of speech production, which holds that hierarchical organization emerges from a recurring trend in speech development whereby children acquire coordinative regimes of control over articulatory gestures that were previously competitively selected. In this framework, segments, moras, and syllables are understood as differently-sized instantiations of the same type of motor planning unit, and these units differ with regard to when in the course of development they dominate the organization of gestural selection. This talk will show how the theory provides explanatory accounts of patterns in phonological development, cross-linguistic variation in phonological structure, and articulatory patterns in speech.




November 16, 2015—12:00 p.m.
Learning to become a native listener

301B Meliora Hall

Reiko Mazuka
Riken Brain Science Institute

The goal of our research is to identify the processes by which human infants with no prior linguistic knowledge and highly limited cognitive skills acquire the ability to understand and manipulate highly complex language systems in a short time and without explicit instruction. The talk will present results from studies that investigated how Japanese infants learn certain characteristics of Japanese phonology, knowledge of which is considered prerequisite for the acquisition of abstract, symbolic properties of language. One distinctive characteristic of Japanese phonology, for example, is its duration-based vowel distinction, which can be used both for lexical differentiation (obasan vs. obaasan) and for phrasal/prosodic differentiation (dakara vs. dakaraaa). How do babies learn that lexical and prosodic information systems are different, and how do they determine whether a given long or short vowel is being used lexically or prosodically? Our studies compare babies' behavioral responses with speech input provided by their environment, computational acquisition models, and brain imaging studies. The talk will discuss results from these and related studies, highlighting the unique opportunities that Japanese language properties provide to disentangle fundamental questions pertaining to acquisition.



November 6, 2015—3:30 p.m.
Tutorial on dynamical systems analysis in theoretical syntax and phonology

513 Lattimore Hall

Khalil Iskarous
Department of Linguistics, University of California

Many contributors to theoretical syntax and phonology, e.g. Goldsmith, Uriagereka, Vergnaud, Idsardi, Smolensky, and Prince, have used dynamical systems analysis to make sense of some fundamental computational properties of natural language. Yet, dynamical systems analysis does not usually form part of the linguistics curriculum. In this tutorial, dynamical systems analysis will be introduced from scratch, and then some basic analogies will be drawn between deep computational concepts in linguistic theory, and dynamical computation.



September 28, 2015—12:00 p.m.

Hylan 105

Jasmeen Kanwaal
Linguistics and Cognitive Science at UC San Diego



September 25, 2015—2:00 p.m.
Informationally redundant utterances trigger pragmatic inferences

513 Lattimore Hall

Ekaterina Kravtchenko
PhD student in Vera Demberg's lab
Saarland University

Work in pragmatics shows that speakers typically avoid stating information already given in the discourse (Horn, 1984). However, it's unclear how listeners interpret utterances which assert material that can be inferred using prior knowledge. We argue that informationally redundant utterances can trigger context-dependent implicatures, which increase utterance utility in line with listener expectations (Atlas & Levinson, 1981; Horn, 1984). In two experiments, we look at utterances which refer to event sequences describing common activities (scripts, such as 'going to a grocery store').

The first experiment shows that listeners may assign informationally redundant event mentions (such as 'John went to the store. He paid the cashier!') an 'informative' pragmatic interpretation, by reinterpreting the activity in question as relatively atypical in context (i.e. 'John does not typically pay the cashier'). Such a (re-)interpretation does not arise for event mentions that are informative either a priori, or in context. A second experiment, which replaced the exclamation point at the end of the utterance with a period, however, shows that the effect is substantially tempered when the utterance is not otherwise marked as important or surprising. This shows that discourse status, independent of the linguistic content of an utterance, can influence the likelihood of it giving rise to a specific pragmatic inference.

Overall, these studies show that explicit mention of highly inferable events may be systematically reconciled with an assumption that a speaker is being informative, giving rise to context-dependent implicatures regarding event typicality. This effect, however, is modulated by the informational status of the utterance, possibly similar to the effects of prosody on implicature generation. Overall, the results suggest that excessive informational redundancy of event utterances is perceived as anomalous, and that listeners alter their situation models in order to accommodate it.



June 3, 2015—12:00 p.m.
Abstract knowledge and item-specific experience in language processing and change

Kresge Room, Meliora 269

Emily Morgan
PhD Graduate Student
Department of Linguistics, University of California, San Diego

A pervasive question in language research is how we reconcile abstract/generative linguistic knowledge with knowledge of specific lexical items' idiosyncratic properties. For example, many binomial expressions of the form "X and Y" have a preferred order (e.g. "bread and butter" > "butter and bread"), but the source of these preferences remains largely unknown. Preferences might arise from violable constraints referencing the semantic, phonological, and lexical properties of the component words (e.g. short before long), or they might also derive from frequency of one's experience with a binomial's two orderings. I will argue that abstract knowledge and item-specific experience trade off rationally and gradiently in determining binomial ordering preferences: the more experience a speaker has with a binomial, the more heavily they rely on that experience over abstract constraints. I will demonstrate that this tradeoff is crucial for explaining both online sentence processing and language structure/change: In forced-choice judgments and self-paced reading tasks, I will demonstrate that the source of preferences gradually shifts from abstract knowledge to item-specific experience as amount of experience increases. Moreover, using corpus analysis and computational modeling, I will demonstrate that the strength of ordering preferences also depends upon the amount of experience one has with an expression: abstract knowledge creates weak preferences for infrequently attested items, while item-specific experience strengthens those preferences for more frequently attested items. These findings support theories of grammar that flexibly allow for both compositional generation and holistic reuse of stored examples.
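The proposed tradeoff behaves like a pseudo-count prior: with little experience the abstract preference dominates, and with extensive experience the observed counts dominate. A minimal sketch (invented numbers and an assumed prior strength, not the talk's actual model):

```python
def preference(abstract_pref, n_preferred, n_total, prior_strength=10):
    """Posterior preference for one ordering of a binomial: the abstract
    constraints act as prior_strength pseudo-observations, and direct
    experience contributes real counts, which dominate as they accumulate."""
    return (prior_strength * abstract_pref + n_preferred) / (prior_strength + n_total)

# Rare binomial: 1 of 2 attested tokens in the preferred order stays near the
# weak abstract preference of 0.6
print(round(preference(0.6, 1, 2), 3))
# Frequent binomial ("bread and butter"-like): 98 of 100 tokens, so experience wins
print(round(preference(0.6, 98, 100), 3))
```

The low-frequency item ends up near 0.58 while the high-frequency item ends up near 0.95, mirroring the finding that abstract knowledge yields weak preferences for rare binomials and item-specific experience sharpens preferences for frequent ones.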



May 29, 2015—3:30 p.m.
Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

513 Lattimore Hall

E. Allyn Smith
Associate Professor
University of Quebec at Montreal

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kolbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stevenson 2007). Stevenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.



May 8, 2015—3:30 p.m.
Bayesian pragmatics: lexical uncertainty, compositionality, and the typology of conversational implicature

513 Lattimore Hall

Roger Levy
Associate Professor
Department of Linguistics, University California San Diego

A central scientific challenge for our understanding of human cognition is how language simultaneously achieves its unbounded yet highly context-dependent expressive capacity. In constructing theories of this capacity it is productive to distinguish between strictly semantic content, or the "literal" meanings of atomic expressions (e.g., words) and the rules of meaning composition, and pragmatic enrichment, by which speakers and listeners can rely on general principles of cooperative communication to take understood communicative intent far beyond literal content. However, there has historically been only limited success in formalizing pragmatic inference and its relationship with semantic composition. Here I describe recent work within a Bayesian framework of interleaved semantic composition and pragmatic inference, building on the Rational Speech-Act model of Frank and Goodman and the game-theoretic work of Degen, Franke, and Jäger. These models formalize the goal of linguistic communicative acts as bringing the beliefs of the listener into as close an alignment as possible with those of the speaker while maintaining brevity. First I show how two major principles of Levinson's typology of conversational implicature fall out of the most basic Bayesian models: Q(uantity) implicature, in which utterance meaning is refined through exclusion of the meanings of alternative utterances; and I(nformativeness) implicature, in which utterance meaning is refined by strengthening to the prototypical case. Q and I are often in tension; I show that the Bayesian approach constitutes the first theory making quantitative predictions regarding their relative strength in interpretation of a given utterance, and present evidence from a large-scale experiment on interpretation of utterances such as "I slept in a car" (was it my car, or someone else's car?) supporting the theory's predictions. 
I then turn to questions of compositionality, focusing on two of the most fundamental building blocks of semantic composition, the words "and" and "or". Canonically, these words are used to coordinate expressions whose semantic content is at least partially disjoint ("friends and enemies", "sports and recreation"), but closer examination reveals that they can coordinate expressions whose semantic content is in a one-way inclusion relation ("roses and flowers", "boat or canoe") or even in a two-way inclusion relation, or total semantic equivalence ("oenophile or wine-lover"). But why are these latter coordinate expressions used, and how are they understood? Each class of these latter expressions falls out as a special case of our general framework, in which their prima facie inefficiency for communicating their literal content triggers a pragmatic inference that enriches the expression's meaning in the same ways that we see in human interpretation. More broadly, these results illustrate the explanatory reach and power of recursive, compositional probabilistic models for the study of linguistic meaning and pragmatic communication.
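The Rational Speech-Act recursion that the framework builds on can be written compactly. The toy below (a standard textbook-style "some"/"all" scale with invented settings, not the talk's actual models) derives the Q(uantity) implicature: a pragmatic listener inverts a speaker who prefers utterances in proportion to their literal informativity raised to a power alpha, so "some" comes to suggest "not all".

```python
# Toy scalar domain: how many of three objects have the property
worlds = ["one", "two", "all"]
# Literal semantics: "some" is true in every world, "all" only in "all"
semantics = {"some": {"one", "two", "all"}, "all": {"all"}}

def normalize(d):
    z = sum(d.values())
    return {k: v / z if z else 0.0 for k, v in d.items()}

def literal_listener(u):
    # Uniform belief over the worlds where the utterance is literally true
    return normalize({w: float(w in semantics[u]) for w in worlds})

def speaker(w, alpha=4.0):
    # Speaker prefers utterances in proportion to literal informativity^alpha
    return normalize({u: literal_listener(u)[w] ** alpha for u in semantics})

def pragmatic_listener(u):
    # Bayesian inversion of the speaker with a uniform prior over worlds
    return normalize({w: speaker(w)[u] for w in worlds})

print(pragmatic_listener("some"))   # "all" is now heavily dispreferred
```

Although "some" is literally true in the "all" world, the pragmatic listener reasons that a speaker in that world would have said "all", and so concentrates belief on the partial worlds.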



May 6, 2015—12:00 p.m.
Language comprehenders as reverse engineers

Kresge Room, Meliora 269

Roger Levy
Associate Professor
Department of Linguistics, University of California, San Diego

From the last several decades of research we know that human language comprehension is highly optimized to the demands presented by real-time spoken and written input. We are finely tuned to the detailed statistics of our linguistic experience, yet retain an extraordinary capacity to generalize beyond that experience to novel comprehension environments. A leading hypothesis regarding this capacity for generalization is that comprehension involves implicitly deploying structured generative models of language production, with the comprehender effectively "reverse-engineering" the speaker's intended message through Bayesian inference. Here I present work elucidating the structure of these generative models under this hypothesis. In the first part of the talk I discuss our recent work on noisy-channel models of language comprehension, in which a speaker's intended message is distorted through processes including speaker error, perceptual noise, and memory limitation before analysis by our system of language understanding. I present results showing how noisy-channel comprehension can lead comprehenders to entertain and even adopt grammatical interpretations of an input inconsistent with its literal content. I also present results extending the range of documented noise operations. In the second part of the talk I explore how comprehenders model speaker choice in syntactic alternations influenced by multiple factors. For example, preferences in the dative alternation ("Pat gave Kim a book" versus "Pat gave a book to Kim") have been argued to reflect (i) differences in the shade of meaning encoded by each syntactic option, and (ii) principles of optimal linear ordering such as putting short constituents before long. If both (i) and (ii) are true and comprehenders model the syntactic choice as a generative process driven by these multiple causes, we should see explaining-away effects between linear-ordering optimality and inferred meaning intent in comprehension. 
We demonstrate these effects for the first time. More generally, this work underscores the power of using generative models to account for human language comprehension, and opens the door to a range of further explorations of the structure of this generative knowledge.
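
The noisy-channel idea in the first part of the talk amounts to Bayesian inference over intended sentences given a perceived string. The following toy sketch is illustrative only; the sentences, noise process, and probabilities are assumptions, not the talk's models:

```python
# Toy noisy-channel comprehension: infer the intended sentence from a
# perceived string via Bayes' rule.

prior = {
    # A plausible intended message is far more probable a priori...
    "the mother gave the candle to the daughter": 0.99,
    # ...than an implausible one.
    "the mother gave the candle the daughter": 0.01,
}

def likelihood(perceived, intended):
    # P(perceived | intended): a toy noise process in which one word can
    # be lost between speaker and listener.
    p, i = perceived.split(), intended.split()
    if p == i:
        return 0.95                                    # faithful transmission
    if any(i[:k] + i[k + 1:] == p for k in range(len(i))):
        return 0.03                                    # one word deleted by noise
    return 1e-6                                        # anything else: negligible

def posterior(perceived):
    scores = {s: prior[s] * likelihood(perceived, s) for s in prior}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

# Hearing the implausible string, the comprehender infers that the speaker
# probably intended the plausible sentence and a word ("to") was lost.
post = posterior("the mother gave the candle the daughter")
```

This is the sense in which comprehenders can "entertain and even adopt grammatical interpretations of an input inconsistent with its literal content": the posterior favors a repaired reading over the literal one when the prior penalty on the literal reading outweighs the noise cost of the repair.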



May 1, 2015—12:00 p.m.
Exploring the limits of syntactic structures

513 Lattimore Hall

Jean-Pierre Koenig and Karin Michelson
Professor and Chair
Department of Linguistics, University at Buffalo, SUNY

Syntax has played a central role in investigations of the nature of human languages. But there are at least two distinct ways of conceiving of syntax: the set of rules that enable speakers and listeners to combine the meaning of expressions (compositional syntax), or the set of formal constraints on the combinations of expressions (formal syntax). The question that occupies us in this talk is whether all languages include a significant formal syntax component or whether there are languages in which most syntactic rules are exclusively compositional. Our claims are (1) that Oneida (Northern Iroquoian) has almost no formal syntax component and is very close to a language that includes only a compositional syntax component and (2) that the little formal syntax Oneida has does not require making reference to syntactic features. Our analysis of Oneida suggests that what is often taken as characteristic of human languages (e.g., syntactic selection/argument structure, syntactic binding, syntactic unbounded dependencies, syntactic parts of speech) is merely overwhelmingly frequent in the world's languages. Our research also suggests that a critical function of compositional syntax is to manage the binding of semantic variables, a function anticipated by Quine's work on the nature of (semantic) variables.



April 10, 2015—3:30 p.m.
Multiple Perspectives on Understanding Prosodic Development

513 Lattimore Hall

Jill Thorson
Postdoctoral Research Associate
Communication Analysis and Design Laboratory, Northeastern University

Infants are born with sensitivities to their native language's prosody (i.e., melody and rhythm). My research program is designed to understand the ways in which this attunement to prosody affects early language development over the first years of life. Specifically, this work concentrates on how prosody impacts early attentional processing, word learning, and speech production, with a focus on the importance of including a phonological account alongside an acoustic-phonetic one. Two lines of inquiry deploy a variety of research methods (e.g., eyetracking, corpora, and speech elicitation) and consider the role of prosody from a perceptual and a productive perspective. Additionally, the role of technology in methodological innovation is explored, such as how touch-screen interfaces and voice synthesis can effectively address questions regarding language learning in atypical populations. Future research on early language acquisition will investigate the benefits of integrating these various perspectives and methodologies, and how this multi-faceted approach can improve our understanding of typical and atypical prosodic development.



April 10, 2015—11:00 a.m.
Learning to Execute Natural Language

Meliora 366

Percy Liang
Assistant Professor of Computer Science
Stanford University

A natural language utterance can be thought of as encoding a program, whose execution yields its meaning. For example, "the tallest mountain" denotes a database query whose execution on a database produces "Mt. Everest." We present a framework for learning semantic parsers that map utterances to programs, but without requiring any annotated programs. We first demonstrate this paradigm on a question answering task on Freebase. We then show that the same framework can be extended to the more ambitious problem of querying semi-structured Wikipedia tables. We believe that our work provides both a practical way to build natural language interfaces and an interesting perspective on language learning that links language with desired behavior.
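
The utterance-as-program idea can be made concrete with a toy example. The database and the single hand-written rule below are illustrative assumptions; the talk's framework learns such mappings rather than hard-coding them:

```python
# An utterance denotes a query; executing the query against a database
# yields the utterance's meaning.
mountains = {"Mt. Everest": 8849, "K2": 8611, "Kangchenjunga": 8586}

def execute(utterance):
    # Map the utterance to a program (here, an argmax query) and run it.
    if utterance == "the tallest mountain":
        return max(mountains, key=mountains.get)
    raise ValueError("unparsed utterance: " + utterance)

answer = execute("the tallest mountain")  # "Mt. Everest"
```

The learning problem in the talk is to induce the utterance-to-program mapping from question/answer pairs alone, with the program itself treated as a latent variable.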



February 6, 2015—3:30 p.m.
Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

513 Lattimore Hall

E. Allyn Smith
Assistant Professor
University of Quebec at Montreal

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kolbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stevenson 2007). Stevenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.



January 23, 2015—3:30 p.m.
The Dene verbal compound: representing the complex inflectional system of the Dene (Athabaskan) verb

513 Lattimore Hall

Joyce McDonough
Associate Professor
University of Rochester, Linguistics and Brain and Cognitive Sciences

Within a Word and Paradigm approach to morphology, words, not morphemes, are the basic units in the lexicon (Milin et al., 2009; Ackerman & Malouf, 2012; Blevins, 2014, 2015; Plag & Baayen, 2008; Baayen et al., 2014, 2015). Fully inflected words are lexical units, organized into paradigms; paradigms, which encode the relationships between words, are thus fundamental objects in the lexicon. In this framework, much work has been done on nominal inflection and derivational systems. Much less has been done on the more complex inflectional systems of verbal morphology, which may encode rich morphosyntactic functions. In this talk I will lay out the structure of a typologically unusual and highly complex system, the Dene (Athabaskan) verb word, traditionally captured by a position-class template of around 23 prefix positions used to order verbal morphemes. I'll demonstrate that this is an unworkable system. Instead, the Dene verb is an unusual but simple and principled variation on compounding. The model is based on evidence from phonetic studies and lexical patterns.


2014


December 12, 2014—12:30 p.m.
"She be acting like she's black": Linguistic blackness among Korean American youth

513 Lattimore Hall

Elaine Chun
Associate Professor (English)
University of South Carolina

Research on the use of African American English (AAE) by speakers who do not identify as African American has largely focused on how performances of racial 'crossing' (Rampton 1995) may be used to construct masculinity, often in ways that reproduce stereotypes of race and gender (Bucholtz 1999; Chun 2001; Reyes 2005; Bucholtz and Lopez 2011). Such work has drawn attention to at least a few important facts: first, a variety that linguists have classified as an ethnolect of a particular ethnic group can be used in meaningful ways by speakers outside the group; second, ethnolectal features are complexly related to other social dimensions, such as gender and class; and third, language practices have sociocultural consequences for individual identities and community ideologies.

Two concerns that remain are (1) how linguists can productively continue the important project of ethnolectal description--for example, identifying distinctive elements of AAE in ways that recognize meaningful outgroup language use, and (2) how linguists can analyze outgroup uses of AAE without simplistically suggesting that these uses necessarily reproduce stereotypes of black masculinity. In order to address these concerns, I consider the sociolinguistic status of features described by linguists as belonging to AAE, namely, six lexical or morpho-syntactic elements: habitual be, neutral third-person singular verb (e.g., she don't), multiple negation, ain't, the address term girl, and the pronoun y'all. By examining about 100 tokens used by five female youth who identify as Korean American, I discuss some of the conceptual challenges that arise for an ethnolectal model of language and draw on some sociolinguistic and linguistic anthropological concepts, such as ideology, indexicality, persona, stance, voice, and authentication to address these challenges. Finally, I show how qualitative methods of discourse analysis, which attend to the emergent complexity of how language can invoke social meanings, can usefully contribute to our understanding of how linguistic forms relate to social meaning, yet in ways that may still remain complementary with our projects of ethnolectal description.



November 14, 2014—9:00 a.m.
PraatR: An architecture for controlling the phonetics software Praat with the R programming language

513 Lattimore Hall

Aaron Albin
Indiana University-Bloomington

An increasing number of researchers are using the R programming language (http://www.r-project.org/) for the visualization and statistical modeling of phonetic data. However, R's capabilities for analyzing soundfiles and extracting acoustic measurements are still limited compared to free-standing phonetics software such as Praat (http://www.fon.hum.uva.nl/praat/). As such, it is typical to extract the acoustic measurements in Praat, export the data to a textfile, and then import this file into R for analysis. This process of manually shuttling data from one program to the other slows down and complicates the analysis workflow.

This workshop will feature an R package (`PraatR') designed to overcome this inefficiency. Its core R function sends a shell command to the operating system that invokes the command-line form of Praat with an associated Praat script. This script imports a file, applies a Praat command to it, and then either brings the output directly into R or exports the output as a textfile. Since all arguments are passed from R to Praat, the full functionality of the original Praat command is available inside R, making it possible to conduct the entire analysis within a single environment. Moreover, with the combined power of these two programs, many new analyses become possible. Further information on PraatR can be found at http://www.aaronalbin.com/praatr/.
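
The pattern described above — an analysis environment driving command-line Praat with a script and arguments — can be sketched generically. PraatR itself is an R package; the following hypothetical Python analogue only illustrates the shell-call idea, and all paths and the script name are made up:

```python
def praat_command(praat_path, script_path, *args):
    # Build the command line that runs Praat in batch mode on a script,
    # passing arguments through -- the same pattern PraatR's core R
    # function uses via a shell call. Recent Praat versions accept a
    # script with the --run flag; everything here is illustrative.
    return [praat_path, "--run", script_path] + [str(a) for a in args]

cmd = praat_command("/usr/bin/praat", "get_pitch.praat", "speaker1.wav", 75, 600)
# subprocess.run(cmd, check=True) would execute it where Praat is
# installed; the Praat script would then write its measurements to a
# text file (or stdout) for the calling environment to read back in.
```

Because every argument is forwarded from the calling environment to Praat, the full functionality of the underlying Praat command remains available without leaving the analysis session — the inefficiency the package is designed to remove.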

At this workshop, the creator of PraatR will first present a conceptual overview of the package, followed by several hands-on exercises on participants' laptop computers illustrating its range of functionality. At the end of the workshop, the presenter will be available for brief consultations about how PraatR can help you in your own research.

Attendance is limited to 20 participants on a first-come-first-served basis. If you are interested in coming to the workshop, please send an e-mail stating so to Wil Rankinen at wrankine@ur.rochester.edu.



October 17, 2014—9:30 a.m.
Gesture as a mechanism of change

Meliora 366

Susan Goldin-Meadow
Beardsley Ruml Distinguished Service Professor
University of Chicago

The spontaneous gestures that people produce when they talk can index cognitive instability and reflect thoughts not yet found in speech. But gesture can go beyond reflecting thought to play a role in changing thought. I consider whether gesture brings about change because it is itself an action and thus brings action into our mental representations. I provide evidence for this hypothesis but suggest that it's not the whole story. Gesture is a special kind of action--it is representational and thus more abstract than direct action on objects, which may be what allows gesture to play a role in learning.



September 12, 2014—2:00 p.m.
Morphology as a complex discriminative system

513 Lattimore Hall

Jim Blevins
Department of Theoretical and Applied Linguistics, Cambridge University

A number of converging lines of research have recently coalesced into an approach to morphology that combines classical WP models with contemporary data-driven methodologies. One component of this approach is a distributional view of language structure and language learning. Another is a complex-system conception of morphological patterns and inventories. These components are united by a dynamic perspective in which morphological organization reflects communicative pressures, rather than derivational relations or static constraint satisfaction. This talk outlines some of the properties and implications of this perspective and reviews evidence that supports this type of approach over simple-system models of morphology.



April 24, 2014—1:00 p.m.
The temporal structure of auditory perceptual experience

Kresge Room, Meliora 269

David Poeppel
Professor
Psychology and Neural Science Cognition & Perception, New York University

Speech and other dynamically changing auditory signals (and also visual stimuli) typically contain critical information required for successful decoding at multiple time scales. What kind of neuronal infrastructure forms the basis for the requisite multi-time resolution processing? A series of neurophysiological experiments suggests that intrinsic neuronal oscillations at different, ‘privileged’ frequencies may provide some of the underlying mechanisms. In particular, to achieve parsing of a naturalistic input signal into manageable chunks, one mesoscopic-level mechanism consists of the sliding and resetting of temporal windows, implemented as phase resetting of intrinsic oscillations on privileged time scales. The successful resetting of neuronal activity provides time constants – or temporal integration windows – for parsing and decoding signals. One emerging generalization is that acoustic signals must contain some type of edge, i.e. a discontinuity that the listener can use to chunk the signal at the appropriate granularity. Although the ‘age of the edge’ is over for vision, acoustic edges likely play an important (and slightly different) causal role in the successful perceptual analysis of complex auditory signals.



April 11, 2014—1:00 p.m.
Indexicals, Centers and Perspective

Meliora 366

Craige Roberts
Professor, Department of Linguistics
The Ohio State University

I argue for a theory of demonstratives in which:

(a) they're anaphoric (as I argued in Roberts 2002) and in that respect are definites like definite descriptions and pronouns,
but:

(b) they're unlike the other definites in that they really are essentially indexical, something that isn't adequately captured by King (2001), Roberts (2002), or Elbourne (2008),
and that:

(c) we can improve on the account of indexicality in Kaplan (1977), as criticized by Heim 1985, by adopting a view of indexicals in which their central feature is anchoring to a Discourse Center, a self-attributing doxastic agent.

A Discourse Center is a counterpart in the context of utterance of the notion of a Center in Lewis (1979), the latter theory modified as in Stalnaker (2008). Discourse Centers are argued to play three kinds of roles in interpretation:

(i) they are crucial features of a theory of de se interpretation, as in Lewis/Stalnaker;
but here they serve two new roles as well:

(ii) they are the presupposed anaphoric anchors for indexicals; and

(iii) they also serve as arguments of a perspective operator, in a modification of Aloni (2001), permitting an account of de re belief attributions involving all kinds of definite NPs, including indexicals themselves.

Among other things, this will permit a more flexible, perspicuous account of shifted indexicals in languages like Amharic (Schlenker 2003, Anand & Nevins 2004, Deal 2013, Sudo 2012), and a natural account of so-called fake indexicals of Kratzer (2009).



March 28, 2014—1:00 p.m.
Statistical learning in semi-real language acquisition

CSB 601

Casey Lew-Williams
Department of Communication Sciences and Disorders, Northwestern University

Infants and toddlers have a prodigious ability to find structure (such as words) in patterned input (such as language). Learning regularities between sounds and words often occurs seamlessly in early development, leading some to conclude that statistical learning plays a role in enabling language in the first place. This might be true, or alternatively, it might be an irrelevant artifact of distilled laboratory tasks. The ultimate explanatory power depends partially on whether we define statistical learning narrowly (transitional probabilities between syllables) or broadly (any kind of input-based pattern extraction), and partially on whether statistical learning can scale up to explain natural language acquisition. Here I ask: Can statistical learning withstand the complexity inherent in (somewhat) real learning environments? I will present a series of studies that test how infants learn when presented with variability in utterance length, word length, number of talkers, social/communicative cues, and frequency resolution. To conclude, I will briefly address the question of scalability by turning to an important outcome of early statistical learning -- the ability to process language efficiently in real time -- which falls by the wayside when listeners don't accumulate language experience like a baby.
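
The narrow definition of statistical learning mentioned above — transitional probabilities between syllables — is easy to make concrete. The syllable stream and "words" below are illustrative, in the style of classic segmentation experiments, not materials from the talk:

```python
from collections import Counter

def transitional_probabilities(syllables):
    # TP(y | x) = count(x followed by y) / count(x), computed over a
    # continuous syllable stream with no pauses between words.
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Illustrative stream: the nonce "words" bidaku and golabu concatenated.
stream = "bi da ku go la bu bi da ku bi da ku go la bu go la bu".split()
tps = transitional_probabilities(stream)
# Within-word transitions (e.g., bi->da) have TP 1.0; transitions across
# word boundaries (e.g., ku->go) are lower, cueing the word edges.
```

Dips in transitional probability are the statistic that, on the narrow view, lets infants posit word boundaries in continuous speech; the studies described here ask whether that computation survives variability in utterance length, talkers, and the other complexities of real input.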



February 14, 2014—1:00 a.m.
Building a Bayesian bridge between the physics and the phenomenology of social interaction

Meliora 366

Nathaniel Smith
Research Associate, Institute for Language, Cognition and Computation
University of Edinburgh

What is word meaning, and where does it live? Both naive intuition and scientific theories in fields such as discourse analysis and socio- and cognitive linguistics place word meanings, at least in part, outside the head: in important ways, they are properties of speech communities rather than individual speakers. Yet, from a neuroscientific perspective, we know that actual speakers and listeners have no access to such consensus meanings: the physical processes which generate word tokens in usage can only depend directly on the idiosyncratic goals, history, and mental state of a single individual. It is not clear how these perspectives can be reconciled. This gulf is thrown into sharp relief by current Bayesian models of language processing: models of learning have taken the former perspective, and models of pragmatic inference and implicature have taken the latter. As a result, these two families of models, though built using the same mathematical framework and often by the same people, turn out to contain formally incompatible assumptions. Here, I'll present the first Bayesian model which can simultaneously learn word meanings and perform pragmatic inference. In addition to capturing standard phenomena in both of these literatures, it gives insight into how the literal meaning of words like "some" can be acquired from observations of pragmatically strengthened uses, and provides a theory of how novel, task-appropriate linguistic conventions arise and persist within a single dialogue, such as occurs in the well-known phenomenon of lexical alignment. Over longer time scales such effects should accumulate to produce language change; however, unlike traditional iterated learning models, our simulated agents do not converge on a sample from their prior, but instead show an emergent bias towards belief in more useful lexicons.
Our model also makes the interesting prediction that different classes of implicature should be differentially likely to conventionalize over time. Finally, I'll argue that the mathematical "trick" needed to convince word learning and pragmatics to work together in the same model is in fact capturing a real truth about the psychological mechanisms needed to support human culture, and, more speculatively, suggest that it may point the way towards a general mechanism for reconciling qualitative, externalist theories of social interaction with quantitative, internalist models of low-level perception and action, while preserving the key claims of both approaches.


2013


December 5, 2013—1:00 a.m.
The Detachment Principle and the syntax of pragmatic particles

Jila Ghomeshi
Department of Linguistics, University of Manitoba



November 22, 2013—1:00 a.m.

Elika Bergelson
University of Rochester, Aslin Lab, Brain & Cognitive Sciences



November 19, 2013—1:00 a.m.
Danes call People with Down syndrome 'mongol': politically incorrect language and ethical engagement

Don Kulick
Department of Comparative Human Development at the University of Chicago



November 8, 2013—1:00 a.m.

Scott Fraundorf
University of Rochester, Jaeger Lab, Brain & Cognitive Science



October 31, 2013—1:00 a.m.
Field Linguistics Talk Series

Eva-Maria Roessler



October 25, 2013—1:00 a.m.
Determining If A Language Underwent Prehistoric Creolization

Scott Paauw
University of Rochester, Department of Linguistics



October 4, 2013—1:00 a.m.
A Category Neutral Simulative Plural: Evidence From Turkish

Solveiga Armoskaite
University of Rochester, Department of Linguistics



September 20, 2013—1:00 a.m.
Constraints of the Binding Theory: Evidence from Visual World Eye-Tracking

CLS Colloquia Room

Jeffrey T. Runner
Associate Professor, Linguistics and Brain & Cognitive Sciences
University of Rochester, Department of Linguistics



May 14, 2013—1:00 a.m.
Language documentation among the Bagyeli hunter-gatherers of Cameroon

Nadine Borchardt
Humboldt University, Berlin



April 26, 2013—1:00 a.m.
What's at issue? Exploring content in context

Judith Tonhauser
Associate Professor
Department of Linguistics, The Ohio State University



April 11, 2013—1:00 a.m.
Implicit and explicit neural mechanisms supporting language processing

Laura Batterink
University of Oregon



February 22, 2013—1:00 a.m.
Gapping is VP-ellipsis

Maziar Toorsarvandi
American Council of Learned Societies New Faculty Fellow Department of Linguistics and Philosophy



February 20, 2013—1:00 a.m.
Polarity particles

Floris Roelofsen
Research Associate, Institute for Logic, Language and Computation
University of Amsterdam



February 8, 2013—1:00 a.m.
Grammatical Number and Individuation

Scott Grimm
Postdoctoral Researcher Department of Translation and Language Sciences
Pompeu Fabra University



February 7, 2013—1:00 a.m.
The Phonology of Seneca

Wallace Chafe and Marianne Mithun
Professors of Linguistics, Department of Linguistics
University of California, Santa Barbara



February 4, 2013—1:00 a.m.
Investigating the semantic/pragmatic interface through sign language structure: the case of scalar implicature

Kathryn Davidson
Postdoctoral Fellow, Linguistics Department
University of Connecticut



January 18, 2013—1:00 a.m.
QUDs and at-issueness in Yucatec Maya attitude reports

Scott AnderBois
Visiting Faculty Linguistics Department


2012


November 9, 2012—1:00 a.m.
The Short Answer: Implications for Direct Compositionality (and vice-versa)

Pauline Jacobson
Brown University



October 14, 2012—1:00 a.m.
The Ket language of Siberia

Edward Vajda
Western Washington University



June 7, 2012—1:00 a.m.
The Substance of Song

Sally Treloyn
University of Melbourne



April 9, 2012—1:00 a.m.

Klinton Bicknell
Department of Psychology
UC San Diego



April 4, 2012—1:00 a.m.

Bozena Pajak
Department of Linguistics
UC San Diego



March 30, 2012—1:00 a.m.

Emily Tucker Prud'hommeaux
Computer Science
Oregon Health & Science University



February 17, 2012—1:00 a.m.
Three challenges of verb learning, and how toddlers use linguistic context to meet them

Sudha Arunachalam
Speech, Language & Hearing Sciences
Boston University


2011


December 5, 2011—1:00 a.m.

Victor Kuperman
Department of Linguistics and Languages
McMaster University



October 27, 2011—1:00 a.m.

Sarah Brown-Schmidt
Department of Psychology
University of Illinois at Urbana-Champaign



October 13, 2011—1:00 a.m.

Chris Potts
Department of Linguistics
Stanford University



October 4, 2011—1:00 a.m.

Meghan Sumner
Department of Linguistics
Stanford University



May 12, 2011—1:00 a.m.

Cynthia Fisher
Psychology Department
University of Illinois



May 9, 2011—1:00 a.m.

Herb Clark
Professor of Psychology
Stanford University



April 21, 2011—1:00 a.m.
Semantic Similarity, Predictability, and Models of Sentence Processing

Doug Roland and Hongoak Yun
The University of Buffalo



April 18, 2011—1:00 a.m.
Cue-Based Argument Interpretation

Thomas Hörberg
Department of Linguistics
Stockholm University



April 14, 2011—1:00 a.m.
The Encoding-Retrieval Relationship in Sentence Comprehension (and Production)

Philip Hofmeister
University of California - San Diego



April 6, 2011—1:00 a.m.

Raphael Berthele
University of Bern



March 31, 2011—1:00 a.m.
Meaning, Context and Representation

Ed Holsinger
Department of Linguistics
University of Southern California



March 29, 2011—1:00 a.m.
Don't rush the navigator: Audience design in language production is hard to establish, but easier to maintain

Jennifer M. Roche
Department of Psychology
University of Memphis



March 16, 2011—1:00 a.m.
Anticipation, local coherences, and the self-organization of cognitive structure in sentence processing

Anuenue Kukona
Department of Psychology
University of Connecticut



February 15, 2011—1:00 a.m.
Game Theoretic Pragmatics

Gerhard Jaeger
Department of Linguistics
University of Tuebingen


2010


May 24, 2010—1:00 a.m.
Uncovering the Mechanisms of Audiovisual Speech Perception: Architecture, Decision Rule, and Capacity

Nicholas Altieri
Psychological and Brain Sciences
Indiana University



April 21, 2010—1:00 a.m.
Narrative Combinatorics: Roleshifting Versus "Aspect" in ASL Grammar

Frank Bechter



April 19, 2010—1:00 a.m.
Learning Biases and the Emergence of Typological Universals of Syntax

Jennifer Culbertson
Cognitive Science Department
Johns Hopkins University



April 14, 2010—1:00 a.m.
Phonological Information Integration in Speech Perception

Noah H. Silbert
Psychological and Brain Sciences
Indiana University



April 12, 2010—1:00 a.m.
Predicting and Explaining Babies

LouAnn Gerken
Professor of Psychology and Linguistics
University of Arizona



March 15, 2010—1:00 a.m.
From Sounds to Words: Bayesian Modeling of Early Language Acquisition

Sharon Goldwater
School of Informatics
University of Edinburgh



February 22, 2010—1:00 a.m.
Implicit Learning in the Language Production System is Revealed in Speech Errors

Gary Dell
Psychology
University of Illinois at Urbana-Champaign



January 20, 2010—1:00 a.m.
Fast, Smart and Out of Control

Jesse Snedeker
Department of Psychology
Harvard University


2009


December 7, 2009—1:00 a.m.
The Role of Phonetic Detail, Auditory Processing and Language Experience in the Perception of Assimilated Speech

Meghan Clayards
Centre for Research on Language Mind and Brain
McGill University



October 19, 2009—1:00 a.m.
The Irrelevance of Hierarchical Structure to Sentence Processing

Stefan Frank
Postdoc, Institute for Language, Logic and Computation
University of Amsterdam



September 14, 2009—1:00 a.m.
The Number of Meanings of English Number Words

Chris Kennedy
Department of Linguistics
University of Chicago



June 26, 2009—1:00 a.m.
Self-Applicable Probabilistic Inference Without Interpretive Overhead

Oleg Kiselyov (FNMOC) and Chung-chieh Shan (Rutgers)



June 4, 2009—1:00 a.m.
Learning to Learn, Simplicity, and Sources of Bias in Language Learning

Amy Perfors
University of Adelaide



April 15, 2009—1:00 a.m.
Learning a Talker's Speech

Tanya Kraljic
Center for Research in Language
UC San Diego



April 8, 2009—1:00 a.m.
The Phonetic Traces of Lexical Access

Matt Goldrick
Department of Linguistics
Northwestern University



February 20, 2009—1:00 a.m.
Discourse-Driven Expectations in Sentence Processing

Hannah Rohde
Department of Linguistics
Northwestern University



February 12, 2009—1:00 a.m.
Big Changes in Object Recognition Between 18 and 24 Months: Words, Categories and Action

Linda Smith
Indiana University


2008


December 4, 2008—1:00 a.m.
What Do Words Do?

Gary Lupyan
University of Pennsylvania



November 11, 2008—1:00 a.m.
What Were They Thinking? Finding and Extracting Opinions in the News

Claire Cardie
Cornell University



October 16, 2008—1:00 a.m.
Production of Ungrammatical Utterances: The Case of Resumptive Pronouns

Ash Asudeh
Institute of Cognitive Science & School of Linguistics and Language Studies
Carleton University



October 16, 2008—1:00 a.m.
The Phonetics of Phonological Quantity in Inari Saami

Ida Toivonen
School of Linguistics and Applied Language Studies
Carleton University



October 6, 2008—1:00 a.m.
Towards Discourse Meaning: Complexity of Dependencies at the Discourse Level and at the Sentence Level

Aravind Joshi
Computer Science
University of Pennsylvania



April 18, 2008—1:00 a.m.
Bridging the Gap between Syntax and the Lexicon: Computational Models of Acquiring Multiword Lexemes

Suzanne Stevenson
Computer Science
University of Toronto



April 9, 2008—1:00 a.m.
Inducing Meaning from Text

Dan Jurafsky
Linguistics
Stanford University



March 20, 2008—1:00 a.m.
The Parallel Architecture and its Role in Cognitive Science

Ray Jackendoff
Center for Cognitive Studies
Tufts University


2007


December 4, 2007—1:00 a.m.
Disfluencies in Dialogue: Attention, Structure and Function

Hannele Nicholson
Linguistics
Cornell University



September 25, 2007—1:00 a.m.
Determinants of parsing complexity: A computational and empirical investigation

Shravan Vasishth
Linguistics
University of Potsdam



September 24, 2007—1:00 a.m.
The Event-Related Optical Signal (Eros): A New Neuroimaging Tool for Language Processing Research

Susan Garnsey
Psychology
University of Illinois at Urbana-Champaign



May 14, 2007—1:00 a.m.

Lisa Pearl
Linguistics
University of Maryland



May 2, 2007—1:00 a.m.
Encoding and Retrieving Syntax with Prosody

Michael Wagner
Linguistics
Cornell University



April 26, 2007—1:00 a.m.
Probabilistic Models of Adaptation in Human Parsing

Frank Keller
HCRC
University of Edinburgh



April 18, 2007—1:00 a.m.
Expectations, locality, and competition in syntactic comprehension

Roger Levy
Linguistics
UCSD



April 12, 2007—1:00 a.m.

Philip Hofmeister
Linguistics
Stanford University



April 6, 2007—1:00 a.m.
A Cognitive Substrate for Human-Level Intelligence

Nick Cassimatis
Computer Science
RPI



February 21, 2007—1:00 a.m.
Linguistic Knowledge is Probabilistic: Evidence from Pronunciation

Suzanne Gahl
University of Chicago


2003–2004


November 3, 2003—1:00 a.m.
Statistical Learning: What Goes In, and What Comes Out

Jenny Saffran
Department of Psychology
University of Wisconsin Madison


2002–2003


May 28, 2003—1:00 a.m.
From Ears to Categories: Intermediate Steps in Speech Recognition.

John Kingston
Department of Linguistics
University of Massachusetts



May 5, 2003—1:00 a.m.
The Mapping of Sound Structure to the Lexicon: Evidence from Normal Subjects and Aphasic Patients.

Sheila Blumstein
Department of Cognitive and Linguistic Sciences
Brown University



March 19, 2003—1:00 a.m.
Interpreting and Anticipating Reference in Discourse.

Elsi Kaiser
Department of Linguistics
University of Pennsylvania



March 17, 2003—1:00 a.m.
The Horror: Speech Errors and Phonological Production Models.

Harlan Harris
Department of Linguistics
University of Illinois, Urbana-Champaign



February 7, 2003—1:00 a.m.
Relating Attention to Intention of Information Structure

Craige Roberts
Department of Linguistics
Ohio State University



January 31, 2003—1:00 a.m.
Presupposition: The Interaction of Conventional and Conversational Implicature.

Craige Roberts
Department of Linguistics
Ohio State University



January 28, 2003—1:00 a.m.
Information Structure in Discourse: A Basic Pragmatic Framework.

Craige Roberts
Department of Linguistics
Ohio State University



September 25, 2002—1:00 a.m.
Plasticity and Nativism: Towards a Resolution of an Apparent Paradox.

Gary Marcus
Department of Psychology
New York University


2001–2002


June 18, 2002—1:00 a.m.
How Do Readers Compute Word Meanings? Insights From the Triangle Model.

Mike Harm
Carnegie Mellon University



May 21, 2002—1:00 a.m.
What Language Processing Tells Us About Cognitive Science.

Tom Bever
Department of Linguistics
University of Arizona



May 20, 2002—1:00 a.m.
American Landscape Painting: Aesthetics, The Golden Mean and Depth Perception.

Tom Bever
Department of Linguistics
University of Arizona



April 22, 2002—1:00 a.m.
Resource Logic

Ash Asudeh
Department of Linguistics
Stanford University



April 19, 2002—1:00 a.m.
It's Pat - Sexing Faces Using Only Red and Green.

Michael Tarr
Cognitive and Linguistic Sciences
Brown University



April 11, 2002—1:00 a.m.
To Sign or Not To Sign: Studies of Deaf Cognition in British Signers.

Matt Dye
Centre for Deaf Studies
University of Bristol, United Kingdom



April 1, 2002—1:00 a.m.
Possessives in Context

Gianluca Storto
Department of Linguistics
University of California at Los Angeles



March 28, 2002—1:00 a.m.
Symbolically Speaking.

Franklin Chang
Department of Psychology
University of Illinois at Urbana-Champaign



March 25, 2002—1:00 a.m.
Understanding Intonational Phrasing

Duane Watson
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology



March 4, 2002—1:00 a.m.
Another Look at Accented Pronouns: Evidence from Eye-tracking

Jennifer Venditti
Department of Linguistics
Ohio State University



February 26, 2002—1:00 a.m.
The Role of Distributional Information in Speech Production: The Case of Subject-Verb Agreement.

Todd Haskell
Department of Psychology
University of Southern California



January 26, 2002—1:00 a.m.
Lexical Access and Serial Order in Language Production: A Test of Freud's Continuity Thesis.

Gary S. Dell
Beckman Institute
University of Illinois, Urbana-Champaign



November 7, 2001—1:00 a.m.
Constraint Satisfaction Processes in Language Production

Maryellen MacDonald
Department of Psychology
University of Wisconsin, Madison



October 31, 2001—1:00 a.m.
Clock Talk

J. Kathryn Bock
Beckman Institute
University of Illinois, Urbana-Champaign


2001


April 25, 2001—1:00 a.m.
Understanding Spoken Words: Activation, Competition and Temporary Memory in Spoken Word Perception.

Paul Luce
Department of Psychology
State University of New York, Buffalo



April 2, 2001—1:00 a.m.
Optimality in Linguistic Cognition

Paul Smolensky
Department of Cognitive Science
Johns Hopkins University


1999–2000


April 25, 2000—1:00 a.m.
He vs. She: The Use of Gender in On-line Pronoun Comprehension

Jennifer Arnold
Department of Psychology
University of Pennsylvania



April 13, 2000—1:00 a.m.
The Processing of Temporal Relations in Discourse

Michael Walsh Dickey
Department of Linguistics
Northwestern University



March 24, 2000—1:00 a.m.
Doing OT in a Straitjacket

Jason Eisner
Department of Computer Science
University of Rochester



March 1, 2000—1:00 a.m.
Very Early Parameter Setting in the Computational System of Language, Variability in Development Across Languages, Maturation versus Learning, Impaired Development, and the Potential for a Genetics of Language

Kenneth N. Wexler
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology



January 17, 2000—1:00 a.m.
What Gesture Can Tell Us About the Process of Verbalization of Spatial Information

Sotaro Kita
Max Planck Institute for Psycholinguistics



September 17, 1999—1:00 a.m.
Two Ideas About Timing in Hearing and Speech.

David Poeppel
Department of Linguistics
University of Maryland - College Park


1998–1999


May 3, 1999—1:00 a.m.
Burnt and Splang: Some Issues in Morphological Learning Theory

Bruce P. Hayes
Department of Linguistics
University of California at Los Angeles



April 14, 1999—1:00 a.m.
The Navajo Prolongative and Lexical Structure

Carlota S. Smith
Department of Linguistics
University of Texas - Austin



March 7, 1999—1:00 a.m.
Surface Cues for Pragmatic Inferences as Motivation for the Evolution of Surface Syntax

Gert Webelhuth
Department of Linguistics
University of North Carolina - Chapel Hill



February 25, 1999—1:00 a.m.
Judgment Types, Causatives, and S-Selection

John W. Moore
Department of Linguistics
University of California at San Diego



February 23, 1999—1:00 a.m.
Modality-free Phonology

Harry van der Hulst
Department of Linguistics
Leiden University



December 18, 1998—1:00 a.m.
Articulatory Correlates of Ambisyllabicity in English Glides and Liquids

Bryan Gick
Haskins Laboratories
University of Connecticut


1997


September 19, 1997—1:00 a.m.
Necessity, A Priority, and What Is Said

Jason Stanley
Department of Philosophy
Cornell University


1996–1997


April 25, 1997—1:00 a.m.
Contact-induced Language Change and Contact-language Genesis

Sarah (Sally) Thomason
Program in Linguistics
University of Michigan



April 22, 1997—1:00 a.m.
Glides, Vowels, and Ghost Consonants in Argentinian Spanish

Ellen M Kaisse
Department of Linguistics
University of Washington



March 28, 1997—1:00 a.m.
When a Dog is a Cat and a Rug is a Fug: Picture Naming Errors in Aphasic and Non-aphasic Speakers

Myrna Schwartz
Moss Rehabilitation Research Institute



November 22, 1996—1:00 a.m.
Modeling Collaboration for Human-computer Communication

Barbara J Grosz
Department of Computer Science
Harvard University



October 30, 1996—1:00 a.m.
What Infants Remember About Utterances They Hear

Peter W Jusczyk
Department of Psychology
Johns Hopkins University



October 18, 1996—1:00 a.m.
The What and Why of Compositionality

Zoltan Szabo
Department of Philosophy
Cornell University



September 27, 1996—1:00 a.m.
States, Events, Time, Tense and Other Monsters

Graham Katz
Graduiertenkolleg "Integriertes Linguistik-Studium"
University of Tübingen


1995–1996


April 26, 1996—1:00 a.m.
Symbols and Simple Recurrent Networks in Language and Cognition

Gary F Marcus
Department of Psychology
University of Massachusetts - Amherst



April 19, 1996—1:00 a.m.
Structural Repetition as Implicit Learning

J Kathryn Bock
Department of Psychology
University of Illinois - Urbana-Champaign



March 29, 1996—1:00 a.m.
A Probabilistic Model of Lexical and Syntactic Access and Disambiguation

Dan Jurafsky
Department of Linguistics
University of Colorado - Boulder



December 8, 1995—1:00 a.m.
The Problem with Attitudes

Jennifer Saul
Department of Philosophy
University of Sheffield



November 17, 1995—1:00 a.m.
Analysis, Synonymy, and Sense

Mark E. Richard
Department of Philosophy
Tufts University



September 15, 1995—1:00 a.m.
Theoretical Issues in Syntax

Yuki Kuroda
Department of Linguistics
University of California at San Diego


1994–1995


May 26, 1995—1:00 a.m.
The Past, Present and Future in Language Production

Gary S Dell
Beckman Institute, University of Illinois - Urbana-Champaign



April 28, 1995—1:00 a.m.
The Phrase Structure of Quantifier Scope

Tim Stowell
Department of Linguistics
University of California at Los Angeles



April 12, 1995—1:00 a.m.
Birds, Bees, and Semantic Theory

David Dowty
Department of Linguistics
Ohio State University



March 3, 1995—1:00 a.m.
Case in Human Grammar

Itziar Laka
Department of Linguistics
University of Rochester



February 10, 1995—1:00 a.m.
Verbal Plurality and Conjunction

Peter Lasersohn
Department of Linguistics
University of Rochester



February 3, 1995—1:00 a.m.
A Minimal Theory of Adverbial Quantification

Kai von Fintel
Department of Linguistics
Massachusetts Institute of Technology



December 16, 1994—1:00 a.m.
Episodic -ee in English: An Argument That Thematic Relations Can Actively Constrain New Word Formation

Chris Barker
Department of Psychology
University of Rochester



December 9, 1994—1:00 a.m.
The Marked Effect of Number on the Production of Subject-verb Agreement

Kathy Eberhard
Department of Psychology
University of Rochester



December 2, 1994—1:00 a.m.
The Many Meanings of Demonstratives

David Braun
Department of Philosophy
University of Rochester



November 4, 1994—1:00 a.m.
Using Eye-movements to Study Spoken Language Comprehension in Visual Contexts

Michael K. Tanenhaus
Department of Psychology
University of Rochester



October 25, 1994—1:00 a.m.
What Are Thematic Roles?

Greg Carlson
Department of Linguistics
University of Rochester



October 21, 1994—1:00 a.m.
The TRAINS Project

James F Allen
Department of Computer Science
University of Rochester



October 14, 1994—1:00 a.m.
Creolization and Some Thoughts About Learning

Elissa L Newport
Department of Psychology
University of Rochester



September 30, 1994—1:00 a.m.
Wh-Questions and Related Constructions in ASL

Karen Petronio
Department of Psychology
University of Rochester



September 23, 1994—1:00 a.m.
Distributional Intimations of Grammatical Reclassification

Whitney Tabor
Department of Psychology
University of Rochester