Distinguishing Linguistic and Processing Explanations of Grammar

Chien-Jer Charles Lin
National Taiwan Normal University
[email protected]

Abstract

In this chapter, we examine the processing foundation of grammaticality judgments and linguistic explanations. Theories of grammar are divided into three types: Universal Grammar Par Excellence (UGPE), Mental Grammar (MG), and Grammar in Use (GU). Five processing factors in MG are discussed, including the selective and inhibitory nature of human cognition, the processability of linguistic materials, the mind as a statistical machine, the shallowness of processing, and the brain being constrained by working-memory capacities. To study the essential properties of the UGPE, we contend that one has to understand the processing power of the brain (i.e. the MG), which enables and constrains language.

* Preparation of this paper was supported by research grants from the National Science Council of Taiwan (NSC 95-2411-H-003-056; NSC 96-2411-H-003-035). I thank participants at the IWGE for useful comments. I am particularly indebted to Li-Hsin Ning for discussions on resumptive pronouns, and James Myers for stimulating thoughts in this workshop. Correspondence should be addressed to Chien-Jer Charles Lin, Department of English, National Taiwan Normal University, Taipei 106, Taiwan, or [email protected].


1. Grammar and Using Grammar

An important issue at the center of linguistic inquiry is the distinction between what belongs to grammar and what does not. Linguists define their job as providing theoretical explanations for language-related phenomena, the supposedly unique property of the human species (Hauser, Chomsky, & Fitch, 2002). Data that are not considered relevant to grammar are discarded as unimportant or peripheral. Linguists nevertheless debate what counts as the core phenomena pertinent to grammar, and different theoretical inclinations motivate the collection of data of very different kinds. For any linguistic analysis to be convincing, however, it is crucial to consider both the validity of the data a theory is based on and the linking assumptions that underlie the explanations provided.

Linguists across the board would agree on the need to distinguish between what is grammar and what is not. Formal linguists flesh out explanations internal to grammatical structure. Figuring out the machinery that underlies a range of introspective data has been a common methodology. The interfaces between grammar and other modalities, e.g. the sound and conceptual systems (Chomsky, 2005, 2006), shed light on what grammar is and is not. Functionalists, on the other hand, aim at minimizing grammar by attributing language-related phenomena to semantics and communication. Their goal is to reduce grammar to an epiphenomenon derivable from other modalities. A functionalist collects data from actual linguistic production, often corpora based on written texts of different types and recordings of naturally occurring conversations. The goal is to provide communicative explanations for patterns in actual language use.

The different agendas of formalists and functionalists ultimately lead to the different methodologies and core data each camp adopts. Formalists discard data that are contaminated by the human factor; the goal is a theory of grammar that does not depend on the performance of a particular group of humans at a particular point in time. Functionalists, on the other hand, focus on data from actual linguistic production. They criticize formalists for creating linguistic data that suit the theory and for not considering actual linguistic performance, and they dismiss introspective data as "unreal" since such intricate data are not easily found in actual language production. When it comes to formulating explanations, formalists see functional explanations as unnecessarily distracted by details that do not pertain to the essence of the language faculty. Functionalists criticize formalists for valuing theory over data and question the predictive power of formalist theories by providing production data that cannot be easily accounted for (e.g. Culicover & Jackendoff, 2005, 2006).

The goal of this chapter is not to favor one theory over the other. As Her and Wan (2007: 90) nicely put it, the difference between formalist and functionalist theories "is a matter of personal taste in terms of the object of study, not a choice between right and wrong." Nevertheless, analyses in each theoretical camp ought to be evaluated under the same tenets in terms of their scientific stringency, which naturally include the methodology of data collection and the rigor of argumentation. One thus needs to ask the following questions: Do the crucial data suit the purpose of the study? Do the data speak to the issues at hand? Were the data collected with appropriate methodology?
Are there confounds in the proposed analyses of the data?

This chapter is concerned with both data and explanation in linguistic analyses. I examine what counts as valid data and what counts as a valid linguistic explanation. In particular, we focus on the distinction between linguistic and processing explanations for language. The goal is to identify potential processing confounds that should be avoided in linguistic analyses. Vigilance over these processing factors is neutral to theory, as formalist and functionalist explanations are equally likely to be contaminated by them.


Thesis I: Grammar resides in the head.

Knowledge of language (linguistic competence) and the use of it (linguistic performance) have long been distinguished in modern linguistics (de Saussure, 1915; Chomsky, 1957). In the recent political demarcation of the field, these two sides have evolved into opposing theoretical poles: generative linguistics is concerned with the knowledge of language; functional linguistics, with the use of language. Recognizing that competence and performance ought to be distinguished was an important breakthrough in the scientific investigation and theorization of language. Linguistic competence comprises the computational system that combines linguistic symbols in a regulated manner, and studying it sheds light on the mental system that is capable of joining linguistic symbols into hierarchical structures. The fundamental disparity between formal and functional linguistics lies in whether this symbolic system is purely computational and should be studied independently of the actual use of language.

In the natural sciences, relatively little attention has been paid to the distinction between laws of nature and how human beings carry out or interact with these principles.1 As for language, it remains controversial how large a role the human factor plays. The critical difference between the (other) natural sciences and linguistics is that while most natural phenomena can be observed and measured objectively (within the limits of the instruments invented by humans, nevertheless), linguistic data can only be obtained through the human mind. Linguistic data are obtained either through introspection or by recording natural communicative productions. The human factor inevitably enables and constrains the linguistic data that are available.

In the following, a three-way distinction of grammar (1-3) is proposed for the investigation of language: (1) universal grammar par excellence (UGPE), (2) mental grammar (MG)—grammar (processed) in linguistic competence, and (3) grammar in use (GU)—linguistic performance for communicative purposes. UGPE contains only the essential properties of formal grammar without the rendering of mental processing or performance. It corresponds to the "faculty of language in the narrow sense (FLN)" of Hauser et al. (2002: 1571), namely the "abstract linguistic computational system alone, independent of the other systems with which it interacts and interfaces." UGPE is the essential computational system that generates Newmeyer's (2005) possible languages—languages that are predictable by UG principles. The formal mechanisms of UGPE are the target of most generative linguists. MG refers to grammar as we are able to cognize it in the mind; processing factors such as our experience with patterns in language, the computing power of the brain, reasoning, decision-making, and statistical learning all operate at this level. In the typology of languages, MG is reflected in the skewed distribution of linguistic patterns across languages with different degrees of processability and plausibility (e.g. certain word orders are more frequent than others; see Greenberg, 1963). Newmeyer (2005) refers to this statistical skew in linguistic patterning as probable languages: some language types are more likely to occur than others, and processing, according to Newmeyer, is a reasonable cause.

1 In the philosophy of science, however, the subjective role that the human investigator plays has been considered an important factor in construing nature and reality (Feyerabend, 1975; Hanson, 1958; Popper, 1968). This view is affiliated with the constructionist approach to human knowledge—that objective knowledge does not exist, as knowledge is always subjectively constructed by the human mind.


GU refers to language in actual use for communication, in particular, how language is produced in natural communicative situations. GU has been the goal of functionalist research. Primary research issues include form-meaning associations (e.g. pairings of sound and meaning), semantic plausibility in terms of encyclopedic knowledge, how linguistic expressions fossilize into linguistic units (called "lexicons" in the traditional sense, and "constructions" in the sense of Adele Goldberg's Construction Grammar, 2006), rhetoric, and how parts of the grammar are used for specific communicative purposes (e.g. Duranti, 1994).

Let us now recapitulate how the traditional competence-performance distinction and the FLN/FLB distinction of Hauser et al. (2002) are preserved and improved in the three-way distinction proposed here. The traditional distinction between linguistic competence and linguistic performance focuses on the differences between language in the mind and language uttered in real life. Strictly speaking, both competence and performance in this traditional sense are already instances of grammar put into use. Linguistic competence refers to grammar (i.e. UGPE) placed under the management of the mental system upon which introspections are based; linguistic performance refers to this mental grammar put into actual utterances for communicative purposes. Both linguistic competence and performance are constrained by the mind's processing power. In order to clarify the role the mind plays in grammar, it is necessary to refine linguistic competence in the traditional sense as being composed of two parts: UGPE—grammar as it is—and MG—grammar residing in the brain. Linguistic performance in the traditional sense is retained as GU, which refers to grammatical patterns in communicative discourse.

Hauser et al.'s (2002) distinction between the faculty of language in the narrow sense (FLN) and the faculty of language in the broad sense (FLB) reiterates the importance of isolating UGPE (as contained in FLN) from other factors. They suggested that recursion is the main property of FLN. FLB includes FLN plus the interfacing modalities, such as the conceptual-intentional system (CI) and the sensory-motor system (SM). Crucially, however, in the research program of FLN and FLB, memory and respiration (along with digestion and circulation) are taken as "organism-internal systems" that do not belong to FLB; they are considered "necessary but insufficient for language" (1571). Even though we are sympathetic to the need to distinguish these interfacing modalities from other organism-internal factors, we find it risky to disregard how processing constraints (in terms of memory) can affect the computational power of the mind, and thus the structuring of language.2 Therefore, we propose that MG should be investigated as a level that is separate from UGPE. In the following, I discuss how various processing factors can affect grammaticality judgments at the level of MG. Taking MG as a distinct level of linguistic inquiry, as well as a level of grammar that is distinct from both UGPE and GU, provides a promising route toward clarifying contributions to the faculty of language at different levels.

2. The Mental Grammar

When grammar is processed by the human mind, it is inevitable that the brain puts constraints on how language users think about it.
Introspective grammaticality judgments have been a common way to gather core linguistic data in formal linguistic analyses. Nevertheless, intuitive judgments provide data not exclusively at the level of UGPE, but at the level of MG.

2 Systems mentioned as "external" to FLB include organism-internal systems like memory, digestion, circulation, and respiration, as well as ecological, physical, cultural, and social factors in the external environment (see Figure 2 of Hauser et al. 2002: 1570).


Thesis II: Grammaticality judgments are introspections on the mental grammar.

The data that linguistic analyses are based on are data already filtered by the human brain. Crucial linguistic data in formal analyses are often intricate and hardly observable in production data; therefore, introspection becomes the only way by which such data can be obtained. Introspection has, nevertheless, been criticized for its arbitrariness and circularity in linguistic argumentation—linguists produce data that support the linguistic theories they argue for (see critiques from Edelman & Christiansen, 2003: 60, and the reply from Phillips & Lasnik, 2003). It ought to be clarified that introspective data are no less real than data collected from actual language use. These are merely data of different types, obtained at different levels of grammar in use. Introspective grammaticality judgments are closer to UGPE in that they are UGPE processed by the mind; they are therefore at the level of Mental Grammar. Actual performance data (i.e. GU) are farther away from UGPE, as they are UGPE processed by the mind and put into communicative use.

In pursuing UGPE by means of grammaticality judgments, it is necessary to screen out potential interferences, including both processing constraints and irrelevant communicative factors. To reach a certain level of objectivity, informal and formal solicitation of grammaticality judgments—among friends and colleagues, or through questionnaires—has been adopted.3 Even though collective judgments reduce the possibility of idiosyncratic biases, they do not eradicate the possibility that people may collectively make a mistake or be skewed toward certain judgments based on the processability of the materials. Attempts have also been made not to ask people overtly about their intuitions on grammaticality but to let them read sentences on-line with their reading behaviors recorded. These tasks help alleviate the awkwardness of directly asking about grammaticality.4 The assumption is that sentential regions that are grammatically complex (or less common) induce longer reading/response times. Even though these tasks seem more natural than asking for grammaticality judgments, other explanations for the behavior elicited by the experimental materials should always be considered before one resorts to purely linguistic explanations.5

In the following, I discuss five important factors within MG.6 These factors essentially concern how the mind works; nevertheless, they have a significant impact on the way people feel about sentences when they provide grammaticality judgments. Such factors ought to be teased out as processing confounds operating at MG and not be confused with true linguistic explanations of grammar, which are the targets of UGPE.
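Before turning to these factors, one of the methodological points above can be made concrete. The following sketch (an illustration with hypothetical data, not a procedure reported in this chapter) shows a common first step when grammaticality judgments are collected with rating questionnaires: standardizing each participant's ratings before conditions are compared, so that idiosyncratic uses of the rating scale are not mistaken for differences in grammaticality.

# Minimal sketch: per-participant z-scoring of acceptability ratings.
# The participants, conditions, and numbers below are hypothetical.
from statistics import mean, stdev

def zscore_by_participant(ratings):
    """ratings: dict of participant -> list of (condition, raw_rating)."""
    normalized = {}
    for participant, items in ratings.items():
        raw = [score for _, score in items]
        mu, sd = mean(raw), stdev(raw)
        normalized[participant] = [(cond, round((score - mu) / sd, 2))
                                   for cond, score in items]
    return normalized

data = {
    "P1": [("gapped RC", 6), ("resumptive RC", 3), ("filler", 7), ("word salad", 1)],
    "P2": [("gapped RC", 5), ("resumptive RC", 4), ("filler", 5), ("word salad", 3)],
}
print(zscore_by_participant(data))

Averaging the standardized scores by condition then yields group judgments that are less vulnerable to individual scale biases, though, as noted above, no amount of averaging can protect against biases that the participants share.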

3 Refer to Cowart (same volume) for a tutorial on issues related to questionnaires on grammaticality judgments. Dora Alexopoulou and Frank Keller (2007) serve as an example that uses Magnitude Estimation (Cowart, 1997) by including in the questionnaires baseline sentences with which participants can compare their judgments of each sentence.
4 This is especially important given that not all language users are equally familiar with the concept of grammaticality. In some languages (e.g. Mandarin Chinese), grammaticality has to be rephrased as "acceptability", "naturalness", or "plausibility"; each of these terms is distinguishable from the intended sense of grammaticality (i.e. syntactic well-formedness). In these languages, it is crucial to make sure that the basis of the judgments is truly syntactic, not semantic or encyclopedic.
5 In fact, the research program depicted by Hauser et al. (2002) provides a good blueprint for clarifying language-relevant factors at different levels. FLN is minimal; only those properties that cannot be found across species and across modalities are left in FLN. Research on FLB and other factors will ensure the minimality of FLN.
6 Schütze (1996) discusses various empirical issues that should be considered in collecting grammaticality judgments. These include factors related to the subjects, the experimental materials, mental states, etc.


2.1. The mind selects and suppresses.

At various levels of processing, from visual perception to high-level syntactic processing, research has shown that the mind not only activates the nodes associated with the target but also suppresses the competitors. Let us consider the perception of the ambiguous image in Figure 1, where the figure (the black image shaped by the border) and the ground (the white background outside of the border) can each be perceived as a meaningful object, while the two are difficult to perceive simultaneously. Recent theories such as the Parallel Interactive Model of Configural Analysis (PIMOCA) advanced by Peterson and colleagues (Peterson & Kim, 2001; Peterson & Skow-Grant, 2003) propose that the border that divides the figure from the ground provides "configural cues on both sides" (Peterson & Skow-Grant, 2003: 8, emphasis original).

Figure 1. Figure-ground distinction (adapted from Peterson & Skow-Grant, 2003: 8).

Countering the traditional Gestalt view that the assignment of the figure should precede that of the ground, PIMOCA contends that the perception of the figure and the ground involves a "cross-border competition": "According to PIMOCA, configural cues present on the same side of an edge cooperate with each other, whereas configural cues present on opposite sides of an edge compete with each other. When the cues are unbalanced, the cues on the more weakly cued side are inhibited by the cues on the more strongly cued side." (Peterson & Skow-Grant, 2003: 8)

Peterson and Kim (2001) used figure-ground images like those in Figure 2 as priming images. In unbiased situations, the figure (i.e. the black portion) is more likely to stand out than the ground, even though these figures were all novel, unfamiliar objects. They used 2A-C as the control primes (where neither the figure nor the ground was related to the target object) and 2D as the experimental prime (where the ground of the prime depicted the upcoming target object, e.g. an anchor), each appearing for 50 msec prior to the target images.
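The cross-border competition just quoted can be pictured with a small sketch (my own toy illustration of the idea, not Peterson and colleagues' model; the cue values are invented): cues on the same side of an edge pool their strength, the two sides then compete, and the more weakly cued side is inhibited in proportion to the imbalance.

# Toy illustration of cross-border competition; all numbers are hypothetical.
def resolve_figure(cues_side_a, cues_side_b, inhibition_rate=0.5):
    """Pool configural cues on each side of an edge and let the sides compete."""
    strength_a, strength_b = sum(cues_side_a), sum(cues_side_b)
    if strength_a == strength_b:
        return {"figure": "unresolved", "loser_activation": strength_a}
    winner = "side A" if strength_a > strength_b else "side B"
    imbalance = abs(strength_a - strength_b)
    # the weakly cued side is not erased, only suppressed below its raw strength
    loser_activation = min(strength_a, strength_b) - inhibition_rate * imbalance
    return {"figure": winner, "loser_activation": round(loser_activation, 2)}

# e.g. familiarity, symmetry, and enclosure cues on side A vs. sparse cues on side B
print(resolve_figure([0.6, 0.3, 0.2], [0.4, 0.1]))

On this picture the losing side is still represented, only at a suppressed level, which is why a subsequent target that matches the suppressed ground is recognized more slowly, as described next.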


Figure 2. Figure-ground images (primes and targets) studied by Peterson & Kim (2001).

The participants were instructed to decide whether the target images were familiar or novel objects. Peterson and Kim found that after viewing the primes, the participants would focus on the figure and suppress the ground; thus a target image that matched the ground of the prime took longer to recognize than a control target. The effect of inhibitory priming was found when the prime and the target were separated by intervals of 33 msec and 0 msec. This showed that in the perception of ambiguous figure-ground images, even though only the figures were recognized, the ground images were simultaneously perceived and inhibited. As a result, the inhibitory effect surfaced when a suppressed target image appeared and required active recognition.

Such inhibitory effects can also be found in language processing and grammaticality judgments. An ambiguous sentence tends to be perceived as unambiguous if one of the interpretations is dominant over the other. Langendoen (1972) discussed grammaticality judgments on ambiguous sentences like (4):

(4) Mary takes Nancy seriously, but Ollie lightly.

Potentially, (4) is ambiguous, as it can be derived either by applying Conjunction Reduction to (5) or by applying Gapping to (6).

(5) Mary takes Nancy seriously, but Mary takes Ollie lightly. [conjunction reduction]
(6) Mary takes Nancy seriously, but Ollie takes Nancy lightly. [gapping]

However, grammaticality judgments overwhelmingly take (5) as the only possible interpretation. The question is why there is such a preference and whether we should provide a linguistic explanation (i.e. at the level of UGPE) for it. One linguistic account offered by Hankamer, as reviewed by Langendoen (1972), was that (6) should be ruled out by the grammar; the grammar would then need to adopt conjunction reduction and overrule gapping in the derivation. However, this seems an unsatisfactory solution at the level of UGPE, as it burdens the grammar with excessive look-ahead computation and rule rankings that are unnecessarily powerful. A simpler account is to allow both interpretations to coexist at the level of UGPE and account for the preferences at the level of MG or GU. Langendoen explains that the unavailability of the reading in (6) is an effect of rhetoric (therefore an issue not of UGPE but of GU).


When appropriately contextualized (compare 7-8 with 9-10), gapping (9-10) becomes not only available but the preferred interpretation:

(7) Mary takes life seriously, but Mary takes Ollie lightly.
(8) Mary takes life seriously, but Ollie lightly. [conjunction reduction]
(9) Mary takes life seriously, but Ollie takes life lightly.
(10) Mary takes life seriously, but Ollie lightly. [gapping]

What Langendoen meant by an "effect of rhetoric" is actually an effect of encyclopedic knowledge; grammar has nothing to do with the preferential interpretations. The knowledge that Ollie, as a person, is more likely to take life lightly than to be taken lightly by Mary helps promote the gapping interpretation. Still, we are left with the question of why (5) is preferred to (6) in neutral contexts. A potential processing account for this interpretive preference lies at the level of MG: the dominant NVN sequence in English favors a subject-verb ellipsis (i.e. conjunction reduction) over a verb-object ellipsis (i.e. gapping). In English, NVN sequences are commonly analyzed as Subject-Verb-Object and interpreted as Agent-Verb-Patient. The elliptical region is thus preferably analyzed as eliding Subject-Verb so that it parallels the first clause, replacing only the object.7 We will elaborate on the effect of canonical structures in 2.3.

Another example discussed by Langendoen (1972) concerns the ambiguity of (11):

(11) Some doctor sees every patient.

This example has been pervasively studied in the formalist literature as an example of covert movement (e.g. quantifier raising) at LF resulting in wide-scope versus narrow-scope interpretations of every. One reading is that there is one doctor who sees all the patients. The other interpretation has every patient scope over some doctor, creating the reading that every patient is seen by a (possibly different) doctor. It is reasonable to expect any grammatical framework to be able to derive both readings. Nevertheless, when people read (11), the first reading overwhelmingly overrides the second. A processing explanation for this effect may be that to arrive at the second interpretation, the quantifier every has to be raised, which is an extra step; in addition, the LF of the first interpretation matches the surface structure of the sentence more closely. This preference is an issue of MG, not UGPE, since UGPE only needs to derive both interpretations; a processing account in MG is responsible for why one interpretation is preferred to the other.

Thesis III: A parse that is grammatical can be suppressed by a competing parse that is more dominant.

The lesson from the above discussion is that a sentence that is considered ungrammatical, or an interpretation that is considered implausible, may not be an issue of grammar. The apparent ungrammaticality could result from a dominant analysis being so salient that it suppresses competing analyses. Such judgments of grammaticality are the results of competition and inhibition in processing and have nothing to do with the intrinsic nature of grammar.
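Thesis III can be given the same toy treatment as the figure-ground case (again an illustration of the logic with invented scores, not a parsing model proposed in this chapter): each grammatical analysis receives a processing score, for instance a bonus for matching the canonical template or the surface order and a penalty for extra covert operations, and analyses far below the winner are inhibited below the threshold of availability.

# Toy competition among grammatical analyses; scores and threshold are invented.
def surviving_analyses(candidates, inhibition=0.6, threshold=0.35):
    """candidates: dict of analysis label -> processing score in [0, 1]."""
    top = max(candidates.values())
    survivors = {}
    for label, score in candidates.items():
        # each analysis is suppressed in proportion to its distance from the winner
        adjusted = score - inhibition * (top - score)
        if adjusted >= threshold:
            survivors[label] = round(adjusted, 2)
    return survivors

# (4): the NVN-parallel ellipsis matches the canonical template, so it dominates.
print(surviving_analyses({"conjunction reduction, as in (5)": 0.8,
                          "gapping, as in (6)": 0.4}))
# (11): surface scope needs no covert raising, so it dominates the inverse scope.
print(surviving_analyses({"some > every": 0.75, "every > some": 0.45}))

On this picture, both analyses remain derivable at the level of UGPE; only their availability at the level of MG differs.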

7 In contrast, a structural account (at the level of UGPE) would prefer the verb and the object to form one unit, as they compose a constituent in grammatical theory.


2.2. Processability enhances grammaticality.

Grammaticality judgments naturally aim at probing the grammaticality of a sentence based on principles of the UGPE. Nevertheless, processability and the degree of grammatical violation both affect grammaticality judgments. Whether an ungrammatical sentence that sounds processable (and thus interpretable) may be judged grammatical, and vice versa, is an issue that should be considered in any argumentation based on grammaticality-judgment data. In this section, we discuss resumption in English relative clauses as an intriguing example of how processability interferes with grammaticality judgments. We leave the issue of unprocessability and ungrammaticality for sections 2.3 to 2.5.

A resumptive pronoun is a pronoun that appears at the extracted gap position of a relative clause. In most cases, sentences with a resumptive pronoun are less acceptable than those with a gap (e.g. 12). However, sentences like (13) require a resumptive pronoun to sound grammatical.

(12) a. That's the guy that I saw t yesterday.
     b. *That's the guy that I saw him yesterday.
(13) a. *That's the guy that I don't know when t came.
     b. That's the guy that I don't know when he came.

In English syntax, resumption serves as a device that rescues the problem of extractability. (13a) violates the island condition on movement; the resumptive pronoun in (13b) therefore rescues this violation (McKee & McDaniel, 2001). McKee and McDaniel followed Kayne (1981) in treating resumptives as spell-outs of traces. They argued that in normal cases like (12), having a resumptive (i.e. spelling out the trace) is more costly than not having one; therefore (12a) blocks (12b). However, in cases where relativization violates grammatical constraints (e.g. islands) as in (13a), the only grammatical output is (13b), which, though costly, is not blocked by any other derivation that is less costly. Therefore, (13b) is considered grammatical.

The issue of resumption actually sits on the fine line between grammaticality, acceptability, and processability. While (13a) is obviously ungrammatical, does the greater acceptability of (13b) warrant its status as grammatical? What does a spelt-out trace do to make (13b) better? Are sentences that are comparatively more acceptable necessarily grammatical? This issue is further complicated by the processing advantage that resumption induces. McKee and McDaniel discussed sentences like (14), where both the gapped and the resumptive relative clause sound acceptable.

(14) a. This is the boy that, whenever it rains, t cries.
     b. This is the boy that, whenever it rains, he cries.

(14a) is itself a grammatical sentence; (14b) is considered acceptable because the antecedent and the gap are far enough apart. Consider the ungrammaticality of (15), which bears identical semantics to (14b) and differs only in the distance between the filler and the resumptive pronoun.

(15) *This is the boy that he cries whenever it rains.

The contrast between (14) and (15) shows that resumptive pronouns function as locators for relativized gaps. A resumptive pronoun enhances the processability of a relative clause when the filler-gap distance is long and the processor needs to be reminded of the location of the gap.
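This locator role can be made concrete with a rough heuristic (a hypothetical illustration, not a measure from McKee and McDaniel, though it anticipates the working-memory account quoted below): the more material, and especially the more clauses, that intervene between the head noun and the relativized position, the more a pronoun that re-instantiates the filler should help.

# Hypothetical heuristic relating filler-gap distance to the usefulness of resumption.
def resumption_prognosis(words_between, clauses_between, shunt_after=2):
    """Rough labels only; the thresholds are invented for illustration."""
    if clauses_between >= shunt_after:
        return "antecedent likely out of active memory: resumptive should sound better"
    if words_between > 6:
        return "long dependency: resumptive may help relocate the gap"
    return "short, local dependency: gap preferred, resumptive adds cost"

print(resumption_prognosis(words_between=3, clauses_between=0))   # a local case like (12)
print(resumption_prognosis(words_between=5, clauses_between=1))   # an intervening clause, like (14)
print(resumption_prognosis(words_between=9, clauses_between=2))   # a multi-clause dependency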


McKee and McDaniel (2001) provided a similar explanation based on active working memory: "… when a clause containing an antecedent is shunted out of active memory, the resumptive pronoun sounds better than it otherwise would in extractable sites. For adults, shunting occurs when a third clause is reached, whereas for children shunting might occur one clause earlier. (149)"

Resumption thus blurs the simple distinctions between grammaticality, acceptability, and processability. On the one hand, it rescues grammatical violations, making extractions that are ungrammatical sound better. On the other, it alleviates processing difficulty, facilitating filler-gap integrations that are beyond the limits of active working memory. In fact, although a sentence like (13b) is seemingly rescued by a resumptive pronoun with respect to an island violation, its grammaticality is not necessarily guaranteed. Resumption makes (13b) more comprehensible than (13a); it remains mysterious, nevertheless, whether one is more grammatical than the other. As a consequence, it remains difficult to determine whether the distinction between (13a) and (13b) should be an issue of the UGPE. In fact, it seems more reasonable to take resumption in all these cases as an issue of the MG, not the UGPE. In island violations, resumption partially alleviates the severity of the ungrammaticality with enhanced processability (or comprehensibility). In long-distance filler-gap integrations, resumption enhances the saliency of the gap position as well as its participant role in the syntactic structure, thus making the sentence more processable. Grammaticality may not be a central issue in English resumption, since a resumptive pronoun is better taken as an MG device that enhances processing. When making grammaticality judgments, the mind takes a device that enhances processing as a device that enhances grammaticality.

Thesis IV: Processability enhances grammaticality.

We can therefore see the processability of a sentence as a force that enhances the sense of well-formedness in grammaticality judgments as well. An ungrammatical sentence is judged to be more grammatical when it is more processable. Resumption is a device that enhances processability, not grammaticality, though grammaticality is likely to be seemingly enhanced as processability increases.

2.3. The mind counts and expects.

Frequency and familiarity are both fundamental effects in processing of various sorts. The word frequency effect is robust in lexical access; words of higher frequency are named faster (Forster & Chambers, 1973) and recognized faster in lexical decisions (Rubenstein, Garfield, & Millikan, 1970). The subjective familiarity of words has also been demonstrated to affect lexical access (Connine et al., 1990; Nusbaum & Dedina, 1985), with more familiar words accessed with greater ease. Keeping track of frequency and linguistic experience can be traced back to our ability to track statistical distributions of instances and to formulate hypotheses about patterns and structures in the linguistic input. Statistical learning, for instance, has been demonstrated to be a strategy adopted by infants in dealing with phonological inputs and recognizing structural patterns (Saffran, Aslin, & Newport, 1996; Aslin, Saffran, & Newport, 1998), though this ability is not specific to the human species (Hauser et al., 2001).8


In sentence processing, likewise, the syntactic parser is sensitive to the recency and frequency of the syntactic patterns in sentences. This has been demonstrated by the syntactic priming effect in both sentence comprehension and production, and by the classic garden-path effect in sentence comprehension. The syntactic priming effect was primarily found in language production (Bock, 1986; Branigan, Pickering, & Cleland, 2000; Loebell & Bock, 2003; Pickering & Branigan, 1998; among others). In these studies, participants were first presented with sentences of a certain syntactic pattern. When these participants were then asked to produce verbal responses required by the experimental tasks (e.g. describing pictures, answering questions, etc.), they tended to adopt the syntactic patterns they had just heard. The effect of syntactic priming is essentially a recency effect; it shows that at the syntactic level, language users are prone to recycling syntactic structures that they experienced recently. The effect has also been extended to the domain of sentence comprehension. Ledoux, Traxler, & Swaab (2007), for instance, found that when their participants read reduced relative clauses preceded by main-clause primes, a larger positivity in the ERPs was observed than when these reduced relative clauses were preceded by reduced-relative primes. The syntactic structure of the prime sentences facilitated the processing of upcoming sentences that bear similar structures.

The second type of evidence that demonstrates the importance of syntactic experience in sentence comprehension is the classic garden-path effect. The sentence parser can often be misled into wrong analyses by following the route that is most often taken in parsing.9 Take the classic example the horse raced past the barn fell (Bever, 1970; Townsend & Bever, 2001): the first six words (the horse raced past the barn) are most naturally parsed as a main clause that follows the predominant NVN sequence in English. Since most sentences in English follow this canonical NVN pattern (which is most often interpreted as Agent-Action-Patient), the parser is prone to imposing this analysis on the linguistic input.10 The appearance of the actual main verb fell poses a challenge to the initial analysis, requiring a reanalysis. This reanalysis, however, is rarely achieved. Even though a reduced-relative reanalysis is required to arrive at a grammatical parse of this sentence, most language users are not able to do so without specific instructions. Garden-path sentences like this are taken to be ungrammatical, not because they are actually ungrammatical but because the initial misanalysis is so dominant that the correct syntactic analysis (which is much less common) can never be achieved.

This garden-path effect, though seemingly trivial, can have a serious impact on grammaticality judgments. In the following, I take resumption in head-final relative clauses as an illustration. In section 2.2, we already saw that resumption in English challenges the fine distinction between grammaticality and processability. Resumption in head-final relative clauses is additionally complicated by the issue of garden paths. A crucial difference between head-initial and head-final relative clauses is the linear relationship of the filler and the gap.

8 See Yang (2004) and Gómez (2007) for useful reviews on statistical learning.
9 Certainly, the frequency of syntactic patterns is but one of the many factors that guide the parser's syntactic decisions. These other factors include, though are not limited to, contextual information (Crain & Steedman, 1985), lexical information (MacDonald et al., 1994), and principles of minimality such as minimal attachment and late closure (Frazier, 1987). Also see McRae et al. (1998) for discussion of various constraints. Our discussion in this section focuses only on the top-down templatic effect in terms of syntactic structure.
10 Townsend and Bever (2001) recast this NVN heuristic of Bever (1970) as Pseudosyntax in their Late Assignment of Syntax Theory (LAST). In this theory of sentence comprehension, the inputs are quickly mapped onto the NVN pattern in Pseudosyntax. Then, at the second stage (called Real Syntax), reanalysis takes place if necessary.


In a head-initial relative clause, the antecedent (i.e. the head noun) precedes the relative clause, in which the gap is located. The parser encounters the filler first and starts to search for a gap. In a head-final relative clause, however, the relative clause precedes the filler. Without overt marking on the left boundary of the relative clause, the parser cannot be certain that it is in a relative clause until it reaches the relativizer and the head noun. This temporary ambiguity may induce a main-clause garden path at the prenominal relative-clause region. The contrast is illustrated in (16-17).

(16) Filler-gap relationship in a head-initial relative clause (English):
     head noun (FILLER)  relativizer  [ … GAP … ]RC
     FILLER --------------------------- GAP
     the guy that [John met GAP yesterday]

(17) Filler-gap relationship in a head-final relative clause (Chinese):
     [ … GAP … ]RC  relativizer  head noun (FILLER)
     GAP ---------------------------------------------- FILLER
     [Zhangsan   zuotian    pengdao  GAP]  de           ren
      Zhangsan   yesterday  met            relativizer  guy
     'the guy that Zhangsan met yesterday'

The comparison between (16) and (17) reveals the intrinsic difference between processing a filler-gap and a gap-filler relationship. In a filler-gap relationship, the dependency is constructed following a gap-searching strategy such as the Active Filler Strategy (Frazier & Clifton, 1989). Once a filler (and the relativizer) is encountered, it is posited in working memory as part of a dependency that must be resolved, and the processor actively seeks a possible gap to complete this dependency. In a gap-filler relationship, however, the gap position (without any overt marking) cannot be detected or confirmed until the relativizer and the head noun are reached. The gap itself does not initiate a filler-searching process. Temporary ambiguity and a main-clause garden path are therefore more likely to occur in head-final relative clauses.

Lin and Bever (2007a) specifically looked into the issue of garden paths in head-final relative clauses. We compared normal self-paced reading of Chinese sentences containing relative clauses (with the possibility of a garden path at the relative-clause region) with reading of the same sentences accompanied by specific instructions about where the relative clause is located (so that the garden path is removed). The reading patterns showed a significant reduction in reading time at the relativizer and head-noun regions when specific instructions on the existence and location of the relative clauses were given. We concluded that in Chinese, where relative clauses precede the head nouns, the parser does experience uncertainty and possibly a main-clause garden path prior to the relativizer and head-noun regions. The parser initially takes the gapped clause to be a main clause; reanalysis then takes place when the disambiguating regions (i.e. the relativizer and the head noun) are reached. This difficulty was reduced when the readers were instructed that they were reading relative clauses.

The issue of garden paths is worsened in head-final relative clauses that contain resumptive pronouns. Resumptive pronouns, again, are pronouns that fill the gap positions of relative clauses. The resumptive versions of (16) and (17) are given in (18-19):

(18) Filler-gap relationship in a resumptive head-initial relative clause (English):
     head noun (FILLER)  relativizer  [ … PRONOUN … ]RC
     the guy that [John met HIM yesterday]


(19) Filler-gap relationship in a resumptive head-final relative clause (Chinese):
     [ … PRONOUN … ]RC  relativizer  head noun (FILLER)
     [Zhangsan   zuotian    pengdao  TA]   de           ren
      Zhangsan   yesterday  met      him   relativizer  guy
     'the guy that Zhangsan met him yesterday'

In head-initial relativization, a resumptive pronoun spells out the gap position, increases the saliency of the gap location, and thus enhances the processing of the filler-gap dependency. In head-final relativization, however, a gap position that is filled by a resumptive pronoun strengthens the main-clause reading. In (19), for example, the relative clause initially reads like a main clause (Zhangsan met him yesterday), with him being a pronoun that has no overt referent in the immediate discourse.11

In Ning and Lin (2007), we looked into the issue of resumption in Chinese by collecting language users' grammaticality judgments with questionnaires. In almost all the cases we investigated, participants rated gapped relative clauses as more acceptable than resumptive relative clauses (e.g. 20). This was true even in subject-extracted relative clauses with enlarged filler-gap distances (created by inserting multiple adjunct phrases), as in (21). The use of resumption to enhance the processing of longer filler-gap dependencies, as in English, was not replicated in Chinese.

(20) [(ta)  fangwen  meiguo]RC  de   nawei  jiaoshou …
      he    visit    USA        rel  that   professor
     'the professor that (he) visited the USA …'

(21) [(ta)  jidongdi   zai  huiyi    shang  zhuiwen  jinfei  liuxiang]RC  de   nawei  weiyuan …
      he    anxiously  at   meeting  on     ask      money   go           rel  that   member
     'the member that (he) anxiously asked where the money went at the meeting …'

The only instance in which resumption fared better than a gapped clause was when the gap was preceded by a preposition, as in (22). Since a gap cannot be case-checked by a preposition, a resumptive pronoun rescues this violation. We concluded that resumption in Chinese is primarily a grammatical device.12

(22) [houxuanren  xiang  (ta)  wenhao]RC  de   nawei  hushi …
      candidate   to     her   greet      rel  that   nurse
     'the nurse that the candidate greeted to (her) …'

In a follow-up study (Ning & Lin, 2008), we conducted self-paced reading tasks on these resumptive and gapped relative clauses. Consistent with the grammaticality judgment data, the results demonstrated significant increases in reading time on the head nouns (i.e. the disambiguating region) of relative clauses that contained resumptive pronouns.

11 Another issue in Chinese relative clauses has to do with the ambiguity of the relativizer de, which can also be taken as a genitive marker. A potential misanalysis of these relative clauses is to take the resumptive pronoun as a possessor and the head noun as a possessed NP. Thus, [Zhangsan yesterday met him] de guy in (19) can be misanalyzed as Zhangsan yesterday met [he de] (his) guy. Since this is not a result of a direct NVN misanalysis, we only refer to it briefly.
12 I leave open the possibility that the grammaticality of resumption in Chinese is also affected by the sense of processability.


This can be interpreted as head-final relative clauses with resumptive pronouns being mistaken for main clauses (because of their apparently argument-complete NVN sequences). When the head noun is reached, the parser encounters a challenge to the previous analysis and needs to reanalyze.

Thesis V: Garden path reduces grammaticality.

The literature on head-final resumptive relative clauses (e.g. Aoun and Li, 2003) has rarely considered garden paths as an interfering factor in grammaticality judgments. To judge the grammaticality of these sentences objectively, the parser needs to refrain from garden-path readings. Most of the judgments prefer the gapped conditions to the resumptive ones, and it is likely that misreadings based on the dominant NVN pattern in Chinese interfered with the grammaticality judgments. Head-final resumption thus becomes a confounded case for reliable grammaticality judgments (in pursuit of an explanation purely based on the UGPE).

To sum up, even though the parser is sensitive to frequency and recency effects of syntactic structures, such information should not be included in UGPE. These are, instead, candidate effects of MG, where statistical patterns affect parsing. These factors nevertheless have an impact on grammaticality judgments: sentences that contain common syntactic structures are more likely to be judged acceptable, and sentences that are liable to misreading tend to be judged unacceptable even though they are grammatical.

2.4. Processing is shallow; thematic misassignments linger.

In grammaticality judgments, it is usually assumed that the mind will resort to UGPE and seek an ultimate grammatical solution if such a solution is available. This assumption is questionable, as not all grammatical solutions are equally easy to arrive at and not all human processors are capable of the computation needed to arrive at one. When the processing load involved is beyond the mental limits, the parser breaks down. Though a reanalysis is called for, it may not be effectively executed; the processor terminates the parse, acquiring only fragmented interpretations of the sentence. Our earlier example the horse raced past the barn fell also illustrates this point. Most people sense the anomaly in the initial misparse; they are, nevertheless, not able to arrive at a grammatical solution. As a consequence, this sentence is usually understood as the horse raced past the barn (and) fell, with and somehow missing. In addition to the oddity of the reduced relative construction, the fact that the thematic roles were wrongly assigned also aggravates the garden path. Revising thematic misassignments is costly: human processors would rather stick to the initial (mis)analysis, ignoring minor ungrammaticality, than revise the thematic interpretations and syntactic structures altogether.

Thesis VI: Thematic misassignments are costly to recover from.

"Good-enough processing" and lingering misanalyses have been explored by Christianson et al. (2001), who examined garden-path sentences like (23). Following the dominant NVN sequence in English, an SVO analysis is naturally preferred in the initial analysis: the subject of the second clause gets "stolen" as the object of the initial subordinate clause in the garden path. As a consequence, the deer is initially interpreted as the animal hunted by the man, though this analysis needs to be revised as soon as the main verb ran appears.

(23) While the man hunted the deer ran into the woods.
     (initial misparse: the man = S, hunted = V, the deer = O; required reanalysis: the deer = S, ran = V)
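A toy rendering of this template-driven first pass (a sketch with a hypothetical word list, not Christianson and colleagues' model) shows how the NVN heuristic "steals" the deer before the main verb is ever reached.

# Toy NVN-first ("good enough") parse of (23); the word classes are hypothetical simplifications.
def nvn_parse(words):
    """Greedily map a word string onto the canonical N-V-N template."""
    nouns = {"man", "deer", "woods"}
    verbs = {"hunted", "ran"}
    roles, pending_verb = {}, None
    for w in words:
        if w in verbs and pending_verb is None:
            pending_verb = w
        elif w in nouns and pending_verb is None:
            roles.setdefault("agent", w)
        elif w in nouns and pending_verb is not None:
            roles["verb"], roles["patient"] = pending_verb, w
            break
    return roles

sentence = "while the man hunted the deer ran into the woods".split()
print(nvn_parse(sentence))   # {'agent': 'man', 'verb': 'hunted', 'patient': 'deer'}
# The template-driven first pass assigns 'deer' as the patient of 'hunted'; when 'ran'
# arrives, the subordinate clause must be reanalyzed as intransitive, yet the misassigned
# interpretation ('the man hunted the deer') tends to linger, as discussed next.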


Christianson et al. found that even though successful syntactic reanalyses were ultimately achieved, as the participants were able to correctly answer comprehension questions (e.g. did the deer run into the woods?), the initially assigned incorrect thematic role was resistant to reanalysis. Christianson et al. explained that the human parser completed the syntactic reanalysis by making sure that there was a subject for the matrix clause. The reanalysis then stopped, leaving the initial thematic assignment untouched. That is, in (23), the interpretation that the man hunted the deer lingers till the end of the sentence (coexisting with the deer ran into the woods). This creates a "good-enough" interpretation that solves the syntactic problem of the matrix clause but leaves untouched the inconsistent thematic assignment in the subordinate clause: "The reanalysis, then, is good enough to take care of the severe syntactic problem posed by the subjectless verb and create a licit structure, but not good enough to lead to the construction of a syntactic representation and global interpretation for the entire sentence that are consistent with the input string" (Christianson et al., 2001: 397).

An important implication of such good-enough processing is that the cognitive system does not necessarily arrive at "complete and detailed representations" of the input. Good-enough representations may be created as a partial solution to the issue at hand. In terms of grammaticality judgments, the linguistic input may not always be fully analyzed; grammaticality judgments are subject to partial analyses of the input. A good-enough analysis can categorize a sentence as grammatical or ungrammatical even though a complete analysis was never achieved.13

Thesis VII: A complete linguistic analysis may not be achieved when a partial analysis is good enough.

The good-enough strategy in processing should be seen as a strategy adopted to minimize the consumption of processing resources (e.g. working memory). Christianson et al. (2006) found that older adults (age 73-75) adopted this strategy (maintaining the incorrect thematic assignments) more often than younger adults (age 20-22). They attributed the difference to older adults having difficulty restructuring a garden-path sentence and thus relying on the good-enough interpretation. The good-enough strategy can be taken as a strategy for coping with limited processing power (i.e. limited working-memory capacity) at the level of MG. The results of this strategy should therefore be seen as data at the level of MG, not UGPE.

2.5. The brain is finite; language is infinite.

Language at the level of UGPE is purely about the structural manipulation of forms. As language happens in the human brain, however, it is enabled and constrained by the brain's processing power. When we reflect on our own language, as when making introspective grammaticality judgments, we are inevitably operating at the level of MG, trying to speculate on what the UGPE is composed of. Our judgments are constrained by how well the brain is capable of processing the linguistic input.

13 For examples of good-enough shallow processing of Chinese relative clauses, see Chapter 5 of Lin (2006) and Lin & Bever (2007b).


Such a limitation imposed by the brain results in the effect of processability on grammaticality judgments (discussed in 2.2), the template-induced garden-path effects (discussed in 2.3), and the need to resort to good-enough shallow processing (discussed in 2.4). In this section, we consider the ultimate origin of this limitation—our limited working-memory capacity, which is the primary constraint on MG.

Limited working memory constrains the amount of linguistic input one can deal with at a time. Two main types of costs in sentence processing consume working memory—the cost of storing the linguistic input and the cost of integrating linguistic materials (Caplan & Waters, 1999; Gibson, 1998).14 Research on working memory debates whether syntactic and other cognitive tasks consume the same or different pools of working memory (same: Fedorenko et al., 2006, 2007; Just & Carpenter, 1992; different: Caplan & Waters, 1999). The consensus, nevertheless, is that people with different working-memory capacities perform differently on linguistic tasks.

Thesis VIII: Different working-memory capacities induce different sentence processing strategies in the MG.

Individual differences in working-memory capacity affect linguistic performance. When participants' working memory is burdened, the effect of processing difficulty becomes excessively detrimental. This contrast can be observed when the same subjects read materials of different difficulty and when the same materials are read by subjects with different working-memory capacities. Linguistic materials that are beyond one's processing capacity are liable to be judged ungrammatical, and participants who do not have sufficient working memory for grammaticality judgments on complex sentences do not produce reliable judgments.

Just and Carpenter (1992) contrasted self-paced reading of simple subject and object relative clauses (24-25) among participants with high, mid, and low working-memory spans.

(24) The reporter that attacked the | senator | admitted | the error. (subject relative clause)
(25) The reporter that the senator | attacked | admitted | the error. (object relative clause)

They found that reading-time differences between participants with different working-memory spans appeared only in the critical regions (regions 2 and 3) of the object relative clauses. While participants with high working-memory spans performed only slightly worse when they read object relative clauses, those with mid and low spans showed much greater difficulty with object relatives. This shows that the limitation of working memory becomes a crucial factor when the materials are difficult to process. Just and Carpenter also compared the processing of a garden-path sentence (26) with that of an unambiguous sentence (27) to gauge the degree of garden-pathing among readers with different working-memory spans. To their surprise, participants with high working-memory spans produced longer reading times toward the end of the garden-path sentences. They suggested that readers with higher memory capacities attempt to maintain multiple parses for a longer time and are thus burdened toward the end of the sentence, whereas readers with low working-memory capacities quickly settle on the preferred interpretation, since they do not have sufficient working memory to maintain multiple parses.

(26) The soldiers | warned about the dangers | before the midnight | raid.
(27) The soldiers | spoke about the dangers | before the midnight | raid.
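The storage-and-integration picture behind these contrasts can be sketched with illustrative numbers (a rough, hypothetical rendering loosely in the spirit of Gibson (1998), not his exact metric): the burden of a dependency grows with the material intervening between a verb and the argument it must be integrated with, so the object relative in (25) accumulates more cost at its verbs than the subject relative in (24).

# Illustrative dependency-distance costs; the counts are hypothetical, not Gibson's metric.
def total_cost(dependencies):
    """dependencies: list of (verb, argument, intervening_words)."""
    return sum(distance for _, _, distance in dependencies)

# (24) the reporter that attacked the senator admitted the error
subject_rc = [("attacked", "reporter", 1),   # only 'that' intervenes
              ("admitted", "reporter", 4)]   # 'that attacked the senator' intervenes
# (25) the reporter that the senator attacked admitted the error
object_rc = [("attacked", "senator", 0),
             ("attacked", "reporter", 3),    # 'that the senator' intervenes
             ("admitted", "reporter", 4)]    # 'that the senator attacked' intervenes

print("subject RC cost:", total_cost(subject_rc))  # lower total burden
print("object RC cost:", total_cost(object_rc))    # extra burden at the embedded verb

Readers with smaller working-memory spans would be expected to feel the extra burden of (25) most sharply at the embedded-verb region, which is where the span-related differences reported above emerged.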

14 Recent computational theories of sentence processing (Lewis, 1996; Lewis et al., 2006) suggest focusing on the cost of retrieval (i.e. integration) in sentence comprehension.


The garden-path finding suggests that readers with different working-memory capacities adopt different reading strategies to cope with difficult reading materials.15 Their proposal is reminiscent of the good-enough processing strategy of Ferreira and colleagues (Ferreira, 2003; Ferreira & Patson, 2007) discussed in 2.4. In Christianson et al. (2006), older adults (with limited working-memory spans) adopted the good-enough interpretations more often than younger adults. Differences in working memory motivate different strategies for dealing with linguistic materials. These different strategies also underlie grammaticality judgments, which shed light on the MG, not the UGPE.

3. Conclusion

Decisions about grammar (e.g. grammaticality judgments and parsing) are jointly influenced by factors at various levels. It is proposed that three layers of explanation in grammar should be distinguished—universal grammar par excellence (UGPE), mental grammar (MG), and grammar in use (GU). To explore the essential properties of human language at the level of UGPE, one needs to tease out factors in both MG and GU. GU has traditionally been distinguished from UGPE in the competence-performance distinction; the effects of MG on UGPE, however, have not been sufficiently explored.16 The following theses about MG have been discussed in this chapter:

Thesis I: Grammar resides in the head.
Thesis II: Grammaticality judgments are introspections on the mental grammar.
Thesis III: A parse that is grammatical can be suppressed by a competing parse that is more dominant.
Thesis IV: Processability enhances grammaticality.
Thesis V: Garden path reduces grammaticality.
Thesis VI: Thematic misassignments are costly to recover from.
Thesis VII: A complete linguistic analysis may not be achieved when a partial analysis is good enough.
Thesis VIII: Different working-memory capacities induce different sentence processing strategies in the MG.

With the knowledge about how the mind works that has accumulated in the cognitive sciences (e.g. Halford et al., 2007; Hauser et al., 2002), we are in a position to refine the distinctions between these different layers of contribution to linguistic judgments. The level of grammar on which any explanation is based should be critically evaluated and specified. The MG is like the tip of an iceberg through which we examine the UGPE—the larger body of ice under the water. When an iceberg is placed in water, only a small part (about one-tenth) can be seen above the surface. To determine what is under the water, it is important to understand how representative the tip is of the whole iceberg, and what factors enable it to float. With language, we can only do so by understanding the processing power of the brain that enables and constrains language.

15 For the effect of working-memory spans on antecedent priming in cross-modal lexical decisions, see Nakano et al. (2002).
16 There have, nevertheless, been lines of research that attempt to derive grammatical constraints, such as island constraints and superiority effects, from processing. See Kluender (1998), Phillips (2006), and Hawkins (1994, 1999, 2004).


References

ALEXOPOULOU, DORA and KELLER, FRANK. 2007. Locality, cyclicity and resumption: At the interface between grammar and the human sentence parser. Language, 83.110-60.
AOUN, JOSEPH and LI, YEN-HUI AUDREY. 2003. Essays on the Representational and Derivational Nature of Grammar: The Diversity of Wh-Constructions. Cambridge, MA: MIT Press.
ASLIN, R., SAFFRAN, J. and NEWPORT, E. 1998. Computation of conditional probability statistics by 8-month-old infants. Psychological Science, 9.321-24.
BEVER, THOMAS G. 1970. The cognitive basis for linguistic structures. Cognition and the Development of Language, ed. by John R. Hayes, 279-362. New York: Wiley.
BOCK, J. K. 1986. Syntactic persistence in language production. Cognitive Psychology, 18.355-87.
BRANIGAN, H. P., PICKERING, M. J. and CLELAND, A. A. 2000. Syntactic coordination in dialogue. Cognition, 75.B13-B25.
CAPLAN, D. and WATERS, G. S. 1999. Verbal working memory and sentence comprehension. Behavioral and Brain Sciences, 22.77-94.
CHOMSKY, NOAM. 1957. Syntactic Structures. The Hague: Mouton.
—. 2005. On phases. Cambridge, MA.
—. 2006. Approaching UG from below. Cambridge, MA.
CHRISTIANSON, K., HOLLINGWORTH, A., HALLIWELL, J. and FERREIRA, F. 2001. Thematic roles assigned along the garden path linger. Cognitive Psychology, 42.368-407.
CHRISTIANSON, K., WILLIAMS, C., ZACKS, R. and FERREIRA, F. 2006. Younger and older adults' good enough interpretations of garden-path sentences. Discourse Processes, 42.205-38.
CONNINE, C. M., MULLENNIX, J. W., SHERNOFF, E. and YELEN, J. 1990. Word familiarity and frequency in visual and auditory word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16.1084-96.
COWART, WAYNE. 1997. Experimental Syntax: Applying Objective Methods to Sentence Judgments. Thousand Oaks, CA: Sage Publications.
CRAIN, STEPHEN and STEEDMAN, MARK. 1985. On not being led up the garden path: The use of context by the psychological parser. Natural Language Parsing: Psychological, Computational, and Theoretical Perspectives, ed. by D. R. Dowty, L. Karttunen and A. M. Zwicky, 320-58. New York: Cambridge University Press.
CULICOVER, PETER W. and JACKENDOFF, RAY. 2005. Simpler Syntax. New York: Oxford University Press.
—. 2006. The simpler syntax hypothesis. Trends in Cognitive Sciences, 10.413-18.

F. 1915 [1974]. Course in General Linguistics. London: Fontana.

DURANTI, ALESSANDRO. 1994. From Grammar to Politics: Linguistic Anthropology in a Western Samoan Village. Berkeley and Los Angeles: University of California Press.

18

EDELMAN, S. and CHRISTIANSEN, M. H. 2003. How seriously should we take Minimalist syntax? TICS, 7, 60-61. Trends in Cognitive Sciences, 7.60-61. FEDORENKO, EVELINA, GIBSON, EDWARD and RHODE, DOUGLAS. 2006. The nature of working memory capacity in sentence comprehension: Evidence against domainspecific working memory resources Journal of Memory and Language, 54.541-53. —. 2007. The nature of working memory in linguistic, arithmetic and spatial integration processes. Journal of Memory and Language, 56.246-69. FERREIRA, F. and PATSON, N. 2007. The good enough approach to language comprehension. Language and Linguistics Compass, 1.71-83. FERREIRA, FERNANDA. 2003. The misinterpretation of noncanonical sentences. Cognitive Psychology, 47.164-203. FEYERABEND, P. K. 1975. Against method: Outline of an Anarchistic Theory of Knowledge. London: New Left Books. FORSTER, KEN I. F. and CHAMBERS, S. M. 1973. Lexical access and naming time. Journal of Verbal Learning and Verbal Behavior, 12.627-35. FRAZIER, LYN. 1987. Sentence Processing: A Tutorial Review. Attention and Performance: Vol. 12 The Psychology of Reading, ed. by M. Coltheart, 559-86. Hove, England: Erlbaum. FRAZIER, LYN and CLIFTON, CHARLES JR. 1989. Successive cyclicity in the grammar and the parser. Language and Cognitive Processes, 4.93-126. GIBSON, EDWARD. 1998. Linguistic Complexity: Locality of Syntactic Dependencies. Cognition, 68.1-76. GOLDBERG, ADELE E. 2006. Constructions at Work: The Nature of Generalization in Language. Oxford: Oxford University Press. GÓMEZ, REBECCA. 2007. Statistical learning in infant language development. The Oxford Handbook of Psycholinguistics, ed. by M. Gareth Gaskell. Oxford: Oxford University Press. GREENBERG, JOSEPH H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. Universals of Language, ed. by Joseph H. Greenberg. Cambridge, MA: The MIT Press. HALFORD, GRAEME S., COWAN, NELSON and ANDREWS, GLENDA. 2007. Separating cognitive capacity from knowledge: A new hypothesis Trends in Cognitive Sciences, 11.236-42. HANSON, N. R. 1958. Patterns of Discovery. Cambridge, UK: Cambridge University Press. HAUSER, M. D., NEWPORT, E. L. and ASLIN, R. N. 2001. Segmentation of the speech stream in a nonhuman primate: Statistical learning in cotton-top tamarins. Cognition, 78.B53B64. HAUSER, M. D., CHOMSKY, NOAM and FITCH, W. T. 2002. The faculty of language: What is it, who has it, and how did it evolve? . Science, 298.1569-79. HAWKINS, JOHN A. 1994. A Performance Theory of Order and Constituency. Cambridge, UK: Cambridge University Press. —. 1999. Processing Complexity and Filler-Gap Dependencies Across Grammars. Language, 75.244-85.

19

—. 2004. Efficiency and Complexity in Grammars. Oxford: Oxford University Press. HER, ONE-SOON and WAN, I-PING. 2007. Corpus and the nature of grammar revisited Concentric: Studies in Linguistics, 33.67-111. JUST, M. A. and CARPENTER, P. A. . 1992. A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 98.122-49. KING, JONATHAN and JUST, MARCEL ADAM. 1991. Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30.580602. KLUENDER, ROBERT. 1998. On the distinction between strong and weak islands: A processing perspective. Syntax and Semantics, 29.241-79. LANGENDOEN, D. TERENCE. 1972. The problem of grammaticality. Peabody Journal of Education, 50.20-23. LEDOUX, KERRY, TRAXLER, MATTHEW J. and SWAAB, TAMARA Y. 2007. Syntactic Priming in Comprehension: Evidence from Event-Related Potentials. Psychological Science, 18.135-43. LEWIS, R. L. 1996. Interference in short-term memory: The magical number two (or three) in sentence processing. Journal of Psycholinguistic Research, 25.93-115. LEWIS, R. L., VASISHTH, SHRAVAN and VANDYKE, JULIE A. 2006. Computational principles of working memory in sentence comprehension. Trends in Cognitive Sciences, 10.447-54. LIN, CHIEN-JER CHARLES. 2006. Grammar and Parsing: A Typological Investigation of Relative-Clause Processing, Joint Ph.D. Program in Anthropology and Linguistics, The University of Arizona. LIN, CHIEN-JER CHARLES and BEVER, THOMAS G. 2007a. Processing head-final relative clauses without garden paths. The International Conference on Processing Head-Final Structures. Rochester Institute of Technology, Rochester, NY —. 2007b. Syntactic anomaly beyond semantic rescue. The First Conference on Language, Discourse and Cognition (CLDC-1). National Taiwan University, Taipei LOEBELL, H. and BOCK, J. K. 2003. Structural priming across languages. Linguistics, 41.791– 824. MACDONALD, M. C., PEARLMUTTER, N. J. and SEIDENBERG, M. S. 1994. The Lexical Nature of Syntactic Ambiguity Resolution. Psychological Review, 101.676-703. MCKEE, CECILE and MCDANIEL, DANA. 2001. Resumptive pronouns in English relative clauses. Language Acquisition, 9.113-56. MCRAE, KEN, SPIVEY-KNOWLTON, M. J. and TANENHAUS, MICHAEL K. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension Journal of Memory and Language, 38.283-312. NAKANO, YOKO, FELSER, CLAUDIA and CLAHSEN, HARALD. 2002. Antecedent Priming at Trace Positions in Japanese Long-Distance Scrambling. Journal of Psycholinguistic Research, 31.531-71. NEWMEYER, FREDERICK J. 2005. Possible and Probable Languages: A Generative Perspective on Linguistic Typology. Oxford: Oxford University Press.

20

NING, LI-HSIN and LIN, CHIEN-JER C. 2007. What are resumptives good for? The International Conference on Processing Head-Final Structures. Rochester Institute of Technology, Rochester, NY —. 2008. Resumptives in Mandarin: Syntactic versus processing accounts. The Syntax Session of the 18th International Congress of Linguistics (ICL18). Korea University, Seoul, Korea NUSBAUM, H. C. and DEDINA, M. 1985. The effects of word frequency and subjective familiarity on visual lexical decisions. Research on Speech Perception (Progress Report No. 11). Bloomington: Indiana University PETERSON, MARY A. and KIM, J. H. 2001. On what is bound in figures and grounds. Visual Cognition, 8.329-48. PETERSON, MARY A. and SKOW-GRANT, E. 2003. Memory and learning in figure-ground perception (Volume: Cognitive Vision). Psychology of Learning and Motivation, 42.1-34. PHILLIPS, C. 2006. The real-time status of island phenomena. Language, 82.795-823. PHILLIPS, C. and LASNIK, H. 2003. Linguistics and empirical evidence. Trends in Cognitive Sciences, 7.61-62. PICKERING, M. J. and BRANIGAN, H. P. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39.63351. POPPER, K. R. 1968. The Logic of Scientific Discovery. London: Hutchinson. RUBENSTEIN, H., GARFIELD, L. and MILLIKAN, J. A. 1970. Homographic entries in the internal lexicon. Journal of Verbal Learning and Verbal Behavior, 9.487-94. SAFFRAN, J., ASLIN, R. and NEWPORT, E. 1996. Statistical learning by eight-month-old infants. Science, 274.1926-28. SCHÜTZE, CARSON T. 1996. The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. Chicago and London: The University of Chicago Press. TOWNSEND, DAVID J. and BEVER, THOMAS G. 2001. Sentence Comprehension: The Integration of Habits and Rules. Cambridge, MA: MIT Press. TRUESWELL, JOHN C., TANENHAUS, MICHAEL K. and GARNSEY, SUSAN M. 1994. Semantic Influences on Parsing: Use of Thematic Role information in Syntactic Ambiguity Resolution. Journal of Memory and Language, 33.285-318. YANG, CHARLES. 2004. Universal grammar, statistics or both? Trends in Cognitive Sciences, 8.

21
