FOCUS, NUMERALS, AND NEGATIVE POLARITY ITEMS IN JAPANESE*
Kimiko Nakanishi
University of Calgary
[email protected]

1. Negative Polarity Items in Japanese

Negative polarity items (NPIs) are words and expressions that appear only in negative sentences and some other related environments (see Ladusaw 1979, among others). For instance, English anyone can appear in negative, but not positive, sentences, as in (1).

(1) a.  Alan didn't see anyone.
    b. *Alan saw anyone.

In Japanese, the same distributional restriction can be observed with an indeterminate pronoun or an expression of a minimal amount that is followed by the particle -mo, as in (2). I turn to the question of what -mo means in section 4.1. In the following, I set aside the expression with an indeterminate (but see section 4.2.4 for some discussion), and focus on the expression with one. Since the expression with one in (2) behaves like an NPI, I assume here that it is an NPI and call it one NPI in the following.1

(2) a.  Alan-wa {dare-mo / hito-ri-mo} mi-nakat-ta.
        Alan-TOP {who-MO / one-CL-MO} see-NEG-PAST
        'Alan didn't see anyone.'
    b. *Alan-wa {dare-mo / hito-ri-mo} mi-ta.
        Alan-TOP {who-MO / one-CL-MO} see-PAST
        'Alan saw anyone.'

Numerals in Japanese must be followed by a classifier, a morpheme that indicates the semantic class of a quantified element in terms of shape, size, animacy, and so on (see Downing 1996 for details). For instance, the classifier -ri in (2) is used to count human beings (e.g., boys, girls, men, women). Although there is no overt expression for human in (2), one-CL + -mo may occur with an overt NP, as in (3) and (4).

* This paper was presented at the Workshop on Focus held at the National Institute of Informatics in Tokyo, November 2005. I would like to thank the audiences and the organizers Makoto Kanazawa and Chris Tancredi. I am very grateful to an anonymous reviewer for valuable comments on an earlier draft of this paper, and to David Beaver, Daniel Büring, Makoto Kanazawa, Maribel Romero, and Chris Tancredi for comments at different stages of this work.

1 Strictly speaking, the distribution of English any and that of the Japanese expressions in (2) are not the same, and for this reason one may argue that the expressions at issue should not be considered as NPIs. See section 5 for discussion. Furthermore, I show in section 6 that the Japanese expressions at issue correspond to so-called strong NPIs (e.g., lift a finger, stressed ANY), and not to weak NPIs such as unstressed any in (1).

(3) a.  Alan-wa otokonoko-o hito-ri-mo mi-nakat-ta.2
        Alan-TOP boy-ACC one-CL-MO see-NEG-PAST
        'Alan didn't see any boy(s).'
    b. *Alan-wa otokonoko-o hito-ri-mo mi-ta.
        Alan-TOP boy-ACC one-CL-MO see-PAST
        'Alan saw any boy(s).'

(4) a.  Alan-wa hito-ri-no otokonoko-mo mi-nakat-ta.
        Alan-TOP one-CL-GEN boy-MO see-NEG-PAST
        'Alan didn't see any boy(s).'
    b. *Alan-wa hito-ri-no otokonoko-mo mi-ta.3
        Alan-TOP one-CL-GEN boy-MO see-PAST
        'Alan saw any boy(s).'

The one NPIs in (3a) and (4a) are both translated as 'any boy(s)', and thus both (3a) and (4a) assert that Alan saw no boy(s). In this paper, I examine semantic differences between the two sentences (as well as another example of one NPI to be introduced shortly) that have not been discussed in the literature. For instance, while (3a) is compatible with a context where Alan saw some girls (although he saw no boys), (4a) is infelicitous in such a context. I account for the semantic differences between the two by arguing that -mo is a focus particle analogous to English even (or Hindi bhii under Lahiri's (1998) analysis) and that -mo may associate with different focus sites. More specifically, -mo in (3a) associates with the cardinality predicate one, whereas -mo in (4a) associates with the cardinality predicate one plus the NP boy(s).

The structure of the paper is as follows. In section 2, I show the semantic differences between the two one NPIs in (3a) and (4a) in more detail, and further introduce another one NPI that semantically behaves like the one NPI in (4a). Section 3 presents a brief summary of the semantics of even and of Lahiri's (1998) compositional analysis of Hindi NPIs. In section 4, I account for the semantic differences among Japanese one NPIs by extending Lahiri's analysis. I first argue that one NPIs consist of the cardinality predicate one and the focus particle even. Then I account for the distribution and semantic properties of one NPIs by examining what even associates with. In section 5, I examine one NPIs that occur in NPI-licensing contexts other than negative sentences (e.g., the restrictor of every). Section 6 concludes and discusses further issues.

2. Japanese Negative Polarity Items with One

Numerals in Japanese are known to be able to appear in various locations, as in (5). For ease of exposition, I refer to the three configurations as Type I, Type II, and Type III, respectively.

2 Nouns in Japanese lack an obligatory grammatical marking of definiteness and of plurality, and bare nouns can be used freely as arguments, as in (i).
   (i) Otokonoko-ga inu-o mi-ta.
       boy-NOM dog-ACC see-PAST
       'A boy/boys/the boy(s) saw a dog/dogs/the dog(s).'

3 Example (4b) is judged to be acceptable if -mo is interpreted as 'also', which yields the interpretation that Alan also saw one boy. See section 4.2.2 for this point.


The structures of the three are schematized in (6). Regardless of the structural differences, the three sentences are truth-conditionally equivalent: they all assert that Alan saw one dog at the park.4

(5) a.  Alan-wa kooen-de inu-o ip-piki mi-ta.
        Alan-TOP park-at dog-ACC one-CL see-PAST
        'Alan saw one dog at the park.'
    b.  Alan-wa kooen-de ip-piki-no inu-o mi-ta.
        Alan-TOP park-at one-CL-GEN dog-ACC see-PAST
    c.  Alan-wa kooen-de inu ip-piki-o mi-ta.
        Alan-TOP park-at dog one-CL-ACC see-PAST

(6) Type I numeral:   NP-CASE one-CL
    Type II numeral:  one-CL-GEN NP-CASE
    Type III numeral: NP one-CL-CASE

When the numerals in (5) are placed in negative contexts, as in (7), they simply yield the reading that Alan didn't see one dog at the park.

(7) a.  Alan-wa kooen-de inu-o ip-piki mi-nakat-ta.
        Alan-TOP park-at dog-ACC one-CL see-NEG-PAST
        'Alan didn't see one dog at the park.'
    b.  Alan-wa kooen-de ip-piki-no inu-o mi-nakat-ta.
        Alan-TOP park-at one-CL-GEN dog-ACC see-NEG-PAST
    c.  Alan-wa kooen-de inu ip-piki-o mi-nakat-ta.
        Alan-TOP park-at dog one-CL-ACC see-NEG-PAST

In section 1, we have seen that the numerals in (5a) and (5b) (i.e., Type I and II) can occur with the particle -mo and serve as an NPI, as in (8) and (9). In Type I one NPI (8), -mo must be attached to the numeral, while in Type II one NPI (9), -mo must be used instead of the case marker (in this case, the accusative marker -o).

(8) Type I:
    a.  Alan-wa kooen-de inu-o ip-piki-mo mi-nakat-ta.      (cf. (3))
        Alan-TOP park-at dog-ACC one-CL-MO see-NEG-PAST
        'Alan didn't see any dog(s) at the park.'
    b. *Alan-wa kooen-de inu-o ip-piki-mo mi-ta.
        Alan-TOP park-at dog-ACC one-CL-MO see-PAST

(9) Type II:
    a.  Alan-wa kooen-de ip-piki-no inu-mo mi-nakat-ta.     (cf. (4))
        Alan-TOP park-at one-CL-GEN dog-MO see-NEG-PAST
        'Alan didn't see any dog(s) at the park.'
    b. *Alan-wa kooen-de ip-piki-no inu-mo mi-ta.
        Alan-TOP park-at one-CL-GEN dog-MO see-PAST

4 It is not entirely true that the three sentences in (5) have exactly the same meaning. For instance, a partitive interpretation is salient in (5a), but not in the other two (Inoue 1978, Fujita 1994, Hamano 1997). I put this issue aside because the main focus of the paper is not numerals, but NPIs consisting of one and the particle -mo.


Type III is an interesting case in that it can yield an NPI interpretation, just as Type I and II can, but it resists the overt presence of -mo. The examples in (10) show this point.

(10) Type III:
     a.  Alan-wa kooen-de inu ip-piki(*-mo) mi-nakat-ta.
         Alan-TOP park-at dog one-CL(-MO) see-NEG-PAST
         'Alan didn't see any dog(s) at the park.'
     b. *Alan-wa kooen-de inu ip-piki(-mo) mi-ta.
         Alan-TOP park-at dog one-CL(-MO) see-PAST

The three types of one NPIs presented in (8), (9), and (10) are schematized in (11).

(11) Type I one NPI:   NP-CASE one-CL-MO
     Type II one NPI:  one-CL-GEN NP-MO
     Type III one NPI: NP one-CL(*-MO)

In the examples presented above, the three types of one NPIs are translated in the same way. However, as briefly mentioned in section 1, there are meaning differences among the three. Consider first the following scenario: Alan saw no dogs at the park, although he saw other animals. Type I one NPI in (8a) is felicitous in this context, but Type II in (9a) and Type III in (10a) are odd. Type II / III one NPIs are felicitous when Alan saw no dogs as well as no other animals/people. Type I is also felicitous in this context, although, as shown in the first context, it is felicitous even when Alan saw something other than dogs. Note that for Type II and III to be felicitous in the second scenario, it is crucial that the park is popular with dogs; if you were to encounter anything/anyone at the park, that would be a dog.

This observation is corroborated by the contrast between (12) and (13). There is a general understanding that bread is a typical food that we often eat, whereas steaks are not. Given this, Type II and III one NPIs are compatible with bread, but not with steaks. When Type II / III one NPIs are used with bread, as in (12), we obtain the reading that Alan ate nothing, not just no bread. In contrast, Type I one NPI is compatible both with bread and steaks, and yields the interpretation that Alan didn't eat any bread/steaks, although he might have eaten something else.

(12) Type I:    Alan-wa pan-o iti-mai-mo tab-enakat-ta.
                Alan-TOP bread-ACC one-CL-MO eat-NEG-PAST
                '(lit.) Alan didn't eat one slice of bread.' = Alan ate no bread
     Type II:   Alan-wa iti-mai-no pan-mo tab-enakat-ta.
                Alan-TOP one-CL-GEN bread-MO eat-NEG-PAST
                '(lit.) Alan didn't eat one slice of bread.' = Alan ate nothing
     Type III:  Alan-wa pan iti-mai(*-mo) tab-enakat-ta.
                Alan-TOP bread one-CL(-MO) eat-NEG-PAST
                '(lit.) Alan didn't eat one slice of bread.' = Alan ate nothing

(13) Type I:    Alan-wa suteeki-o iti-mai-mo tab-enakat-ta.
                Alan-TOP steak-ACC one-CL-MO eat-NEG-PAST
                '(lit.) Alan didn't eat one steak.' = Alan ate no steak
     Type II: ??Alan-wa iti-mai-no suteeki-mo tab-enakat-ta.
                Alan-TOP one-CL-GEN steak-MO eat-NEG-PAST
     Type III:??Alan-wa suteeki iti-mai(*-mo) tab-enakat-ta.
                Alan-TOP steak one-CL(-MO) eat-NEG-PAST

So far, two observations have been made regarding the three types of one NPIs in negative contexts. First, for Type II / III one NPIs to be felicitous, they must occur with an NP that is the most plausible or typical in the relevant context (e.g., dogs as the most plausible animal to be seen at the park, bread as the most typical food to be eaten). Type I one NPI is not subject to such a restriction. Second, Type II / III one NPIs yield the 'not … anything/anybody' reading when they are felicitously used. In contrast, Type I one NPI yields the 'not … any NP(s)' reading, just as English NPI any NP(s) does, regardless of the semantic content of the NP. The interpretations and schemas of the three types of one NPIs are summarized in (14).

(14) Type I one NPI:   NP-CASE one-CL-MO … NEG     'not … any NP(s)'
     Type II one NPI:  one-CL-GEN NP-MO … NEG      'not … anything/anybody'
     Type III one NPI: NP one-CL(*-MO) … NEG       'not … anything/anybody'

In the following, I argue that the two semantic differences between Type I and Type II / III are explained by treating -mo as a focus particle corresponding to English even and by assuming that -mo associates with one in Type I (à la Lahiri 1998) or with one NP in Type II / III. Before presenting the analysis, I first introduce the semantics of even and Lahiri’s (1998) analysis of Hindi NPIs that makes crucial use of even.

3. Even and Negative Polarity Items

Section 3.1 summarizes the semantics of the focus particle even, and section 3.2 presents Lahiri's (1998) analysis of Hindi NPIs that are composed of one and bhii 'also, even'.

3.1. The Semantics of Even

It has been claimed that particles such as even and only contribute to the meaning of the sentence and that their meaning contribution is affected by the location of focus. This phenomenon, known as association with focus (Rooth 1985), can be observed in (15). [ ]F indicates the element with focal accent.

(15) a. Alan even introduced [Bill]F to Colin.
     b. Alan even introduced Bill to [Colin]F.

Although (15a) and (15b) have the same truth conditions, they introduce different presuppositions (or conventional implicatures): the former presupposes that Bill is an unlikely person for Alan to introduce to Colin, while the latter presupposes that Colin is an unlikely person for Alan to introduce Bill to. The fact that the meaning of English even is sensitive to the placement of focal stress suggests that focus needs to be expressed at the level of semantic representation. In the framework of Rooth's (1985, 1992) alternative semantics, the role of focus is to identify the domain of quantification C for even. Assuming that even is a sentential operator, even in (15) combines with C and the proposition p 'that Alan introduced Bill to Colin', as in (16). In the following, I refer to the proposition p that even combines with as the target proposition.


The value for C is a subset of the set of propositions obtained by replacing the focused element with elements of the same type. (15a) and (15b) differ in the location of focus, and thus their values for C are different, as shown in (17).

(16) a. LF of (15a): even C [ Alan introduced [Bill]F to Colin ]
     b. LF of (15b): even C [ Alan introduced Bill to [Colin]F ]

(17) [[even]]w(C)(p), where p = λw. introduce(a,b,c,w) and
     a. (for (15a)) C ⊆ {q: ∃x[q = λw. introduce(a,x,c,w)]}
        E.g. C = {that Alan introduced Bill to Colin, that Alan introduced David to Colin, that Alan introduced Eric to Colin, ...}
     b. (for (15b)) C ⊆ {q: ∃x[q = λw. introduce(a,b,x,w)]}
        E.g. C = {that Alan introduced Bill to Colin, that Alan introduced Bill to David, that Alan introduced Bill to Eric, ...}

Turning now to the semantics of even, it has been argued that even has no truth-conditional contribution, but it introduces the scalar presupposition (ScalarP) that p is the least likely proposition among the alternatives in C, as in (18) (Karttunen and Peters 1979).5,6 The sentences in (15) both assert that Alan introduced Bill to Colin and presuppose that 'that Alan introduced Bill to Colin' is the least likely proposition among the alternatives in C. Since (15a) and (15b) have different values for C, as shown in (17), they end up having different ScalarPs: (15a) presupposes that Bill is the least likely person for Alan to introduce to Colin, and (15b) presupposes that Colin is the least likely person for Alan to introduce Bill to.

(18) ScalarP: ∀q∈C [q ≠ p → q >likely p]

Note that it is controversial whether the target proposition is the least likely among the alternatives (Fauconnier 1975a, b, Karttunen and Peters 1979) or less likely than most (or some) alternative propositions (Kay 1990, Francescotti 1995), and whether the scale involved in the ScalarP of even should be defined in terms of likelihood (Fauconnier 1975a, b, Kay 1990, Herburger 2000, Giannakidou 2007). I come back to these questions in section 4.4. Meanwhile, I simply assume that the target proposition must be the least likely one and that the scale is based on likelihood. When even appears in negative contexts, we obtain a different scalar presupposition. For instance, (19) presupposes that Bill is the most likely person for Alan to introduce to Colin. Karttunen and Peters (1979) argue that this is because even takes scope over negation, yielding the LF representation in (20a) (see also Wilkinson 1996, Guerzoni 2003, Nakanishi 2008b). Then the target proposition includes negation, as in (20b).

5 Karttunen and Peters (1979) use the term conventional implicature rather than presupposition. However, they consistently use the former term to refer to implications that correspond to those referred to as presuppositions in the recent literature.

6 Besides the ScalarP, even is claimed to introduce the existential presupposition in (i). For instance, in (15a), (i) yields the presupposition that there is some proposition other than 'that Alan introduced Bill to Colin' in C (see (17a)) that is true. Since this presupposition is not crucial to the main arguments of this paper, I leave it aside throughout the paper.
   (i) ∃q∈C [q ≠ p ∧ q(w)=1]


Even in (19) then introduces the ScalarP in (20c) that the target proposition 'that Alan didn't introduce Bill to Colin' is the least likely proposition among the alternatives in C, or equivalently, 'that Alan introduced Bill to Colin' is the most likely among the alternatives. In this way, when even takes scope over negation, the ScalarP gets reversed by negation.

(19) Alan didn't even introduce [Bill]F to Colin.

(20) a. LF: even C [ not [ Alan introduced [Bill]F to Colin ] ]
     b. [[even]]w(C)(p), where p = λw. ¬introduce(a,b,c,w) and C ⊆ {q: ∃x[q = λw. ¬introduce(a,x,c,w)]}
        E.g. C = {that Alan didn't introduce Bill to Colin, that Alan didn't introduce David to Colin, that Alan didn't introduce Eric to Colin, ...}
     c. ∀q∈C [q ≠ λw. ¬introduce(a,b,c,w) → q >likely λw. ¬introduce(a,b,c,w)]

Alternatively, Rooth (1985) argues that there are two lexical entries for even, the regular even that comes with the least-likely scalar presupposition in (18) and the NPI even that comes with the most-likely presupposition, where the likelihood scale is reversed (see also von Stechow 1991, Rullmann 1997, Herburger 2003, Giannakidou 2007). Lahiri's (1998) analysis of Hindi NPIs, which we make substantial use of in the following, is based on the scope theory rather than the lexical theory (see Lahiri 1998:85). In essence, Lahiri (1998) is 'interested in deriving the distribution of Hindi NPIs from independent properties of bhii' (p.85). Bhii corresponds to English even and appears in NPI expressions in Hindi (see (21) below). If the lexical theory were adopted, the properties of Hindi NPIs would be reduced to the properties of the NPI bhii. However, this would leave the following question unanswered: what are the properties of the NPI bhii or even? In this paper, following Lahiri (1998), I put aside the question of which theory is more appropriate for the semantics of even, and simply adopt Karttunen and Peters' (1979) scope theory (see, for example, Wilkinson 1996, Rullmann 1997, Guerzoni 2003, Giannakidou 2007, Nakanishi 2008b for a comparison between the two theories).
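Before turning to Lahiri's analysis, the scale reversal in (19)/(20) can be made fully explicit. The biconditional below is my own spelling-out of the step, under the standard auxiliary assumption that negation reverses the likelihood ordering (not-s is more likely than not-r just in case r is more likely than s):

\[
\forall q \in C \,\big[\, q \neq \lambda w.\,\neg\mathrm{introduce}(a,b,c,w) \rightarrow q >_{\mathrm{likely}} \lambda w.\,\neg\mathrm{introduce}(a,b,c,w) \,\big]
\;\Longleftrightarrow\;
\forall x \,\big[\, x \neq b \rightarrow \lambda w.\,\mathrm{introduce}(a,b,c,w) >_{\mathrm{likely}} \lambda w.\,\mathrm{introduce}(a,x,c,w) \,\big]
\]

In other words, (20c) amounts to the presupposition that Bill is the most likely person for Alan to introduce to Colin, as stated in the text.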

3.2. Lahiri's (1998) Analysis of NPIs in Hindi

Lahiri (1998) argues that the semantics of even discussed above plays a crucial role in accounting for the distribution of NPIs in Hindi. Hindi has an NPI that is composed of the predicate one and an emphatic particle bhii, as in (21).7

(21) a.  maiN-ne ek bhii aadmii-ko nahiiN dekhaa          (Lahiri 1998:61)
         I-ERG one EMP man not saw
         'I didn't see any man/men.'
     b. *maiN-ne ek bhii aadmii-ko dekhaa
         I-ERG one EMP man saw
         'I saw any man.'

Lahiri shows that Hindi bhii means something analogous to English also, and behaves like English even in focus-affected contexts in that it introduces the ScalarP in (18).

7 Besides the NPI formed by ek 'one' and bhii, Lahiri (1998) also discusses the NPI formed by some (Hindi koii) and even (Hindi bhii). The NPI of this form roughly corresponds to the Japanese NPI formed by an indeterminate pronoun and -mo (see (2)) (Nakanishi 2007).


For example, when raam is not focused, (22) presupposes that there is someone besides Ram who came, which is the same presupposition as the one obtainable with English also. When raam is focused, an additional presupposition emerges, that is, Ram is the least likely person to come.

(22) raam bhii aayaa          (Lahiri 1998:59)
     Ram EMP came

Lahiri further claims that 'since NPIs in Hindi are focused, bhii in these contexts simply corresponds to the English even' (p.59). More specifically, bhii in NPIs associates with the cardinality predicate ek 'one', where alternatives are other cardinality predicates (two, three, etc.). Assuming that bhii is a sentential operator, the LF and the denotation of bhii in (21b) are given in (23a) and (23b), respectively. Bhii introduces the ScalarP in (23c) that the target proposition 'that I saw one man' is the least likely proposition among the alternatives in C.

(23) a. LF: even C [ I saw [one]F ]
     b. [[bhii]]w(C)(p), where p = λw. ∃x[one(x) ∧ see(I,x,w)] and C ⊆ {q: ∃P[q = λw. ∃x[P(x) ∧ see(I,x,w)]]}
        C = {that I saw one man, that I saw two men, … , that I saw n men}
     c. ∀q∈C [q ≠ λw. ∃x[one(x) ∧ see(I,x,w)] → q >likely λw. ∃x[one(x) ∧ see(I,x,w)]]

Lahiri points out that the ScalarP in (23c) is inconsistent with the meaning of one: 'one is the weakest possible predicate, and true of everything that exists' (p.87). For instance, if 'I saw five men' is true, then 'I saw one man' must be true, and if I saw three men, I must have seen one man. In this way, as in (24a), the proposition with one is always entailed by the propositions with other cardinality predicates. Lahiri claims that from (24a) it follows that the proposition with one is the most likely, as in (24b) (cf. Chierchia 2004:77, "… being stronger entails being less likely").8 (24b) contradicts (23c), and this contradiction accounts for why (21b) is infelicitous.

(24) a. ∃x[P(x) ∧ see(I,x,w)] → ∃x[one(x) ∧ see(I,x,w)]
     b. λw. ∃x[one(x) ∧ see(I,x,w)] ≥likely λw. ∃x[P(x) ∧ see(I,x,w)]

Lahiri's analysis naturally accounts for why ek bhii is licensed in negative contexts, as in (21a). Adopting Karttunen and Peters' (1979) scope theory of even summarized in section 3.1, bhii scopes over negation and thus combines with the negative proposition 'that I didn't see one man', as in (25a). The denotation of bhii and the ScalarP are provided in (25b) and (25c), respectively.

(25) a. LF: even C [ not [ I saw [one]F ] ]
     b. [[bhii]]w(C)(p), where p = λw. ¬∃x[one(x) ∧ see(I,x,w)] and C ⊆ {q: ∃P[q = λw. ¬∃x[P(x) ∧ see(I,x,w)]]}
        C = {that I didn't see one man, that I didn't see two men, … , that I didn't see n men}
     c. ∀q∈C [q ≠ λw. ¬∃x[one(x) ∧ see(I,x,w)] → q >likely λw. ¬∃x[one(x) ∧ see(I,x,w)]]

8 See Guerzoni (2003) for cases where the fact that p entails q is not a sufficient condition for p being less likely than q.


The ScalarP in (25c) is consistent with the meaning of one. We obtain (26a) from (24a) by the law of contraposition, and (26b) follows from (26a). (26b) does not contradict (25c), and thus (21a) is felicitous.

(26) a. ¬∃x[one(x) ∧ see(I,x,w)] → ¬∃x[P(x) ∧ see(I,x,w)]
     b. λw. ¬∃x[P(x) ∧ see(I,x,w)] ≥likely λw. ¬∃x[one(x) ∧ see(I,x,w)]

Lahiri further extends the present compositional analysis to other contexts where NPIs are licensed and also to the contexts where free choice items are licensed. I return to this point in section 5 below. In sum, Lahiri accounts for why the combination of the weakest predicate (i.e., one) and bhii 'even' behaves as an NPI by examining the meaning of the weakest predicate and bhii. Bhii is a focus particle that associates with one, and this leads to a semantic contradiction in positive, but not in negative contexts (see Lee and Horn 1994 and Krifka 1995 for a similar analysis).
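As an informal recap of the reasoning in this subsection (my own juxtaposition, not Lahiri's formulation), let p be 'that I saw one man' and p′ be 'that I didn't see one man':

\[
\text{positive: } \forall q \in C\,[\,q \neq p \rightarrow q >_{\mathrm{likely}} p\,] \ \text{(23c)} \quad\text{contradicts}\quad p \geq_{\mathrm{likely}} q \ \text{for every } q \in C \ \text{(24b)}
\]
\[
\text{negative: } \forall q \in C\,[\,q \neq p' \rightarrow q >_{\mathrm{likely}} p'\,] \ \text{(25c)} \quad\text{is consistent with}\quad q \geq_{\mathrm{likely}} p' \ \text{for every } q \in C \ \text{(26b)}
\]

The contradiction in the positive case is what rules out (21b); its absence in the negative case is what licenses (21a).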

4. The Semantics of Japanese One NPIs

I now return to the Japanese data and apply Lahiri's compositional analysis. In section 4.1, I make the following two claims: -mo in one NPIs can be analyzed as even, and all types of one NPIs come with even, overt or covert. Then in section 4.2, I argue that the location of focus is different in the three types of one NPIs and that this difference accounts for the semantic differences among the three discussed in section 2.

4.1. -Mo as the Focus Particle Even

The Japanese emphatic particle -mo is analogous to Hindi bhii in its interpretation. Just like the Hindi example (22), (27) presupposes that Alan saw someone besides Bill when there is no emphatic focus on Bill. When Bill is stressed, an additional presupposition arises, namely, that Bill is an unlikely person for Alan to see.

(27) Alan-wa Bill-mo mi-ta.
     Alan-TOP Bill-MO see-PAST

Although -mo (and also Hindi bhii) differs from even in that its primary meaning is additivity and that scalarity often seems to be secondary, there are cases where the 'even' interpretation of -mo is readily available. For instance, -mo in (28) evokes scalarity with or without prosodic emphasis on the phrase that -mo is attached to. Note that the scalar interpretation at issue is unavailable in corresponding sentences with genuine additive particles like also.

(28) a. Saru-mo ki-kara otiru.
        monkey-MO tree-from fall
        'Even a monkey falls from a tree.'
     b. Sono utyuusen-de tuki-ni-mo ik-eru.
        that spaceship-by moon-to-MO go-can
        '(We) can even go to the moon by that spaceship.'
     c. Sono kitte-wa senmonten-ni-mo utte-nakat-ta.
        that stamp-TOP specialty shop-at-MO sell-NEG-PAST
        'That stamp wasn't sold even at the specialty shop.'

When -mo attaches to a phrase that expresses some quantity, a scalar interpretation is always present, with or without prosodic emphasis (Numata 1992). More specifically, the relevant quantity is considered to be large in positive contexts, while ambiguity between large and small readings arises in negative contexts (see Nakanishi 2008b for details). For instance, the positive sentence in (29) can be used in the following scenario: Alan invited five people last week, and this week he invited as many as ten people. In this scenario, the speaker presents ten as a large number. Thus, -mo attached to a numeral must be considered a scalar particle like even, not simply an additive particle like also.

(29) Alan-ga [zyuu-nin]F-mo syootaisi-ta.
     Alan-NOM [ten-CL]-MO invite-PAST
     'Alan even invited ten people.'

Before turning to -mo in one NPIs, let us point out a syntactic difference between English even and Japanese -mo. While even can attach to a VP, as in (30a), Japanese -mo must attach to an NP or a PP even when the relevant focus site is a VP (cf. Aoyagi 1994). For instance, -mo in (30b) attaches to the NP syohyoo 'review', but the alternatives obtain by replacing the VP with elements of the same type (e.g., C = {Alan read a paper, Alan wrote a review, Alan did a presentation}).9 In the corresponding English sentence in (30a), the focus feature on a review can be inherited by the VP write a review (Selkirk 1984). It has been argued that a focused element must be c-commanded by a focus particle (Jackendoff 1972). In (30a), even does c-command the VP. In Japanese, for -mo to be able to c-command the VP, we need to assume that -mo is attached to the VP or somewhere even higher. If we were to assume that -mo is directly attached to the NP syohyoo 'review', we could not account for the intended reading where -mo associates with the VP. What we obtain instead is a set of alternatives such as the following: {Alan wrote a review, Alan wrote a paper, Alan wrote a letter, ...}. Thus, I assume that -mo attached to an NP (or a PP) on the surface is attached to a higher node at LF, namely, to the entire proposition, just like English even. That is, -mo, as well as even, is a sentential operator. For example, (30a) and the second sentence in (30b) have the same LF, which is given in (30c).

(30) a. Alan even wrote [a review]F.
     b. Alan-wa ronbun-o yon-da. Sosite [syohyoo]F-mo kai-ta.
        Alan-TOP paper-ACC read-PAST then [review]-MO write-PAST
        'Alan read a paper. Then he even wrote [a review]F.'
     c. LF: even C [ Alan wrote [a review]F ]

9 Alternatively, -mo may attach to a nominalized verb kaki 'writing', as in (i). In this example, the nominalized verb and its internal argument syohyoo 'review' are considered to form a constituent because they must be extraposed together, as shown in (ii). Thus, -mo associates with writing a review rather than with writing by itself, and the alternative propositions obtain by replacing writing a review with elements of the same type (e.g., reading a paper).
   (i) Alan-wa ronbun-o yon-da. Sosite syohyoo-o kaki-mo si-ta.
       Alan-TOP paper-ACC read-PAST then review-ACC writing-MO do-PAST
       '(lit.) Alan read a paper. Then he even did [writing a review]F.'
   (ii) a.  [Syohyoo-o kaki]F-mo Alan-wa si-ta.
            [review-ACC writing]-MO Alan-TOP do-PAST
            'Even [writing a review]F, Alan did.'
        b. *[Kaki]F-mo Alan-wa syohyoo-o si-ta.
            [writing]-MO Alan-TOP review-ACC do-PAST

In sum, -mo corresponds to English even when it attaches to a focused element, as in (27) and (30), or to a numeral, as in (29). In some cases, the 'even' interpretation is available without these conditions, as in (28). Moreover, -mo is considered to be a sentential operator.

4.2. -Mo in One NPIs

In this section, I examine the nature of -mo in one NPIs. The following two questions are addressed: can -mo in one NPIs be treated as even, and if the answer is positive, what does -mo associate with? Recall now that numerals in Japanese can be categorized into three types, as in (31) (same as (5) except for the brackets).

(31) Type I:   Alan-wa kooen-de inu-o ip-piki mi-ta.
               Alan-TOP park-at dog-ACC one-CL see-PAST
               'Alan saw one dog at the park.'
     Type II:  Alan-wa kooen-de [ip-piki-no inu]-o mi-ta.
               Alan-TOP park-at one-CL-GEN dog-ACC see-PAST
     Type III: Alan-wa kooen-de [inu ip-piki]-o mi-ta.
               Alan-TOP park-at dog one-CL-ACC see-PAST

It has been widely accepted that the numeral and the quantified NP form a nominal constituent in Type II and III, as indicated by the brackets, while the relation between the numeral and the quantified NP in Type I is much more controversial (see Watanabe 2006, Nakanishi 2008c, among others). Traditionally, the numeral in Type I in (31) is called a floating quantifier (FQ) in that it can appear away from the associated NP. More specifically, while some elements such as adverbs and PPs can intervene between the NP and the FQ (or the numeral in Type I, in our terms), such a configuration leads to ungrammaticality in the case of numerals in Type II and III, as in (32). Moreover, the FQ in Type I in (31) may scramble to any location in the sentence (except for the sentence-final position, which is reserved for a verb), whereas such scrambling is impossible in Type II and III, as in (33) (Miyagawa 1989).

(32) Type I:    Alan-wa inu-o kooen-de ip-piki mi-ta.
                Alan-TOP dog-ACC park-at one-CL see-PAST
                'Alan saw one dog at the park.'
     Type II:  *Alan-wa ip-piki-no kooen-de inu-o mi-ta.
                Alan-TOP one-CL-GEN park-at dog-ACC see-PAST
     Type III: *Alan-wa inu kooen-de ip-piki-o mi-ta.
                Alan-TOP dog park-at one-CL-ACC see-PAST

(33) Type I:    Ip-piki Alan-wa kooen-de inu-o mi-ta.
                one-CL Alan-TOP park-at dog-ACC see-PAST
                'Alan saw one dog at the park.'
     Type II:  *Ip-piki-no Alan-wa kooen-de inu-o mi-ta.
                one-CL-GEN Alan-TOP park-at dog-ACC see-PAST
     Type III: *Ip-piki-o Alan-wa kooen-de inu mi-ta.
                one-CL-ACC Alan-TOP park-at dog see-PAST

The same syntactic differences can be observed with the three types of one NPIs in (34). For instance, while the numeral one in Type I and the quantified NP can have an intervening element, such a configuration is impossible in Type II and III, as shown in (35).

(34) Type I:   Alan-wa kooen-de inu-o ip-piki-mo mi-nakat-ta.
               Alan-TOP park-at dog-ACC one-CL-MO see-NEG-PAST
               'Alan didn't see any dog(s) at the park.'
     Type II:  Alan-wa kooen-de [ip-piki-no inu]-mo mi-nakat-ta.
               Alan-TOP park-at one-CL-GEN dog-MO see-NEG-PAST
     Type III: Alan-wa kooen-de [inu ip-piki](*-mo) mi-nakat-ta.
               Alan-TOP park-at dog one-CL(-MO) see-NEG-PAST

(35) Type I:    Alan-wa inu-o kooen-de ip-piki-mo mi-nakat-ta.
                Alan-TOP dog-ACC park-at one-CL-MO see-NEG-PAST
                'Alan didn't see any dog(s) at the park.'
     Type II:  *Alan-wa ip-piki-no kooen-de inu-mo mi-nakat-ta.
                Alan-TOP one-CL-GEN park-at dog-MO see-NEG-PAST
     Type III: *Alan-wa inu kooen-de ip-piki mi-nakat-ta.
                Alan-TOP dog park-at one-CL see-NEG-PAST

This shows that there is a structural difference between Type I on the one hand and Type II and III on the other. In particular, the numeral and the quantified NP are in the same nominal phrase in Type II and III, but not in Type I (at least on the surface). Recall now that the particle -mo must attach to an NP or a PP, as discussed in section 4.1. This suggests that -mo attaches to the numeral one in Type I, and it attaches to the complex NP formed by one and the quantified NP in Type II. Type III is a special case where the overt presence of -mo is prohibited. In the following, I examine each type of one NPI in more detail and show that all three types involve even.

4.2.1. Type I One NPI

We have seen that -mo in Type I attaches to the numeral one, but not to the complex expression formed by one and the quantified NP. The question now is the nature of -mo: does -mo in this configuration correspond to English even? There are several reasons to believe that the answer to this question is positive. First, as shown in (29), -mo attached to a numeral always yields a scalar interpretation. In Type I one NPIs, -mo directly attaches to the numeral one, which yields the same configuration as (29). Second, recall that Hindi bhii is semantically analogous to Japanese -mo in that it is ambiguous between also and even. Furthermore, both emphatic particles may attach to the predicate one, and the resulting complex expression serves as an NPI. These crosslinguistic similarities suggest that Lahiri's (1998) analysis of bhii presented in section 3.2 should extend to -mo. More specifically, if bhii corresponds to even, it is reasonable to assume that -mo also corresponds to even. Third, the prosodic pattern of one preceding -mo in NPIs differs from that of one in other contexts. Compare the NPI examples in (36) with the non-NPI examples in (37).

(36) a.  Alan-wa otokonoko-o hito-ri-mo mi-nakat-ta.     (= (3))
         Alan-TOP boy-ACC one-CL-MO see-NEG-PAST
         'Alan didn't see any boy(s).'
     b. *Alan-wa otokonoko-o hito-ri-mo mi-ta.
         Alan-TOP boy-ACC one-CL-MO see-PAST
         'Alan saw any boy(s).'

(37) a.  Alan-wa otokonoko-o hito-ri mi-nakat-ta.        (cf. (7a))
         Alan-TOP boy-ACC one-CL see-NEG-PAST
         'Alan didn't see one boy.'
     b.  Alan-wa otokonoko-o hito-ri mi-ta.              (cf. (5a))
         Alan-TOP boy-ACC one-CL see-PAST
         'Alan saw one boy.'

Japanese is a pitch-accent language that has high and low tones, and accents are realized as a falling tone (i.e., a high-low sequence). Hito-ri 'one-CL' is generally accented in that there is a fall between to and -ri. Hito-ri in (37) has this accent pattern. In contrast, when hito-ri is followed by -mo and behaves as an NPI, as in (36), the standard accent pattern is lost. In particular, hito-ri in (36) is unaccented. Recall that -mo introduces a scalar interpretation when it attaches to an element with a prosodic emphasis (see section 4.1). The special prosodic pattern on one described above may be a way of signaling that -mo requires a focus-sensitive interpretation, namely, the even interpretation. Taking a step further, we may argue that the special prosodic pattern signals or emphasizes the fact that one is the weakest possible predicate (or at the absolute bottom of the scale) (Nakanishi 2008a). In the examples discussed so far, the cardinality predicate one is the weakest in its own meaning because the proposition with one is logically entailed by the propositions with other cardinality predicates in positive contexts, as shown in (24a). There cannot be any proposition that 'that I saw one boy' entails. However, one is not necessarily the weakest predicate in examples such as (38a). (38a) is naturally uttered in a context such as the following: Alan does not drink much, and so having half a glass of wine is usually enough for him. However, he was in a good mood and so drank as much as a glass of wine.

(38) a. Alan-wa wain-o ip-pai-mo non-da.
        Alan-TOP wine-ACC one-glass-MO drink-PAST
        '(lit.) Alan even drank one glass of wine.'
     b. ScalarP: 'that Alan drank one glass of wine' is the least likely among the alternatives 'that Alan drank n glasses of wine'

In this sentence, -mo introduces the ScalarP in (38b). This presupposition is met when the alternatives include amounts smaller than one glass (e.g., half a glass, one third of a glass, etc.). For instance, 'that Alan drank one glass' logically entails 'that Alan drank half a glass', thus the former is less likely than the latter (assuming that if p entails q, p is less likely than q: see section 3.2). In this way, to satisfy the ScalarP of -mo in (38), it is crucial to assume that Alan may drink a smaller amount than one glass of wine, and thus one is not the weakest predicate. Interestingly, this example is judged to be infelicitous when ip-pai has the special unaccented pattern.


This is explained if we assume that the special pattern on one signals that one is the weakest predicate.10 One is considered to be the weakest when the proposition with one is entailed by the propositions with other quantities, which makes the proposition with one the most likely. This is inconsistent with the ScalarP in (38b), and so (38a) is correctly predicted to be infelicitous when ip-pai is unaccented. Based on these observations, I assume that -mo in Type I one NPIs corresponds to English even and that -mo in this configuration associates with the predicate one.

4.2.2. Type II One NPI

-Mo in Type II one NPIs differs from -mo in Type I one NPIs in that the former combines with one NP, while the latter combines only with one. Regardless of this configurational difference, I argue that -mo in both types should be treated as even. A piece of evidence comes from the prosodic pattern of one: the prosodic difference observed between (36) and (37) also holds between Type II one NPIs in (39) and Type II numerals in (40). More specifically, the standard accented pattern on hito-ri 'one-CL' is found in (40), but not in (39). The special unaccented pattern must be used in (39). Just as in Type I one NPIs, the fact that -mo in Type II one NPIs affects the prosodic pattern of the expression it associates with may indicate that -mo in these contexts is focus-sensitive. Since -mo used in focus-sensitive contexts corresponds to even, as shown in section 4.1, it is plausible to assume that -mo in Type II one NPIs, being used in a focus-affected context, corresponds to even. The argument here needs to be taken with caution, however, because the prosodic effect reported here seems to be limited to predicates of a minimal amount (e.g., one, a little) and indeterminates (see section 4.2.4 below). When -mo is attached to a regular NP, there is no change in pitch accent, and the prominence is simply expressed by loudness and/or pitch height.

(39) a.  Alan-wa [hito-ri-no otokonoko]-mo mi-nakat-ta.     (= (4))
         Alan-TOP one-CL-GEN boy-MO see-NEG-PAST
         'Alan didn't see any boy(s).'
     b. *Alan-wa [hito-ri-no otokonoko]-mo mi-ta.
         Alan-TOP one-CL-GEN boy-MO see-PAST
         'Alan saw any boy(s).'

(40) a.  Alan-wa [hito-ri-no otokonoko]-o mi-nakat-ta.      (cf. (7b))
         Alan-TOP one-CL-GEN boy-ACC see-NEG-PAST
         'Alan didn't see one boy.'
     b.  Alan-wa [hito-ri-no otokonoko]-o mi-ta.            (cf. (5b))
         Alan-TOP one-CL-GEN boy-ACC see-PAST
         'Alan saw one boy.'

Just as argued for Type I one NPI, we may assume that the special prosodic pattern on one NP indicates that one NP is the weakest or at the absolute bottom of the scale. That is, in positive contexts the proposition with one NP is entailed by all the other alternative propositions, and the opposite holds in negative contexts. This point is discussed in more detail in section 4.3.2.

10 The claim here is somewhat analogous to the claim that English any triggers domain widening in Kadmon and Landman's (1993) sense only if any is stressed. See Krifka (1995) for details.


Supporting evidence for the prosodic argument comes from the fact that (39b) becomes felicitous when hito-ri has the standard accented pattern. In this case, -mo is interpreted as the additive particle also. For example, suppose that the speaker is listing people that Alan saw and that (39b) is preceded by the sentence Alan saw five girls. In this context, (39b) can be felicitously used to convey the 'also' reading that Alan also saw one boy.11

4.2.3. Type III One NPI

Type III one NPI differs from Type I and II one NPIs in that the former is incompatible with the overt presence of -mo. For example, in (41) -mo cannot appear in the post-numeral position, which is the most plausible position for -mo, nor can it appear in any other location. Another difference between the two groups of one NPIs is that, while one has a special prosodic pattern in Type I and II, no prosodic change is observed in Type III. As we have seen above, hito-ri 'one-CL' is generally accented, and in the case of Type III, this pattern holds both in NPI contexts in (41) and non-NPI contexts in (42). Thus, there is no prosodic argument for the existence of even in Type III. However, on different grounds, I claim in the following that Type III one NPI comes with a silent even.

(41) a.  Alan-wa [otokonoko hito-ri](*-mo) mi-nakat-ta.
         Alan-TOP boy one-CL(-MO) see-NEG-PAST
         'Alan didn't see any boy(s).'
     b. *Alan-wa [otokonoko hito-ri](-mo) mi-ta.
         Alan-TOP boy one-CL(-MO) see-PAST
         'Alan saw any boy(s).'

(42) a.  Alan-wa [otokonoko hito-ri]-o mi-nakat-ta.      (cf. (7c))
         Alan-TOP boy one-CL-ACC see-NEG-PAST
         'Alan didn't see one boy.'
     b.  Alan-wa [otokonoko hito-ri]-o mi-ta.            (cf. (5c))
         Alan-TOP boy one-CL-ACC see-PAST
         'Alan saw one boy.'

In terms of semantics, recall that Type II and III one NPIs can be used in the same contexts (see section 2).12 In section 4.3 below, extending Lahiri (1998), I show that the semantics of even plays a crucial role in accounting for the semantics of one NPIs. The presence of even is apparent in Type II one NPI, as discussed in the previous subsection. Although Type III one NPI resists the presence of -mo, it may be possible to argue for a covert even component.

11 Note that there are cases where one is accented in a positive sentence like (39b) and yet the scalar interpretation is triggered by -mo. For instance, (i) is infelicitous when ip-piki is unaccented, but it is felicitous when ip-piki is accented, just like (39b). When acceptable, (i) permits the additive interpretation as well as the scalar interpretation that one lion is an unlikely animal to be seen at the park. I come back to this point in section 4.3.2.
   (i) Alan-wa kooen-de [ip-piki-no laion]-mo mi-ta.
       Alan-TOP park-at one-CL-GEN lion-MO see-PAST
       'Alan also/even saw one lion at the park.'

12 There is at least one semantic difference between Type II and III one NPIs: the latter, but not the former, can have an idiomatic interpretation. See section 6 for some examples.


Positing a silent even is not inconceivable: it has been independently proposed that some NPIs in English, such as the ones in (43), come with a silent even (which may be optionally expressed overtly) (Heim 1984, Guerzoni 2003).

(43) a. Alan didn't (even) lift a finger to help Bill.
     b. Alan didn't (even) have a single bite.

I provide two pieces of evidence to support the assumption that Type III one NPI comes with a silent even. First, although Type III one NPI cannot be followed by -mo, it can appear with -sae, another focus particle that semantically corresponds to English even. (44) shows that, when attached to a focused element, both -mo and -sae can introduce the ScalarP that Bill is the least likely person for Alan to see among the alternatives (see the ScalarP in (18)). (45) shows that Type III one NPI is compatible with -sae, although it resists -mo.

(44) Alan-wa [Bill]F{-mo/-sae} mi-ta.
     Alan-TOP Bill{-MO/-SAE} see-PAST
     'Alan even saw Bill.'

(45) Alan-wa [otokonoko hito-ri]{-sae/*-mo} mi-nakat-ta.
     Alan-TOP boy one-CL{-SAE/-MO} see-NEG-PAST
     'Alan didn't see any boy(s).'

Second, in some limited contexts, Type III one NPI can occur with yet another focus particle, -demo. Just like -mo and -sae, -demo (at least in some contexts) corresponds to English even, as in (46).13 -Demo introduces the ScalarP that John is the least likely person to buy a book. The same ScalarP obtains when -mo or -sae is used instead of -demo.

(46) John-demo hon-o kat-ta.          (Kuroda 1965:82)
     John-DEMO book-ACC buy-PAST
     'Even John bought a book.'

Recall that Type I and Type II one NPIs are formed with -mo when they are licensed in negative sentences. However, in other contexts where NPIs are licensed (e.g., the antecedent of conditionals, the restrictor of universal quantifiers), -demo appears with these NPIs instead of -mo, as in (47) (see Nakanishi 2006 for the comparison between -mo and -demo; see also section 5 below). In the antecedent of conditionals, Type III one NPI must be followed by -demo, just like Type I and II one NPIs, as shown in (48).

(47) Type I:   Pan-o iti-mai-demo tabe-ta-ra okoru-yo.
               bread-ACC one-CL-DEMO eat-PAST-if get.angry-EMP
               '(lit.) If you even eat one slice of bread, I'll get mad.'
               = If you eat any bread, I'll get mad.
     Type II: ?[Iti-mai-no pan]-demo tabe-ta-ra okoru-yo.
               one-CL-GEN bread-DEMO eat-PAST-if get.angry-EMP
               '(lit.) If you even eat one slice of bread, I'll get mad.'
               = If you eat anything, I'll get mad.

(48) Type III: [Pan iti-mai]-demo tabe-ta-ra okoru-yo.
               bread one-CL-DEMO eat-PAST-if get.angry-EMP
               '(lit.) If you even eat one slice of bread, I'll get mad at you.'
               = If you eat anything, I'll get mad.

Based on the two empirical observations presented above, I assume that Type III one NPI comes with a silent even. The fact that -mo cannot overtly appear in Type III one NPI may be considered a morphological discrepancy in Japanese. Indeed, in Korean, a language that has the same three types of one NPIs, the even item -to is able to appear in the NPI that corresponds to Type III one NPI, as shown in (49) (Kyumin Kim, p.c.). Many speakers seem to consider -to in Type III one NPI obligatory, while some speakers can optionally drop it. The same meaning difference between Type I and Type II / III is observed here: the former means that Alan saw no dogs, and the latter two mean that Alan saw nothing at all.

(49) Type I:   Alan-un kangaci-lul han-mali-to {po-ci mos ha-ssta / *po-assta}.
               Alan-TOP dog-ACC one-CL-TO {see-CI not do-PAST / see-PAST}
               'Alan didn't see any dog(s) at the park.'
     Type II:  Alan-un han-mali-uy kangaci-to {po-ci mos ha-ssta / *po-assta}.
               Alan-TOP one-CL-GEN dog-TO {see-CI not do-PAST / see-PAST}
     Type III: Alan-un kangaci han-mali(-to) {po-ci mos ha-ssta / *po-assta}.
               Alan-TOP dog one-CL(-TO) {see-CI not do-PAST / see-PAST}

13 -Demo can be morphologically decomposed into the copular verb -de followed by -mo. However, it is not clear whether this decomposition is necessary to account for examples such as (46). Thus, here I treat -demo as a non-decomposable lexical item, and ignore subtle semantic differences between -mo and -demo. Moreover, I ignore another use of -demo exemplified in (i), where -demo presents tea as an example of a typical drink.
   (i) Otya-demo nomimasu-ka?
       tea-DEMO drink-Q
       'Would you like to have some tea or something?'

The discussion here shows that there is a connection between the presence of -mo and the prosodic pattern. In particular, one retains the special unaccented pattern only if it is followed by non-additive -mo.14 Type III one NPI lacks -mo, and thus one in Type III has the ordinary prosodic pattern. The covert existence of even does not seem to be sufficient to trigger the special intonation of one. This may be related to the fact that in Japanese, having a prosodic stress is not sufficient to place the target proposition at the (near-)end of the scale. In English, examples such as (50a) evoke the presupposition that Albert Schweitzer is the most trustworthy person, and the focused element can be modified by even without changing the meaning (cf. Fauconnier's (1975a, b) quantificational superlatives). The corresponding Japanese example in (50b) requires an overt emphatic particle such as -mo 'also, even' or -sae 'even' to introduce the scalar interpretation. When the focused element is simply followed by the accusative marker -o, no such presupposition is present.

(50) a. Alan would (even) distrust [Albert Schweitzer]F!
     b. Alan-wa [Albert Schweitzer]F{-o / -mo / -sae} sinzi-nai-daroo.
        Alan-TOP Albert Schweitzer{-ACC / -MO / -even} believe-NEG-would

14 The presence of non-additive -mo is a necessary condition for the special prosodic pattern of one, but it is not a sufficient condition. This is because there are cases where the combination of one and non-additive -mo is felicitous without the special prosodic pattern, as shown in (38) and (i) in footnote 11.

4.2.4. Indeterminates and Universal -Mo

As shown in section 1, the combination of the indeterminate dare 'who' and -mo is acceptable in a negative sentence, but not in a positive sentence, as in (51). An indeterminate pronoun is identical to a wh-item, but it does not have an interrogative interpretation inherently (Kuroda 1965). It must associate with an operator and yields various interpretations depending on which operator it associates with (but see Hiraiwa and Nakanishi 2008 for cases where there is no overt licenser).

(51) a.  Dare-mo ko-na-katta.
         who-MO come-NEG-PAST
         '(lit.) Anyone didn't come.' = Nobody came
     b. *Dare-mo ki-ta.
         who-MO come-PAST

The indeterminate dare 'who' is generally accented (there is a fall between da and re). For instance, dare is accented when it is interpreted as a wh-word, as in (52a), or as a universal quantifier, as in (52b). In contrast, when used as an NPI, as in (51a), dare is unaccented. This is the same contrast as the one found between the NPI one + -mo and non-NPI one + -mo (sections 4.2.1 and 4.2.2).

(52) a. Dare-ga kimasi-ta-ka?
        who-NOM come-PAST-Q
        'Who came?'
     b. Dare-mo-ga ki-ta.
        who-MO-NOM come-PAST
        'Everyone came.'

Notice that the universal interpretation of the indeterminate in (52b) obtains with -mo. In the literature, -mo in this use is treated as a universal quantifier that combines with the indeterminate and then with the rest of the sentence (Nishigauchi 1990, von Stechow 1996, Shimoyama 2001, 2006). There are at least three differences between -mo in NPIs and the universal quantifier -mo. First, as shown above, -mo in NPIs triggers a different prosodic pattern from the universal -mo. Second, the universal -mo must be followed by a case marker, while -mo in NPIs must not. Third, while the universal -mo can appear apart from an indeterminate pronoun (Shimoyama 2003, 2006, among others), -mo in NPIs must be adjacent to an indeterminate. Thus, I assume that -mo in NPIs is different in nature from the universal quantifier -mo, and that only the former corresponds to English even.15

15 Shimoyama (2008) argues that -mo in NPIs is the universal quantifier by showing that there are instances of NPIs with -mo that can only be analyzed as wide scope universals rather than narrow scope existentials. For lack of space, I do not discuss her analysis here.


4.2.5. Summary

In this section, I first showed that -mo in Type I / II one NPIs corresponds to English even. I then showed that Type III one NPI is considered to have a covert even component. Thus, all three types of one NPIs come with even, overt or covert, as schematized in (53). I also showed that -mo in one NPIs differs from the universal quantifier -mo.

(53) Type I one NPI:   NP-CASE [one-CL]F-MO
     Type II one NPI:  [one-CL-GEN NP]F-MO
     Type III one NPI: [NP one-CL]F-EVEN

In Lahiri's (1998) analysis summarized in section 3.2, bhii that follows ek 'one' in NPI-licensing contexts is considered to correspond to English even, because 'NPIs in Hindi are focused' (p.59) and bhii in focus-affected contexts means 'even'. However, Lahiri does not discuss in what sense NPIs in Hindi are focused. Japanese has the same complex expression as Hindi NPIs, namely, one or one NP followed by -mo. -Mo is like bhii in that it has the 'even' meaning when it attaches to a focused element. In this section, I showed that the notion of focus is involved in Japanese one NPIs by examining prosodic patterns and alternations with other focus particles.

4.3. Analysis

Having established the claim that all three types of one NPIs include either an overt or covert even, the next step is to examine the source of the semantic differences between Type I and Type II / III one NPIs discussed in section 2. First, the NP in Type II / III one NPIs must express the most plausible element in the relevant context, while no such restriction is observed with Type I one NPI. Second, Type II / III one NPIs mean 'not … anything/anybody' when licensed in negative contexts, whereas Type I one NPI means 'not … any NP(s)'. I argue that these semantic differences are explained by extending Lahiri's (1998) compositional analysis. In particular, the semantics of even and the location of focus that even associates with account for the two semantic properties at issue. In section 4.3.1, I examine Type I one NPI, where Lahiri's analysis of Hindi NPIs directly applies. Then in section 4.3.2, Type II / III one NPIs are examined. These NPIs are different from Type I (as well as from Hindi NPIs) in that their focus site is the entire NP (i.e., one NP) rather than just the cardinality predicate one.

4.3.1. Type I One NPI

In section 4.2.1, I showed that the focus association of Type I one NPI is just like that of Hindi NPIs, that is, the focus particle even (bhii in Hindi and -mo in Japanese) associates with the cardinality predicate one. Thus, Lahiri's (1998) analysis of Hindi NPIs should directly extend to Type I one NPI, repeated in (54) (with the addition of [ ]F).

(54) Type I:
     a.  Alan-wa kooen-de inu-o [ip-piki]F-mo mi-nakat-ta.     (= (8))
         Alan-TOP park-at dog-ACC one-CL-MO see-NEG-PAST
         'Alan didn't see any dog(s) at the park.'
     b. *Alan-wa kooen-de inu-o [ip-piki]F-mo mi-ta.
         Alan-TOP park-at dog-ACC one-CL-MO see-PAST

Alternatives are created by replacing one with other cardinality predicates. In the case of positive contexts, even introduces the ScalarP that the target proposition 'that Alan saw one dog' is the least likely proposition. This presupposition conflicts with the semantics of one, according to which the proposition with one is the most likely (see (24)). As discussed in section 4.2.1, the special unaccented prosodic pattern on one indicates that one is the weakest predicate, hence the most likely. This conflict is resolved in negative contexts. The relevant LF and the alternatives are given in (55a) and (55b), respectively. The ScalarP in this case is that the target proposition 'that Alan didn't see one dog' is the least likely proposition, as in (55c), i.e., 'that Alan saw one dog' is the most likely proposition, due to the scale-reversal property of negation. This presupposition is in harmony with what one means.

(55) a. LF: even C [ not [ Alan saw [one]F dog ] ]
     b. [[even]]w(C)(p), where p = λw. ¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] and C ⊆ {q: ∃P[q = λw. ¬∃x[P(x) ∧ dog(x) ∧ see(a,x,w)]]}
        E.g. C = {that Alan didn't see one dog, that Alan didn't see two dogs, … , that Alan didn't see n dogs}
     c. ∀q∈C [q ≠ λw. ¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] → q >likely λw. ¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]]

Recall that even (or -mo) makes no truth-conditional contribution, thus the truth conditions of the sentence in (54a) are equivalent to those of the sentence without -mo, namely, (56). Note that (56) is ambiguous between the two readings given in (56a) and (56b).16

(56) Alan-wa kooen-de inu-o ip-piki mi-nakat-ta.     (= (7a))
     Alan-TOP park-at dog-ACC one-CL see-NEG-PAST
     'Alan didn't see one dog at the park.'
     a. ¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]
     b. ∃x[one(x) ∧ dog(x) ∧ ¬see(a,x,w)]
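The harmony between (55c) and the meaning of one can be verified by contraposition, exactly parallel to (26); the following is my own spelling-out for the dog case, not an additional assumption.

\[
\neg\exists x[\mathrm{one}(x) \wedge \mathrm{dog}(x) \wedge \mathrm{see}(a,x,w)] \;\rightarrow\; \neg\exists x[P(x) \wedge \mathrm{dog}(x) \wedge \mathrm{see}(a,x,w)]
\]
\[
\Rightarrow\quad \lambda w.\,\neg\exists x[P(x) \wedge \mathrm{dog}(x) \wedge \mathrm{see}(a,x,w)] \;\geq_{\mathrm{likely}}\; \lambda w.\,\neg\exists x[\mathrm{one}(x) \wedge \mathrm{dog}(x) \wedge \mathrm{see}(a,x,w)]
\]

Every alternative in (55b) is thus at least as likely as the target proposition, which is compatible with the ScalarP in (55c).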

Clearly, (54a) lacks the reading in (56b). The question then is why (54a) disallows the LF where the existential-closure over the indefinite one dog is above negation. Even scopes over one because a focus particle must c-command a focused element (Jackendoff 1972: see also section 4.1). Furthermore, under the scope theory of even that we adopt here (see section 3.1), even must scope over negation, which leads to the following two logically possible LF representations: even » ¬ » one and even » one » ¬. While the former LF has no semantic problems, as shown in (55), the latter is problematic. The ScalarP of even says that ‘that there is one dog that Alan didn’t see’ is the least likely among the alternatives of the form ‘that there are n dogs that Alan didn’t see’ (n>1). This ScalarP is never met because having one unseen dog is always entailed by having n unseen dogs (n>1), thus the former is more likely than the latter.17 16

17

Example (56) only has the reading in (56b) if there is a prosodic boundary between the NP dog and the predicate one (cf. Nakanishi 2008c). I assume here that there is no prosodic boundary between the two, and thus (56) is taken to be ambiguous. The proposed analysis would predict that the reading such as (56b) is available when the ScalarP makes sense. This prediction is borne out, as shown in (i). (i) is ambiguous between the two readings described in (56): Alan saw less than five dogs and there are five dogs that Alan didn’t see. See Nakanishi (2008b) for the analysis of examples such as (i).


In Lahiri's (1998) analysis, one is a predicate that is true of everything that exists; thus (56a) has the same truth conditions as (57). Then (54a) asserts that Alan saw no dogs.

(57) ¬∃x[dog(x) ∧ see(a,x,w)]

Even if (56a) cannot be reduced to (57) for some reason (see section 4.3.2 below), we can still account for the truth conditions of (54a) (namely, that Alan saw no dogs). The proposition 'that Alan didn't see one dog' is the strongest among the alternatives of the form 'that Alan didn't see n dogs' in that it logically entails all the other alternative propositions. Thus, when 'Alan didn't see one dog' is true (as asserted by (54a)), all the other propositions are true, that is, 'Alan didn't see n dogs' (n>1) is true as well. Thus, (54a) asserts that Alan saw no dogs.

The present analysis based on Lahiri's is able to account for the properties of Type I one NPI discussed in section 2. First, (54a) is compatible with the scenario where Alan saw no dogs but saw some other animals. Even (-mo) in (54a) associates with the cardinality predicate one, and so the sentence is about the cardinality of dogs. Put differently, it says nothing about other animals. Thus, it is felicitous in the scenario where Alan saw other animals, as long as he saw no dogs. Second, while Type II / III one NPIs must occur with an NP that is the most plausible in the context (e.g., bread vs. steak), no such restriction is observed with Type I one NPI. Any NP can be used in this type because the NP is not included in the focus site of even. This point becomes clear when Type II / III one NPIs are examined, which I turn to next.

4.3.2. Type II / III One NPIs

In both Type II and Type III one NPIs, (overt or covert) even associates with the entire noun phrase one NP, rather than just with the cardinality predicate, as indicated by [ ]F in (58) and (59). Regarding Type II one NPI in (58), we restrict ourselves to the cases where ip-piki has the special unaccented pattern, as discussed in section 4.2.2.

(58) Type II:
     a.  Alan-wa kooen-de [ip-piki-no inu]F-mo mi-nakat-ta.
         Alan-TOP park-at one-CL-GEN dog-MO see-NEG-PAST
         'Alan didn't see any dog(s) at the park.'
     b.* Alan-wa kooen-de [ip-piki-no inu]F-mo mi-ta.
         Alan-TOP park-at one-CL-GEN dog-MO see-PAST                      (= (9))

(59) Type III:
     a.  Alan-wa kooen-de [inu ip-piki]F(-EVEN) mi-nakat-ta.
         Alan-TOP park-at dog one-CL(-EVEN) see-NEG-PAST
         'Alan didn't see any dog(s) at the park.'
     b.* Alan-wa kooen-de [inu ip-piki]F(-EVEN) mi-ta.
         Alan-TOP park-at dog one-CL(-EVEN) see-PAST                      (= (10))

The following observations need to be explained here. First, just like Type I one NPI, Type II / III one NPIs are unacceptable in positive sentences, but acceptable in negative sentences. Second, as discussed in section 2, (58a) and (59a) are incompatible with the scenario where Alan saw no dogs but saw some other animals. Moreover, Type II / III one NPIs must occur with an NP that is the most plausible in the context (e.g., bread vs. steak).

Let us start with the positive sentences in (58b) and (59b). As discussed in the previous subsection, even in Type I one NPI associates with the cardinality predicate one; thus the alternatives involve propositions with different cardinality predicates. In contrast, even in Type II / III one NPIs associates with the entire noun phrase one NP, as shown in the LF structure in (60a). Then the alternatives involve propositions where one dog is replaced with indefinite NPs of the same sort (five dogs, one cat, ten cats, one rabbit, etc.), as shown in (60b). The ScalarP of even in (60c) says that the target proposition 'that Alan saw one dog' is the least likely among the alternatives. Regarding Type II one NPI, recall that the special unaccented pattern on one NP signals that one NP is the weakest predicate, that is, at the absolute bottom of the scale. This is to say that the proposition with one NP is entailed by all the other alternative propositions in positive contexts. Assuming that p is less likely than q if p entails q (see section 3.2), the proposition with one NP is the most likely among the alternatives. This is of course inconsistent with the ScalarP of even in (60c) that 'that Alan saw one dog' is the least likely. Furthermore, the ScalarP cannot be met as long as the alternatives include propositions of the form 'that Alan saw n dogs' (n>1). Since one is the weakest predicate (see (24a) above), 'that Alan saw one dog' can never be less likely than 'that Alan saw n dogs'. As I show shortly below, the alternatives in (60b) include 'cardinality predicate + NP' expressions instead of just one dog. If so, it is implausible to exclude the propositions 'that Alan saw n dogs' (n>1) from the alternatives, although nothing guarantees their existence.

(60) a. LF: even C [ Alan saw [one dog]F ]
     b. [[even]]w(C)(p), where p = λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] and
        C ⊆ {q: ∃P[q = λw.∃x[P(x) ∧ see(a,x,w)]]}
        E.g. C = { that Alan saw one dog, ... , that Alan saw n dogs,
                   that Alan saw one cat, ... , that Alan saw n cats,
                   that Alan saw one rabbit, ... , that Alan saw n rabbits }
     c. ∀q∈C [q ≠ λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] → q >likely λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]]
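The contrast between the alternative sets in (55b) and (60b) comes down to which constituent even associates with. The sketch below only illustrates that point; the lists and function names are made up and are not claimed to be part of the formal analysis.

    # How the focus site determines the alternatives (cf. Rooth 1985).
    NUMERALS = ["one", "two", "three"]
    NOUNS = ["dog", "cat", "rabbit"]

    def alternatives_type1(noun):
        # Type I: even associates with the cardinality predicate only,
        # so only the numeral varies and the noun stays fixed, as in (55b).
        return [f"that Alan saw {n} {noun}(s)" for n in NUMERALS]

    def alternatives_type2_3():
        # Type II/III: even associates with [one NP], so both the numeral
        # and the noun vary, as in (60b).
        return [f"that Alan saw {n} {noun}(s)" for n in NUMERALS for noun in NOUNS]

    print(alternatives_type1("dog"))
    print(alternatives_type2_3())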

Recall that the special unaccented pattern on one in Type I one NPI signals that one is the weakest predicate. It follows that the proposition with one in positive contexts is entailed by all the other alternative propositions with other cardinality predicates. This is true in terms of logical entailment. For instance, 'that Alan saw one dog' is entailed by 'that Alan saw ten dogs'. In contrast, the entailment in Type II and III one NPIs is pragmatic. In Type II and III, when one NP is the weakest, the proposition with one NP in positive contexts must be entailed by all the other alternative propositions with n NP′. For example, 'that Alan saw one dog' must be entailed by 'that Alan saw one cat'. This is not a logical entailment, but merely a pragmatic one: if 'that Alan saw one cat' is true, then 'that Alan saw one dog' must be true, which makes sense when it is more likely to see dogs than to see cats. That is, for this entailment to work, we require a context where dogs are considered to be the most plausible or typical animal to be seen.

It is crucial to assume that one NP is replaced with indefinites of the form n (cardinality predicate) + NP′, as in (60b). In the following, I show why that is the case. Recall that one in Lahiri's (1998) analysis is a predicate that is true of everything that exists, thus (61a) is equivalent to (61b) (see section 4.3.1). If we take this view for (58b) and (59b), the alternatives are propositions that obtain by replacing (a) dog with elements of the same type, say, (a) cat and (a) rabbit, as in (62a). Then even introduces the ScalarP in (62b) that the target proposition 'that Alan saw a dog' is the least likely among the alternatives. There is no reason why this presupposition cannot be met. For instance, suppose that dogs are not allowed in the park, so that it is more common to find a cat or a rabbit than a dog. In this scenario, for Alan to see a dog is the least likely among the alternatives, and thus sentences (58b) and (59b) are predicted to be felicitous. However, this prediction is not borne out: (58b) and (59b) are infelicitous regardless of how we manipulate the context.

(61) a. ∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]
     b. ∃x[dog(x) ∧ see(a,x,w)]

(62) a. [[even]]w(C)(p), where p = λw.∃x[dog(x) ∧ see(a,x,w)] and
        C ⊆ {q: ∃P[q = λw.∃x[P(x) ∧ see(a,x,w)]]}
        E.g. C = {that Alan saw a dog, that Alan saw a cat, that Alan saw a rabbit}
     b. ∀q∈C [q ≠ λw.∃x[dog(x) ∧ see(a,x,w)] → q >likely λw.∃x[dog(x) ∧ see(a,x,w)]]
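The argument around (62) can be replayed with stipulated numbers. In the sketch below, which is illustrative only, the likelihoods encode a context where dogs are banned from the park, and 'one NP' pairs stand in for the bare indefinites of (62a). With such alternatives the ScalarP is satisfiable, wrongly predicting (58b)/(59b) to be acceptable; once the 'n + NP′' alternatives of (60b) are included, it can never be satisfied.

    # Stipulated toy likelihoods of the positive alternatives (an assumption
    # of this sketch, not a claim of the paper).
    lk = {("one", "dog"): 0.05, ("two", "dog"): 0.01,
          ("one", "cat"): 0.60, ("two", "cat"): 0.30,
          ("one", "rabbit"): 0.40, ("two", "rabbit"): 0.20}

    def least_likely(p, alts):
        # ScalarP of even in a positive context: p is the least likely alternative
        return all(lk[p] <= lk[q] for q in alts if q != p)

    noun_only_alts = [("one", n) for n in ("dog", "cat", "rabbit")]   # stands in for (62a)
    full_alts = list(lk)                                              # n + NP', as in (60b)

    print(least_likely(("one", "dog"), noun_only_alts))  # True: wrongly predicts (58b)/(59b) to be fine
    print(least_likely(("one", "dog"), full_alts))       # False: 'one dog' is never less likely than 'two dogs'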

If one in (58b) and (59b) is "meaningless", why would one overtly express it in the first place? Crucially, the sentence without one in (63) is acceptable as long as the ScalarP in (62b) is satisfied. Thus, I assume that the meaning contribution of one cannot simply be ignored in (58b) and (59b) (or in (58a) and (59a), which are examined shortly below), and that the alternatives to one NP must be of the form n (cardinality predicate) + NP′.

(63) Alan-wa kooen-de [inu]F-mo mi-ta.
     Alan-TOP park-at dog-MO see-PAST
     'Alan even saw a dog/dogs at the park.'

Interestingly, when ip-piki 'one-CL' is accented, Type II in (58b) is acceptable in a context where dogs are the least likely animal to be seen (see footnote 11). In this case, there is a prosodic boundary between ip-piki 'one-CL' and inu 'dog'. In contrast, when ip-piki is unaccented, there is no boundary between the two. This difference leads us to the assumption that, while -mo associates with one NP in the unaccented pattern, -mo associates only with the preceding NP in the accented pattern, as shown in (64a). The alternatives and the ScalarP of -mo are given in (64b) and (64c), respectively. This would correctly predict (58b) to be acceptable when 'that Alan saw one dog' is the least likely among the alternatives. Alternatively, we may ignore the meaning of one, since it is outside the focus site and has no strong meaning contribution, assuming that one is the weakest possible predicate. Then we obtain (65), which also predicts (58b) to be felicitous when dogs are the least likely animal to be seen.

(64) a. LF: even C [ Alan saw one [dog]F ]
     b. [[even]]w(C)(p), where p = λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] and
        C ⊆ {q: ∃P[q = λw.∃x[one(x) ∧ P(x) ∧ see(a,x,w)]]}
        E.g. C = {that Alan saw one dog, that Alan saw one cat, that Alan saw one rabbit}
     c. ∀q∈C [q ≠ λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] → q >likely λw.∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]]

(65) a. LF: even C [ Alan saw [dog]F ]
     b. [[even]]w(C)(p), where p = λw.∃x[dog(x) ∧ see(a,x,w)] and
        C ⊆ {q: ∃P[q = λw.∃x[P(x) ∧ see(a,x,w)]]}
        E.g. C = {that Alan saw dogs, that Alan saw cats, that Alan saw rabbits}
     c. ∀q∈C [q ≠ λw.∃x[dog(x) ∧ see(a,x,w)] → q >likely λw.∃x[dog(x) ∧ see(a,x,w)]]

I now turn to the negative sentences in (58a) and (59a). Assuming the scope theory of even, even combines with the negative proposition 'that Alan didn't see one dog', as in (66a), and yields the ScalarP that this proposition is the least likely among the alternatives, as in (66c). As discussed above, the alternatives obtain by replacing one dog with elements of the same type, such as three dogs and two cats. Then the alternative propositions include propositions such as 'that Alan didn't see three dogs' and 'that Alan didn't see two cats', as in (66b). Put differently, the ScalarP of even says that it is more likely for Alan to see one dog than to see three dogs or two cats. This presupposition is met only if dogs are considered to be the most likely animal for Alan to see. This accounts for the intuition that, for (58a) and (59a) to be sensible, we must assume that the park is popular with dogs.

(66) a. LF: even C [ not [ Alan saw [one dog]F ] ]
     b. [[even]]w(C)(p), where p = λw.¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] and
        C ⊆ {q: ∃P[q = λw.¬∃x[P(x) ∧ see(a,x,w)]]}
        E.g. C = { that Alan didn't see one dog, ... , that Alan didn't see n dogs,
                   that Alan didn't see one cat, ... , that Alan didn't see n cats,
                   that Alan didn't see one rabbit, ... , that Alan didn't see n rabbits }
     c. ∀q∈C [q ≠ λw.¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)] → q >likely λw.¬∃x[one(x) ∧ dog(x) ∧ see(a,x,w)]]

The special unaccented pattern on one NP in Type II signals that one NP is the weakest predicate. This means that the negative proposition with one NP is the strongest. For instance, 'that Alan didn't see one dog' entails 'that Alan didn't see n NP′', hence the former is less likely than the latter. This is consistent with the ScalarP of even in (66c).

We further need to account for the fact that (58a) and (59a) mean that Alan saw nothing, not just no dogs. Suppose that p is less likely than q is a sufficient condition for p entails q (Guerzoni 2003:96-97). Here we are talking about pragmatic entailment rather than logical entailment. According to the ScalarP of even, the target proposition 'that Alan didn't see one dog' is the least likely, hence it entails all the other alternative propositions. Examples (58a) and (59a) assert that this proposition is true, and then it follows that all the other propositions are true. More specifically, (58a) and (59a) assert that Alan didn't see one dog, which guarantees that Alan didn't see three dogs, two cats, ten rabbits, etc. Thus (58a) and (59a) end up meaning that Alan saw nothing. This accounts for why (58a) and (59a) are unacceptable when Alan saw no dogs but saw some other animals.

The current analysis is also capable of explaining the contrast between bread and steak discussed in section 2. The relevant examples are repeated in (67).

(67) Type II:
     Alan-wa [iti-mai-no {pan / ??suteeki}]F-mo tab-ena-katta.
     Alan-TOP one-CL-GEN {bread / steak}-MO eat-NEG-PAST
     '(lit.) Alan didn't eat any {bread / steak}.'

     Type III:
     Alan-wa [{pan / ??suteeki} iti-mai]F(-EVEN) tab-ena-katta.
     Alan-TOP {bread / steak} one-CL(-EVEN) eat-NEG-PAST
     '(lit.) Alan didn't eat any {bread / steak}.'

The same analysis as (66) holds for these examples: 'that Alan didn't eat one (slice of) {bread / steak}' is the least likely among the alternatives, or equivalently, for Alan to eat one (slice of) {bread / steak} is the most likely. With one (slice of) bread, the ScalarP is consistent with the meaning of one (the proposition with one is the most likely), and also with our intuition that bread is one of the most typical staples. In contrast, with one steak, the ScalarP clashes with the general assumption that steak is not something that we often eat. This analysis predicts that the sentences in (67) with steak become felicitous when steak is considered to be a typical food for Alan. The prediction is borne out. For instance, (67) with steak can be used to describe the following situation: Alan, who loves steak and eats it every day, was unable to eat anything, perhaps because he was sick.
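The bread/steak contrast and the 'ate nothing at all' inference can be mimicked in the same toy style. In the sketch below, which is again only an illustration, the likelihoods of the positive alternatives are stipulated, and the final step simply encodes the assumption adopted above from Guerzoni (2003) that being less likely suffices for (pragmatically) entailing.

    # Toy model of (66)-(67): stipulated likelihoods of the positive alternatives.
    likelihood = {
        "Alan ate one slice of bread": 0.9,
        "Alan ate two slices of bread": 0.6,
        "Alan ate one steak": 0.2,
        "Alan ate two steaks": 0.05,
    }

    def scalarp_under_negation(prejacent):
        # 'not prejacent' is the least likely negated alternative iff the
        # prejacent itself is the most likely positive alternative.
        return all(likelihood[prejacent] >= v for v in likelihood.values())

    print(scalarp_under_negation("Alan ate one slice of bread"))  # True: (67) with bread is fine
    print(scalarp_under_negation("Alan ate one steak"))           # False: (67) with steak is deviant

    # When the ScalarP holds, 'not prejacent' is less likely than every other
    # negated alternative, hence pragmatically entails each of them: asserting
    # (67) with bread conveys that Alan ate nothing at all.
    if scalarp_under_negation("Alan ate one slice of bread"):
        print(["Alan didn't eat " + p.split("Alan ate ")[1] for p in likelihood])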

4.4. On the Scalar Presupposition of Even18

18 The discussion here is based on the valuable comments from an anonymous reviewer.

In the analysis presented so far, I have been assuming the following ScalarP of even:

(68) ScalarP: ∀q∈C [q ≠ p → q >likely p]                                  (= (18))

According to (68), the target proposition p is the least likely among the alternative propositions. As mentioned in section 3.1, two criticisms have been raised against this definition of the ScalarP. The first concerns the quantificational force of the ScalarP: while some researchers claim that the target proposition is the least likely alternative (Fauconnier 1975a, b, Karttunen and Peters 1979), others argue that it is less likely than most or some particular proposition(s) (Kay 1990, Francescotti 1995). Evidence for the latter view comes from examples such as (69), taken from Kay (1990); the finals are considered to be a less likely alternative than the semi-finals.

(69) Not only did Mary win her first round match, she even made it to [the semi-finals]F.

The second issue concerns the likelihood scale: while some assume that the ScalarP of even is based on likelihood (Karttunen and Peters 1979, Rooth 1985, Wilkinson 1996), others propose alternative notions such as pragmatic entailment (Fauconnier 1975a, b), informativeness (Kay 1990), noteworthiness (Herburger 2000), and a 'flexible' scale (Giannakidou 2007). For example, in (70) Manufacturing Consent need not be an unlikely book for John to read.

(70) John is a political non-conformist. He even read [Manufacturing Consent]F although it has been banned by the censorship committee. (Rullmann 1997:56)

In the following, I examine whether changing the definition of the ScalarP affects the current analysis of one NPIs.

As pointed out by an anonymous reviewer, Lahiri's (1998) analysis (and the analysis of Type I one NPI that adopts it) remains unaffected regardless of which definition we adopt for the ScalarP of even. In his analysis, one is the weakest predicate, which is true of everything that exists. In a positive sentence, the proposition with one is logically entailed by the propositions with other cardinality predicates. Assuming that p entails q is a sufficient condition for p is less likely than q (see section 3.2), the proposition with one is always the most likely among the alternatives. The opposite situation holds in a negative context: the proposition with one (e.g., 'that Alan didn't see one dog') entails the other alternative propositions (i.e., 'that Alan didn't see n dogs' (n>1)), thus it is the least likely among the alternatives. In this way, regardless of whether the definition of even says "the least" or "less" likely, the proposition with one is always predicted to be at the absolute top or bottom of the scale by logical entailment. Moreover, the analysis still works even if we replace "likelihood" with other notions like "informativeness" or "noteworthiness". This is because mere existence expressed by the predicate one is always considered to be at the extreme end of a scale, regardless of whether the scale is based on likelihood, informativeness, or noteworthiness.

The same argument goes through for the analysis of Type II / III one NPIs presented above. In the proposed analysis, the proposition with one NP is considered to be the weakest among the alternatives of the form n NP′. This is marked by a special unaccented pattern in the case of Type II. Let us first examine positive contexts. For example, 'that Alan saw one dog' is the weakest among the alternatives of the form 'that Alan saw n NP′' (for any NP′) when the former is entailed by all the other alternatives. Assuming again that being stronger in terms of entailment means being less likely on the scale, 'that Alan saw one dog' is predicted to be the most likely among the alternatives. This is inconsistent with the ScalarP of even regardless of whether the definition says "the least" or "less" likely. More specifically, no matter whether the ScalarP says that the target proposition 'that Alan saw one dog' is the least likely or merely less likely, the ScalarP contradicts the fact that one NP is the weakest in positive contexts. The opposite holds in negative contexts. If the positive proposition with one NP (e.g., 'that Alan saw one dog') is the weakest, the corresponding negative proposition with one NP (e.g., 'that Alan didn't see one dog') is the strongest, hence the least likely. This is consistent with the ScalarP of even in negative contexts, regardless of whether the definition is "the least" or "less" likely. Furthermore, the scale may be based on notions other than likelihood, as long as strength in terms of entailment is directly mapped to ranking on the scale.

Put differently, the discussion here shows that the proposed analysis does not hinge on the assumption that the ScalarP of -mo makes the target proposition the least likely, rather than less likely than most or some alternative propositions. It is the prosodic emphasis, not the ScalarP of even, that places the target proposition at the extreme end of the scale. In other words, -mo itself may not specify whether the target proposition is the least or merely less likely, but the prosodic emphasis makes the target proposition the least likely (cf. Krifka's (1995) Emph.Assert operator). In the Japanese examples at issue, the prosodic emphasis on one or one NP places the proposition with one or one NP at the absolute bottom of the scale, where there are no weaker propositions.

Before closing this section, I briefly discuss an alternative analysis for Type II / III one NPIs.
In the proposed analysis, even in Type II / III associates with the entire noun phrase one NP rather than just with the cardinality predicate one, and the alternatives are propositions with different indefinites (cardinality predicate + NP′). Alternatively, one might assume that the alternatives involve generalized quantifiers of type ⟨⟨e,t⟩,t⟩ rather than indefinites. For instance, the alternatives in (58a) and (59a) (i.e., Alan didn't see any dogs) might include propositions with quantifiers besides ones with cardinality predicate + NP′: {that Alan saw most dogs, that Alan saw many cats, that Alan saw all rabbits, ...}. However, there is a problem with this alternative analysis. Suppose that there is a context where the set of alternatives is restricted to {that Alan saw one dog, that Alan saw no dog}. As discussed above, the ScalarP of even may simply say that the target proposition is less likely than some alternative proposition (Kay 1990). Then even in (58a) and (59a) evokes the ScalarP that the target proposition 'that Alan saw one dog' is less likely than 'that Alan saw no dog'. This ScalarP is certainly satisfiable, and possibly true in the actual world. For instance, suppose that dogs are prohibited at the park, so that it is more likely for Alan not to see any dog than to find one. This would predict (58a) and (59a) to be acceptable in the given context, but in fact the sentences remain unacceptable in any context. In contrast, if we consider one NP to be an indefinite, no NP cannot be an alternative to one NP: there is no predicate no such that an alternative could take the form λw.∃x[no(x) ∧ dog(x) ∧ see(a,x,w)]. For this reason, I do not adopt the alternative analysis where one NP is treated as a generalized quantifier.
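To close the section, the robustness point above, namely that nothing changes if even's ScalarP only requires being less likely than some alternative rather than the least likely, can be verified on the same kind of entailment-based toy model used earlier. Everything below is illustrative, and the names are invented for the sketch.

    WORLDS = set(range(6))                        # worlds = number of dogs Alan saw
    def saw_at_least(n): return {w for w in WORLDS if w >= n}
    def neg(p): return WORLDS - p
    def entails(p, q): return p <= q              # entailment fixes the scale (section 3.2)

    def scalarp_least(p, alts):
        # Karttunen & Peters-style ScalarP: p is the least likely alternative
        return all(entails(p, q) for q in alts if q != p)

    def scalarp_some(p, alts):
        # Kay-style ScalarP: p is less likely than at least one alternative
        return any(entails(p, q) for q in alts if q != p)

    pos = [saw_at_least(n) for n in range(1, 6)]
    negs = [neg(p) for p in pos]

    # Both definitions give the same verdict for the proposition with 'one':
    print(scalarp_least(saw_at_least(1), pos), scalarp_some(saw_at_least(1), pos))              # False False
    print(scalarp_least(neg(saw_at_least(1)), negs), scalarp_some(neg(saw_at_least(1)), negs))  # True True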

5. Notes on One NPIs in Downward-Entailing Contexts

In this paper, we have only seen examples of NPIs in negative sentences, but it is well known that the distribution of NPIs is much wider. In particular, NPIs occur in semantically restricted contexts that are generally characterized as downward-entailing (DE) contexts (Ladusaw 1979). A DE context is a context that reverses an entailment. Take dogs and animals, where dogs is semantically stronger than animals (in that the set of dogs is a subset of the set of animals). While (71a) entails (71b) (but not vice versa), this entailment gets reversed in negative sentences: (72b) entails (72a) (but not vice versa). These examples show that negative, but not positive, sentences are DE, and thus NPIs can be licensed in negative, but not in positive, sentences (e.g., Alan didn't see any dog vs. *Alan saw any dog).19

(71) a. Alan saw dogs.
     b. Alan saw animals.

(72) a. Alan didn't see dogs.
     b. Alan didn't see animals.

Besides negative sentences, the restrictor of every and the complement of adversative predicates such as surprised are considered to be DE contexts: in (73) and (74), (b) entails (a).20

(73) a. Every boy who saw dogs ran away.
     b. Every boy who saw animals ran away.

(74) a. Bill was surprised that Alan saw dogs.
     b. Bill was surprised that Alan saw animals.

As shown in (75), the NPI any is licensed in these contexts. The Hindi NPI ek bhii is also licensed in these contexts, as in (76).

(75) a. Every boy who saw any dog ran away.
     b. Bill was surprised that Alan saw any dog.

19 Some complication arises in cases where not and an NPI are not in the same clause (see Linebarger 1987, among others). The discussion in this paper is restricted to cases where not and an NPI are local.
20 The entailment pattern with adversative predicates is somewhat controversial. See Ladusaw (1979), Linebarger (1987), and Kadmon and Landman (1993).
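Downward-entailingness, as defined here, can also be checked mechanically on a toy model. The sketch below is illustrative only (the entities, predicates, and the simplified restrictor check are all assumptions of the sketch): it verifies that a plain positive context is not DE, that negation is, and that a restrictor-like position of every is, mirroring (71)-(73).

    # Toy check of downward-entailingness (Ladusaw 1979).
    dogs     = {"rex", "fido"}
    animals  = {"rex", "fido", "tom"}            # tom is a cat, so dogs <= animals
    alan_saw = {"tom"}                           # in this model Alan saw only the cat
    ran_away = {"rex", "fido", "tom"}

    def is_de(context, sets=(dogs, animals)):
        # context maps a set to a truth value; it counts as DE (on this toy
        # pair of sets) iff A <= B implies that context(B) entails context(A).
        return all((not context(B)) or context(A)
                   for A in sets for B in sets if A <= B)

    saw_some = lambda s: bool(alan_saw & s)                   # 'Alan saw (some) s'
    print(is_de(saw_some))                                    # False: positive contexts, cf. (71)
    print(is_de(lambda s: not saw_some(s)))                   # True: negation, cf. (72)
    print(is_de(lambda s: all(x in ran_away for x in s)))     # True: restrictor-like position of every, cf. (73)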


(76) a. aisaa har chaatr jisne ek bhii kitaab paRhii, paas ho gayaa
        such every student who one even book read passed
        'Every student who read any book passed.' (Lahiri 1998:63)
     b. mujhe is baat par aaScarya huaa ki ek bhii aadmii tumhaare ghar gayaa
        me this fact on surprise be that one even person your house went
        'I am surprised that anyone went to your house.' (Lahiri 1998:72)

Lahiri's (1998) analysis summarized in section 3.2 is capable of accounting for why ek bhii is licensed in DE contexts. Under the scope theory of even, even is allowed to take scope over a DE operator, be it negation, every, or an adversative predicate. As shown above, a DE operator reverses entailment. In Lahiri's NPI examples, a proposition with ek 'one' is the weakest, that is, it is entailed by propositions with other cardinality predicates. With a DE operator, this entailment gets reversed, that is, a proposition with ek 'one' becomes the strongest in that it entails all the other alternative propositions. Bhii 'even' introduces the ScalarP that the target proposition, i.e., the proposition with ek 'one', is the least likely, hence the strongest. This ScalarP is consistent with the meaning of ek 'one' only in DE contexts. It follows that ek bhii is licensed only if a DE operator is present, giving a natural explanation for Ladusaw's (1979) generalization that NPIs are licensed only in DE contexts. In this way, under Lahiri's analysis, the correlation between NPIs and DE contexts is not arbitrary. The restricted distribution of NPIs is derived from independent properties of bhii 'even'.

The proposed analysis of Japanese one NPIs, which is based on Lahiri's, predicts that one NPIs should be acceptable in DE contexts besides negative sentences. However, this prediction is not borne out, as in (77) (Nakanishi 2006).

(77) a.* inu-o [ip-piki]F-mo mi-ta subete-no otokonoko-wa nigedai-ta.
         dog-ACC one-CL-MO see-PAST all-GEN boy-TOP run.away-PAST
         'All the boys who saw any dog ran away.'
     b.* Alan-ga inu-o [ip-piki]F-mo mi-ta-to-wa odoroi-ta.
         Alan-NOM dog-ACC one-CL-MO see-PAST-COMP-TOP surprise-PAST
         '(I) was surprised that Alan saw any dog.'

The Japanese item at issue is subject to a much stronger distributional restriction than the corresponding Hindi expression. In particular, the Japanese item is licensed only by clausemate negation. This distributional restriction leads some researchers to claim that the item at issue is not an NPI but a negative concord item (Watanabe 2004). However, I showed in section 4.2 that -mo corresponds to English even or Hindi bhii, and thus both Japanese and Hindi possess an expression that is composed of one and even. This crosslinguistic correspondence suggests that, if the compositional analysis of one + even is fruitful in one language, the same analysis should extend to the other language. Pursuing this line of research, in Nakanishi (2006) I accounted for the distributional restriction of one + even in Japanese in terms of the distributional restrictions of different even items in Japanese (see An 2007 for a similar analysis of the corresponding Korean item). Sentences (77) are acceptable when -mo is replaced with -demo. As shown in section 4.2.3, -demo as well as -mo corresponds to English even. The upshot of my earlier paper is that the scope of -mo is clause-bound, but that of -demo is not. In (77), to make the ScalarP of even and the semantics of one consistent, even needs to scope over every or surprise. In Japanese, -mo is clause-bound, and thus it cannot scope over these items, which accounts for why sentences (77) are infelicitous. In contrast, -demo is able to take wide scope, and thus sentences (77) with -demo are felicitous. In this way, we may be able to retain Lahiri's compositional analysis for the Japanese examples by adopting scope restrictions on even items.

6. Concluding Remarks

In this paper, I presented a novel observation on semantic differences between Type I and Type II / III one NPIs in Japanese, and argued that all three types involve a component that corresponds to English even. I further showed that Lahiri's compositional analysis accounts for why the three types of one NPIs in Japanese are unacceptable in positive contexts and also for why Type I and Type II / III differ in their interpretations. The unacceptability in positive contexts is explained as a 'presupposition clash': the ScalarP introduced by even contradicts the meaning of one or one NP. The semantic difference between Type I and Type II / III is explained by their difference in focus sites: while even in Type I associates only with the cardinality predicate, even in Type II / III associates with the entire noun phrase (one plus the quantified NP). It follows that different alternatives are introduced in computing the ScalarP of even, which yields different interpretations, namely, Type I 'not ... any NP' and Type II / III 'not ... anything'. In this way, as in Lahiri's (1998) analysis, the distribution and interpretation of one NPIs are derived from independent properties of -mo 'even' in the proposed analysis.

As a final remark, let us discuss variation among NPIs. It is well known that NPIs are not uniform; it is possible to categorize them into different classes depending on their licensing conditions (Zwarts 1998). Especially relevant to this paper is the distinction between weak and strong NPIs. In English, strong NPIs can occur with even, while weak ones cannot, as shown in (78) and (79).

(78) a. Alan didn't (even) lift a finger to help Bill.
     b. Colin didn't (even) have a single bite.                            (= (43))

(79) a. Alan didn't (*even) do anything to help Bill.
     b. Colin didn't (*even) have any bite.

The proposed analysis assumes that all types of Japanese one NPIs come with even, which suggests that the one NPIs are strong NPIs. It has been claimed that strong and weak NPIs differ in at least three respects. First, it is generally the case that weak NPIs are unstressed, whereas strong NPIs are stressed (Krifka 1995). Second, strong NPIs, but not weak NPIs, are negatively biased in questions (Ladusaw 1979, Heim 1984, Wilkinson 1996, Guerzoni 2003). For instance, the strong NPI in (80a) is biased toward a negative answer in that it cannot be followed by an affirmative answer, while the weak NPI in (80b) is neutral in this respect.

(80) a. Did Alan lift a finger to help Bill?      ??Yes. / No.
     b. Did Alan do anything to help Bill?          Yes. / No.

Third, strong NPIs, but not weak NPIs, require non-accidental generalizations (Linebarger 1980, Heim 1984, Guerzoni 2003). The examples in (81) show that the strong NPI is unacceptable when there is no natural relation between the relative clause and the main clause, while (82) shows that the weak NPI is immune to such a restriction.

(81) a.  Every student who had a single bite of the salad got sick.
     b.??Every student who had a single bite of the salad is taller than me.

(82) a. Every student who had any of the salad got sick.
     b. Every student who had any of the salad is taller than me.

Japanese one NPIs have the three characteristic properties of strong NPIs. First, one NPIs are focus-sensitive, as discussed in section 4.2. Regarding the second and third properties, one NPIs in contexts other than negative ones involve -demo instead of -mo, although both items correspond to English even (see section 5). Thus, in questions and in the restrictor of universal quantifiers, -demo appears instead of -mo. With this caveat, let us examine the question in (83). (83) seems to be negatively biased in that the speaker is expecting to hear a negative answer. In other words, the speaker seems to believe that Alan didn't read any book. Lastly, the examples in (84) show that the one NPI is unacceptable when the relation between the relative clause and the main clause is merely accidental.

(83) Alan-wa hon-o is-satu-demo yon-da-no?
     Alan-TOP book-ACC one-CL-DEMO read-PAST-Q
     'Did Alan read even a single book?'

(84) a.  Sarada-o hito-kuti-demo tabe-ta subete-no gakusei-wa byooki-ni nat-ta.
         salad-ACC one-CL-DEMO eat-PAST all-GEN student-TOP sick-DAT become-PAST
         'Every student who had a single bite of the salad got sick.'
     b.??Sarada-o hito-kuti-demo tabe-ta subete-no gakusei-wa watasi-yori se-ga takai.
         salad-ACC one-CL-DEMO eat-PAST all-GEN student-TOP I-than height-NOM high
         'Every student who had a single bite of the salad is taller than me.'

In the current analysis based on Lahiri's (1998), it is crucial that NPIs come with even. It is predicted that the same analysis should extend to NPIs with even, namely, strong NPIs that have the three properties discussed above. A question remains as to what explains the distribution and semantic properties of weak NPIs. For example, unstressed any in English does not have even, as shown in (79), and thus the analysis for Hindi NPIs and Japanese one NPIs is inapplicable.

Lastly, I would like to point out that some variation can be observed even among one NPIs. In the proposed analysis, Type II and III one NPIs come with even that associates with one + NP, predicting that they are semantically equivalent. Indeed, as far as the data presented above are concerned, this prediction seems to be borne out. However, the two types differ in that idiomatic expressions can be formed with Type III one NPI, but not with Type II one NPI. For example, take the expression hitokko 'human child'. This expression cannot be used in ordinary sentences, regardless of whether the sentence is positive or negative, as in (85). However, Type III one NPI hitokko hito-ri in negative sentences is widely used to convey the meaning 'nobody at all'. For example, (86a) means that Alan saw nobody at all. Note that this expression cannot be used in Type II one NPI, as in (86b).


(85) * Alan-wa hitokko-o {mi-ta / mi-na-katta}.
       Alan-TOP human.child-ACC {see-PAST / see-NEG-PAST}
       '(lit.) Alan {saw / didn't see} a human child.'

(86) a. Type III:
        Alan-wa hitokko hito-ri mi-na-katta.
        Alan-TOP human.child one-CL see-NEG-PAST
        '(lit.) Alan didn't see one human child.'
     b. Type II:
       *Alan-wa hito-ri-no hitokko-mo mi-na-katta.
        Alan-TOP one-CL-GEN human.child-MO see-NEG-PAST

Another example is given in (87a), where namida hito-tu 'tear one-CL' in negative contexts yields the interpretation 'didn't cry at all'. What is remarkable here is that namida 'tear' generally occurs with the classifier -teki, which is used to count drops of liquid, and not with -tu, as in (88). Indeed, the Type II one NPI in (87b) is unacceptable.

(87) a. Type III:
        Alan-wa namida hito-tu mis-ena-katta.
        Alan-TOP tear one-CL show-NEG-PAST
        '(lit.) Alan didn't show one tear.'
     b. Type II:
       *Alan-wa hito-tu-no namida-mo mis-ena-katta.
        Alan-TOP one-CL-GEN tear-MO show-NEG-PAST

(88) a.  yon-teki-no namida
         four-CL-GEN tear
         'four drops of tears'
     b. *yot-tu-no namida
         four-CL-GEN tear

I do not have a satisfactory account of the idiomatic nature of Type III one NPI, but I would like to point out that English strong NPIs with a silent even (which may optionally be expressed overtly) also bring in an idiomatic flavor. For example, one can say 'Alan didn't lift a finger to help Bill', meaning 'Alan didn't help Bill at all', but one cannot get this idiomatic interpretation by saying 'Alan didn't lift a toe to help Bill'. Given that Type III one NPI in Japanese also comes with a silent even, we may say that the silent even is the culprit here. In the case of Type II one NPI, the obligatory presence of an overt even item (namely, -mo) may somehow block idiomatic interpretations.

References

An, Dok-Ho. 2007. On the distribution of NPIs in Korean. Natural Language Semantics 15, 317-350.
Aoyagi, Hiroshi. 1994. On association with focus and scope of focus particles in Japanese. MIT Working Papers in Linguistics 24, 23-44.
Chierchia, Gennaro. 2004. Scalar implicatures, polarity phenomena, and the syntax/pragmatics interface. In A. Belletti ed., Structures and Beyond, 39-65. Oxford: Oxford University Press.
Downing, Pamela. 1996. Numeral Classifier Systems: The Case of Japanese. Amsterdam: John Benjamins.
Fauconnier, Gilles. 1975a. Polarity and the scale principle. The Proceedings of the Chicago Linguistics Society 11, 188-199.
Fauconnier, Gilles. 1975b. Pragmatic scales and logical structure. Linguistic Inquiry 6, 353-375.
Francescotti, Robert. 1995. EVEN: The conventional implicature approach reconsidered. Linguistics and Philosophy 18, 153-173.
Fujita, Naoya. 1994. On the Nature of Modification: A Study of Floating Quantifiers and Related Constructions. Ph.D. dissertation, University of Rochester.
Giannakidou, Anastasia. 2007. The landscape of EVEN items. Natural Language and Linguistic Theory 25, 39-81.
Guerzoni, Elena. 2003. Why Even Ask? On the Pragmatics of Questions and the Semantics of Answers. Ph.D. dissertation, Massachusetts Institute of Technology.
Hamano, Shoko. 1997. On Japanese quantifier floating. In A. Kamio ed., Directions in Functional Linguistics, 173-197. Amsterdam: John Benjamins.
Heim, Irene. 1984. A note on negative polarity and downward entailingness. In C. Jones and P. Sells eds., The Proceedings of the 14th Conference of the North East Linguistic Society (NELS 14), 98-107.
Herburger, Elena. 2000. What Counts: Focus and Quantification. Cambridge, MA: MIT Press.
Herburger, Elena. 2003. A note on Spanish ni siquiera, even, and the analysis of NPIs. Probus 15, 237-256.
Hiraiwa, Ken, and Kimiko Nakanishi. 2008. On bare indeterminates in Japanese. Manuscript, University of Victoria and University of Calgary.
Inoue, Kazuko. 1978. Nihongo-no Bunpoo Kisoku [Grammar Rules in Japanese]. Tokyo: Taisyukan.
Jackendoff, Ray. 1972. Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
Kadmon, Nirit, and Fred Landman. 1993. Any. Linguistics and Philosophy 16, 353-422.
Karttunen, Lauri, and Stanley Peters. 1979. Conventional implicature. In C. K. Oh and D. A. Dinneen eds., Syntax and Semantics 11: Presuppositions, 1-55. New York: Academic Press.
Kay, Paul. 1990. Even. Linguistics and Philosophy 13, 59-111.
Krifka, Manfred. 1995. The semantics and pragmatics of polarity items. Linguistic Analysis 25, 1-49.
Kuroda, S.-Y. 1965. Generative Grammatical Studies in the Japanese Language. Ph.D. dissertation, Massachusetts Institute of Technology.
Ladusaw, William. 1979. Polarity Sensitivity as Inherent Scope Relations. Ph.D. dissertation, University of Texas at Austin.
Lahiri, Utpal. 1998. Focus and negative polarity in Hindi. Natural Language Semantics 6, 57-123.
Lee, Young-Suk, and Laurence Horn. 1994. Any as indefinite plus even. Manuscript, Yale University.
Linebarger, Marcia. 1980. The Grammar of Negative Polarity. Ph.D. dissertation, Massachusetts Institute of Technology.
Linebarger, Marcia. 1987. Negative polarity and grammatical representation. Linguistics and Philosophy 10, 325-387.
Miyagawa, Shigeru. 1989. Structure and Case Marking in Japanese. New York: Academic Press.
Nakanishi, Kimiko. 2006. Even, only, and negative polarity in Japanese. The Proceedings of the 16th Semantics and Linguistic Theory Conference (SALT 16), 138-155.
Nakanishi, Kimiko. 2007. Cross-linguistic variation in polarity items. The Symposium on Semantic/Pragmatic Perspectives on Negative Polarity Items, the 81st Linguistic Society of America (LSA) Annual Meeting, Anaheim, CA.
Nakanishi, Kimiko. 2008a. Scalarity of -mo. Handout, presented at Keio University, Tokyo.
Nakanishi, Kimiko. 2008b. Scope of even: A cross-linguistic perspective. The Proceedings of the 38th Meeting of the North East Linguistic Society (NELS 38).
Nakanishi, Kimiko. 2008c. The syntax and semantics of floating numeral quantifiers. In S. Miyagawa and M. Saito eds., The Oxford Handbook of Japanese Linguistics, 286-318. Oxford: Oxford University Press.
Nishigauchi, Taisuke. 1990. Quantification in the Theory of Grammar. Dordrecht: Kluwer.
Numata, Yoshiko. 1992. 'Mo', 'Dake', 'Sae', etc. - Toritate ['Also', 'Only', 'Even', etc. - Emphasizing]. Tokyo: Kuroshio.
Rooth, Mats. 1985. Association with Focus. Ph.D. dissertation, University of Massachusetts, Amherst.
Rooth, Mats. 1992. A theory of focus interpretation. Natural Language Semantics 1, 75-116.
Rullmann, Hotze. 1997. Even, polarity, and scope. In M. Gibson, G. Wiebe and G. Libben eds., Papers in Experimental and Theoretical Linguistics 4, 40-64.
Selkirk, Elisabeth. 1984. Phonology and Syntax: The Relation between Sound and Structure. Cambridge, MA: MIT Press.
Shimoyama, Junko. 2001. Wh-Constructions in Japanese. Ph.D. dissertation, University of Massachusetts, Amherst.
Shimoyama, Junko. 2003. Wide scope universal NPIs in Japanese? Handout for The Seminar on Polarity and Wh-Constructions (R. Bhatt and B. Schwarz, Fall 2003), University of Texas at Austin.
Shimoyama, Junko. 2006. Indeterminate phrase quantification in Japanese. Natural Language Semantics 14, 139-173.
Shimoyama, Junko. 2008. Indeterminate NPIs and scope.
von Stechow, Arnim. 1991. Current issues in the theory of focus. In A. von Stechow and D. Wunderlich eds., Semantik: Ein internationales Handbuch der zeitgenössischen Forschung, 804-825. Berlin: Walter de Gruyter.
von Stechow, Arnim. 1996. Against LF pied-piping. Natural Language Semantics 4, 57-110.
Watanabe, Akira. 2004. The genesis of negative concord: Syntax and morphology of negative doubling. Linguistic Inquiry 35, 559-612.
Watanabe, Akira. 2006. Functional projections of nominals in Japanese: Syntax of classifiers. Natural Language and Linguistic Theory 24, 241-306.
Wilkinson, Karina. 1996. The scope of even. Natural Language Semantics 4, 193-215.
Zwarts, Frans. 1998. Three types of polarity. In F. Hamm and E. Hinrichs eds., Plural Quantification, 177-238. Dordrecht: Kluwer.
