The Role of Intelligence in Nuclear Deterrence

Dov Biran [email protected] and Yair Tauman [email protected] Department of Economics SUNY at Stony Brook Stony Brook, NY 11794-4384, USA and The Leon Recanati Graduate School of Business Administration Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel

November 2, 2008

1. Introduction

The paper analyzes the impact of intelligence in a simple model of two rival countries (players). Player 1 wishes to develop a nuclear bomb, and Player 2's aim is to frustrate Player 1's intention, even if it requires attacking and destroying his facilities. But before launching an attack, Player 2 wants to be convinced with high probability that her rival is indeed developing the bomb. For this purpose Player 2 operates a spying device or intelligence system (IS) of a certain precision α, 1/2 < α ≤ 1, which may or may not be commonly known. In other words, the IS detects the action of Player 1 correctly with probability α. Based on the signal of the IS, Player 2 decides whether or not (or with what probability) to attack and destroy the facilities of her rival. The signal "b" indicates that Player 1 builds a bomb and the signal "nb" indicates the opposite.

The preferences of the players over the four possible outcomes are as follows. Player 2's best outcome is that Player 1 does not build a bomb and Player 2 does not attack him. The second-best outcome for Player 2 is that a bomb is built and destroyed. This outcome is better for her than the one where she unjustifiably attacks Player 1. Letting Player 1 build a bomb is the worst outcome for Player 2. As for Player 1, he prefers not to be attacked irrespective of his actions. He most prefers the outcome where he builds a bomb but is not attacked. His worst outcome is when he builds a bomb and it is destroyed. Note that this game is not a zero-sum game.

Several results are quite surprising. Consider first the case where α is commonly known. In equilibrium both players benefit from a higher-precision IS. While it is not surprising why this is so for Player 2, the benefit to Player 1 is less clear, since Player 1's actions are now better monitored by Player 2. Moreover, it is shown that both players are better off with the IS than without it, irrespective of its precision. Actually, Player 1 has an incentive to subsidize Player 2 to build an IS which is as accurate as possible, even though it will better monitor his actions. Next, if the IS is sufficiently accurate (α exceeds a certain threshold), Player 2, as expected, will choose not to attack Player 1 if the signal is nb. But if the signal is "b", still Player 2 will not attack Player 1 with significant


probability, even though the worst case for Player 2 is to allow Player 1 to have a bomb. On the other hand, if the IS is less accurate (α is smaller than that threshold), Player 2 acts aggressively. She attacks Player 1 with probability 1 if the signal is "b", and she even attacks him with positive probability if the signal is "nb". Nevertheless, in this region of α the probability that Player 1 builds a bomb increases with the precision of the IS, even though he is more likely to be detected.

Let us provide some intuition for these results. If the precision of the IS is relatively high, Player 1 knows that if he chooses to build a bomb, Player 2 will detect it with high probability and is likely to attack him. Hence, Player 1 is better off building a bomb with small probability. But then if the IS sends the signal "b", suggesting that Player 1 is building a bomb, the signal becomes less reliable and Player 2 hesitates to act aggressively. Indeed, if Player 2 obtains the signal "b", she attacks Player 1 with a probability which is bounded away from 1 and which decreases with the precision of the IS. On the other hand, if the IS is less accurate, Player 1 builds a bomb with significant probability, knowing that there is a good chance he will not be detected. In an attempt to avoid the worst-case scenario, Player 2, conditional on the signal "b", attacks Player 1 with no hesitation. Moreover, with positive probability she attacks him even if the signal is "nb" (in this case the probability that Player 1 builds a bomb is also significant). It is also shown that in equilibrium the unconditional probability that Player 2 attacks Player 1 decreases with the precision of the IS. This is in line with the result that Player 1 benefits from a higher-precision IS.

Finally, we analyze the case where the precision α of the IS is the private information of its owner, Player 2. Player 1 knows the distribution of α but not its actual value. We find that the equilibrium strategies are qualitatively similar to the common knowledge case, but with one difference: in contrast to the common knowledge case, the ex-post expected payoff of Player 1 decreases the higher the difference between the actual and the expected precision of the IS.


The literature offers a variety of models which examine arms building, arms races, nuclear deterrence and signaling, escalation and inspection, arms control and verification, and other such specific military games. O'Neill (1994) provides an extensive survey of this literature. Let us briefly mention some models. Already in 1917 Thomas Edison presented the problem of how to get transport ships past German U-boats into British ports and defined it as a game of ambush. Most of the literature on military game theory deals with attackers and defenders. General discussions with military examples can be found in Dresher (1961, 1968), Shubik (1983, 1987), Thomas (1966), Finn and Kent (1985) and O'Neill (1993). Leonard (1992) recounts the history of military interest in game theory. One class of papers deals with missile attack and defense. The best-known work on the subject is the Prim-Read theory, which is based on Read (1957, 1961) and Karr (1981). In a simple version of the model, an attacker sends missile warheads to destroy some fixed targets, while the defender tries to protect the targets using interceptors which are themselves missiles. Another famous class of military games are allocation games, and the most popular example is the Colonel Blotto game. In this game players simultaneously divide their resources among several small engagements. Borel had already provided a simple version in 1921, not long before he became France's Minister of the Navy. Each player divides one unit among three positions, and whoever has assigned a greater amount to two out of the three positions wins. Even though the description of the game is simple, some mixed-strategy solutions are quite complex. To the best of our knowledge, Biran's (1991) thesis is the only work which involves intelligence and surveillance. However, one should be aware that much of the American research is classified and appears in restricted journals such as The Proceedings of the Military Operations Research Society Symposium or The Journal of Defense Research. Notice that the IS in this paper is a machine, not a decision maker who has some relevant information about the action chosen by Player 1. An information holder can act strategically and


may transmit partial (or full) information to Player 2, for the right price. For the value of information in strategic conflicts see Kamien, Tauman and Zamir (1990). While our model is closely related to that of Biran (1991), the context and the results are different. Unlike the existing literature, we are not concerned with the question of how the IS is actually designed and functions, or how the attacker can effectively destroy the facilities of the opponent. Rather, we are concerned only with the impact the IS has on the strategic behavior of the two rivals and on the equilibrium outcome of this conflict.

2. The basic set-up

There are two players. Player 1 is potentially a bomb developer. Player 2 seeks to attack Player 1 and destroy his facilities if Player 1 indeed develops a bomb. Player 2, the potential attacker, operates an Intelligence System (IS) of quality α. That is, if Player 1 chooses to build (B) a bomb, the IS will detect it with probability α and will then send the signal b to Player 2. It will send the wrong signal nb with probability 1 − α. Similarly, if Player 1 chooses not to build a bomb (NB), then the IS will send the signal nb with probability α and the signal b with probability 1 − α. This can be described by the following tree.

Figure 1: Player 1 chooses B or NB; conditional on his choice, the IS sends the correct signal (b after B, nb after NB) with probability α and the wrong signal with probability 1 − α.

Conditional on the signal sent by the IS, Player 2 will either attack (A) Player 1 or not attack (NA). It is assumed that if Player 2 attacks Player 1, she will destroy his capability to build a bomb with


probability 1. Our basic results would not change if this probability were less than 1 but sufficiently high. It is also assumed, without loss of generality, that α > 1/2, and we first analyze the case where α is common knowledge. In particular, Player 1 knows that Player 2 uses a spying device against him with precision level α. The case α = 1/2 is equivalent to the case where Player 2 does not use an IS. The following table describes the payoffs of the two players resulting from their possible actions.

                 NA           A
    NB        w1 , 1       r1 , r2
    B          1 , 0       0 , w2

Figure 2

Assumption 0 < ri < wi < 1, i = 1, 2.

Namely, Player 1 prefers not to be attacked irrespective of his decision to build or not to build a bomb. He most prefers the outcome where he builds a bomb and is not attacked. His worst outcome is to build a bomb and have it destroyed. Player 2's best outcome is that Player 1 does not build a bomb and is not attacked. The second-best outcome for her is that Player 1 builds a bomb and it is destroyed. This outcome is better for her than the one where she unjustifiably attacks Player 1. The worst case for Player 2 is to allow Player 1 to have a bomb. In the presence of the IS, Player 2 has four pure strategies. A pure strategy of Player 2 is a pair (x, y), where both x and y are in {NA, A}, x is the action of Player 2 if she observes the signal nb, and y is her action if she observes the signal b. The strategic form of the game between the two players is described in Figure 3 and is derived from Figure 1 and Figure 2, above.


            (NA, NA)    (NA, A)                            (A, NA)                            (A, A)
    NB      w1 , 1      αw1 + (1−α)r1 , α + (1−α)r2        (1−α)w1 + αr1 , 1−α + αr2          r1 , r2
    B       1 , 0       1−α , αw2                          α , (1−α)w2                        0 , w2

Figure 3 (The game Gα)
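The strategic form of Figure 3 is mechanical to reproduce from the tree of Figure 1 and the payoffs of Figure 2. The sketch below does so for illustrative payoff numbers (r1 = 0.2, w1 = 0.5, r2 = 0.3, w2 = 0.6 are hypothetical; the paper only assumes 0 < ri < wi < 1) and checks the dominance of (A, NA) by (NA, A) used in Section 3:

```python
# Hypothetical numbers; the paper only assumes 0 < r_i < w_i < 1.
r1, w1 = 0.2, 0.5
r2, w2 = 0.3, 0.6

def G(alpha):
    """Strategic form of G_alpha (Figure 3): rows NB, B; columns are
    Player 2's pure strategies (x, y) = (action on nb, action on b)."""
    cols = [("NA", "NA"), ("NA", "A"), ("A", "NA"), ("A", "A")]
    U1, U2 = {}, {}
    for row in ("NB", "B"):
        pb = alpha if row == "B" else 1 - alpha   # Prob(signal b | action)
        for x, y in cols:
            u1 = u2 = 0.0
            for sig, ps in (("b", pb), ("nb", 1 - pb)):
                act = y if sig == "b" else x
                if row == "NB":
                    u1 += ps * (w1 if act == "NA" else r1)
                    u2 += ps * (1.0 if act == "NA" else r2)
                else:
                    u1 += ps * (1.0 if act == "NA" else 0.0)
                    u2 += ps * (0.0 if act == "NA" else w2)
            U1[(row, (x, y))] = u1
            U2[(row, (x, y))] = u2
    return U1, U2

U1, U2 = G(0.8)
# (A, NA) is strictly dominated by (NA, A) for Player 2 when alpha > 1/2:
assert all(U2[(r, ("NA", "A"))] > U2[(r, ("A", "NA"))] for r in ("NB", "B"))
```

The entries reproduce Figure 3; for instance, at α = 0.8 the cell (B, (NA, A)) gives Player 1 the payoff 1 − α = 0.2.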

For instance, the strategy (NA, A) of Player 2 is not to attack Player 1 if the signal is nb and to attack him if the signal is b. The strategy (A, A) is to attack Player 1 irrespective of the signal. We denote this game by Gα.

3. The equilibrium analysis of Gα where α is commonly known

In this section we assume that α is commonly known to the two players. Under this assumption we compute the Nash equilibrium of Gα. In the next section we will analyze the case where α is the private information of Player 2, the operator of the IS. Suppose first that 1/2 < α < 1. We note from Figure 3 that the strategy (A, NA) of Player 2 is strictly dominated by her strategy (NA, A), since α > 1/2. Therefore the strategy (A, NA) can be eliminated. The resulting game is


                (NA, NA)    (NA, A)                            (A, A)
    p     NB    w1 , 1      αw1 + (1−α)r1 , α + (1−α)r2        r1 , r2
    1−p   B     1 , 0       1−α , αw2                          0 , w2

Figure 4

Let

    ᾱ = (1 − r1) / (1 − r1 + w1).

The threshold ᾱ is crucial for the analysis. We assume first that α ≠ ᾱ. The cases α = ᾱ and α = 1 will be treated separately.

Proposition 1 Let 1/2 < α < 1 and suppose that α ≠ ᾱ. Then the game Gα has a unique Nash equilibrium. (1) If α < ᾱ, Player 2 will attack Player 1 if the signal is b and will randomize between attacking and not attacking if the signal is nb. (2) If α > ᾱ, Player 2 will not attack Player 1 if the signal is nb and will randomize between attacking and not attacking if the signal is b. Player 1 always randomizes between building and not building the bomb. (3) The expected payoffs of both players increase in α, for all α, 1/2 < α < 1. (4) The probability that Player 1 builds a bomb increases in α for 1/2 < α < ᾱ and decreases in α for ᾱ < α ≤ 1.

Proof See Appendix.

For 1/2 < α < ᾱ the equilibrium strategies are

    ((p̂, 1 − p̂), (0, q̂, 0, 1 − q̂)),

where

    Prob(NB) = p̂ = (1 − α)w2 / ((1 − α)w2 + α(1 − r2)),   decreasing in α,

and

    q̂ = r1 / (1 − α(1 + w1 − r1)).

For ᾱ < α < 1 they are ((p*, 1 − p*), (q*, 1 − q*, 0, 0)), where

    Prob(NB) = p* = αw2 / (αw2 + (1 − α)(1 − r2)),   increasing in α,

and

    q* = (α(1 − r1 + w1) − (1 − r1)) / (α(1 − r1 + w1) − (w1 − r1)).

The expected payoffs are

    Π1 = (1 − α)r1 / (1 − α(1 + w1 − r1))                           for 1/2 < α < ᾱ,
    Π1 = ((2α − 1)w1 + (1 − α)r1) / (α(1 − r1 + w1) − (w1 − r1))    for ᾱ < α < 1,

and

    Π2 = ((1 − α)w2 r2 + αw2(1 − r2)) / ((1 − α)w2 + α(1 − r2))     for 1/2 < α < ᾱ,
    Π2 = αw2 / (αw2 + (1 − α)(1 − r2))                              for ᾱ < α < 1.

The results of Proposition 1 appear counterintuitive. Let us provide some intuition for them. Suppose first that ᾱ < α < 1. In this case Player 1 knows that with relatively high probability his actions will be correctly detected by the IS. Thus, he chooses to build a bomb with low probability, which decreases to zero as α increases to 1. Consequently, for a large α Player 2 does not expect the signal b, and when it appears Player 2 updates her belief about the probability that Player 1 chose B. In equilibrium

    Prob(B | b) = (1 − r2) / (1 − r2 + w2),

irrespective of α, ᾱ < α < 1. This probability is bounded away from 1, inducing Player 2 to act cautiously and attack Player 1 with a probability smaller than 1 even if the signal is b. Since Player 2 expects the signal nb for large α, she relies on the IS when the signal is indeed nb and then does not attack Player 1.

Suppose next that 1/2 < α < ᾱ. In this case the IS is not very reliable, and Player 1 builds a bomb with significant probability, knowing that there is a good chance he will not be detected. In an attempt to avoid the worst-case scenario, Player 2 attacks Player 1 with no hesitation when the signal is b. But since the probability that Player 1 builds a bomb conditional on the signal nb is also quite significant, Player 2 attacks Player 1 with positive probability even when the signal is nb. It can be verified that in equilibrium

    Prob(B | b) = α²(1 − r2) / (α²(1 − r2) + (1 − α)²w2),

and it increases in α for 1/2 < α < ᾱ. Hence

    Prob(B | b) > (1 − r2) / (1 − r2 + w2),

so this probability is greater than in the case where the IS is relatively reliable (α > ᾱ). Note also that

    Prob(B | nb, α < ᾱ) = (1 − r2) / (1 − r2 + w2) = Prob(B | b, α > ᾱ).
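The two identities above follow from Bayes' rule applied to the equilibrium value of Prob(NB) in each region. A short numerical check, again under hypothetical parameters:

```python
# Hypothetical payoff parameters (not from the paper).
r1, w1, r2, w2 = 0.2, 0.5, 0.3, 0.6

def prob_B(alpha, abar):
    """Equilibrium probability that Player 1 builds, 1 - Prob(NB)."""
    if alpha < abar:   # low-precision region: Prob(NB) = p_hat
        return 1 - (1 - alpha) * w2 / ((1 - alpha) * w2 + alpha * (1 - r2))
    # high-precision region: Prob(NB) = p_star
    return 1 - alpha * w2 / (alpha * w2 + (1 - alpha) * (1 - r2))

def posterior(alpha, pB, signal):
    """Bayes posterior Prob(B | signal)."""
    like_b = alpha * pB + (1 - alpha) * (1 - pB)   # Prob(signal = b)
    if signal == "b":
        return alpha * pB / like_b
    return (1 - alpha) * pB / (1 - like_b)

abar = (1 - r1) / (1 - r1 + w1)
target = (1 - r2) / (1 - r2 + w2)

# Prob(B | b) is constant above abar, and Prob(B | nb) below abar
# equals the same constant, as claimed in the text.
for a in (0.7, 0.8, 0.95):
    assert abs(posterior(a, prob_B(a, abar), "b") - target) < 1e-12
for a in (0.52, 0.58, 0.61):
    assert abs(posterior(a, prob_B(a, abar), "nb") - target) < 1e-12
```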

The next counterintuitive result is that Player 1 builds a bomb with a probability which increases in α for 1/2 < α < ᾱ. To understand this result, note first that the inequality ᾱ > 1/2 holds iff w1 + r1 < 1. Secondly, note that by choosing B (and not NB), Player 1 loses r1 if Player 2 attacks him and gains 1 − w1 if Player 2 does not attack him. Since w1 + r1 < 1, the gain 1 − w1 exceeds the loss r1. Since the probability that Player 2 attacks Player 1 decreases with α (see Proposition 2, below), Player 1 chooses B with a probability which increases in α.

Proposition 1 also asserts that the expected payoffs of both players increase in α, 1/2 < α ≤ 1. This is quite intuitive with regard to Player 2, who operates the IS against Player 1. The more accurate the IS, the better the information Player 2 has on the action of Player 1, and the more effectively she can respond. It is less obvious why Player 1 benefits from a better-quality IS. The next proposition provides a clue.

Proposition 2 In equilibrium the unconditional probability of an attack on Player 1 decreases with α.

Proof See Appendix.

For ᾱ < α < 1 the intuition is simple. The higher the precision α, the lower the probability that Player 1 develops a bomb, and the lower the probability that Player 2 attacks Player 1. Consider next the case 1/2 < α < ᾱ. Player 2 attacks Player 1 with probability 1 if the signal is b and with probability 1 − q̂ if the signal is nb. The probability that the IS sends the signal b increases in α, since Player 1 chooses B with a probability which increases in α. On the other hand, both the probability of Player 1 choosing NB and 1 − q̂ decrease in α. Consequently Prob(nb)(1 − q̂) decreases in α, and it decreases faster than Prob(b) increases.
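Proposition 2 can be illustrated numerically by evaluating the unconditional attack probability on a grid of α values spanning both regions (hypothetical parameters as before):

```python
# Hypothetical parameters; the grid check below illustrates Proposition 2.
r1, w1, r2, w2 = 0.2, 0.5, 0.3, 0.6
abar = (1 - r1) / (1 - r1 + w1)

def attack_prob(a):
    """Unconditional equilibrium probability that Player 2 attacks."""
    if a < abar:
        pNB = (1 - a) * w2 / ((1 - a) * w2 + a * (1 - r2))
        q = r1 / (1 - a * (1 + w1 - r1))          # prob of (NA, A)
        prob_b = a * (1 - pNB) + (1 - a) * pNB
        # attack w.p. 1 after b, w.p. (1 - q) after nb
        return prob_b + (1 - prob_b) * (1 - q)
    pNB = a * w2 / (a * w2 + (1 - a) * (1 - r2))
    q = (a * (1 - r1 + w1) - (1 - r1)) / (a * (1 - r1 + w1) - (w1 - r1))
    prob_b = a * (1 - pNB) + (1 - a) * pNB
    # attack only after b, w.p. (1 - q)
    return prob_b * (1 - q)

xs = [0.51 + 0.01 * k for k in range(49)]          # 0.51 ... 0.99
vals = [attack_prob(x) for x in xs]
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))   # strictly decreasing
```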

Let us now deal with the two cases α = ᾱ and α = 1. In the first case, α = ᾱ, it can easily be shown that in equilibrium Player 1 mixes his two strategies, to build or not to build a bomb, both with positive probability. Player 2 chooses a pure strategy: with certainty she attacks Player 1 if the signal is b and does not attack Player 1 if the signal is nb. Suppose next that α = 1 (a perfect IS). It can easily be verified that the only equilibrium outcome is the one where Player 1 does not build a bomb and Player 2 does not attack. This is the best outcome for Player 2 and the second-best outcome for Player 1. We close the section with a comparison of the above results with the case where Player 2 does not operate an IS against Player 1 (and this is commonly known). In this case, both players play the game described in Figure 2. The next proposition states this comparison.

Proposition 3 Both players 1 and 2 are better off when Player 2 operates an IS against Player 1, irrespective of α, 1/2 < α < 1.

Proof See Appendix.

4. Asymmetric Information about the Precision of IS

In this section we assume that the precision α of the IS is the private information of its owner, Player 2. Player 1, who does not know α, assigns a continuous density f(α) > 0 to every α, 1/2 ≤ α ≤ 1, with ∫ f(α) dα = 1 (the integral taken over [1/2, 1]). In other words, Player 2 knows the game Gα which is actually being played, while Player 1 does not know which game Gα is being played but knows that α is chosen according to f(α), and this is commonly known. Let u1 and u2 be the utilities of the two players from the various outcomes. As in the previous section (see Figure 2), it is assumed that

    u1(B, A) = 0        u2(B, A) = w2
    u1(B, NA) = 1       u2(B, NA) = 0
    u1(NB, A) = r1      u2(NB, A) = r2
    u1(NB, NA) = w1     u2(NB, NA) = 1


Suppose that Player 1 chooses B with probability p and NB with probability 1 − p.

Figure 5: The induced probability tree — the four action-signal events (B, b), (B, nb), (NB, b) and (NB, nb) occur with probabilities αp, (1 − α)p, (1 − α)(1 − p) and α(1 − p), respectively.

The probability that Player 2 of type α (namely, Player 2 who knows that the precision of the IS is α) assigns to the event that Player 1 plays B after observing the signal b is

    Prob2(B | α, b) = αp / (αp + (1 − α)(1 − p)).

Similarly,

    Prob2(NB | α, b) = (1 − α)(1 − p) / (αp + (1 − α)(1 − p)),
    Prob2(B | α, nb) = (1 − α)p / ((1 − α)p + α(1 − p)),
    Prob2(NB | α, nb) = α(1 − p) / ((1 − α)p + α(1 − p)).
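The four posteriors are a direct application of Bayes' rule; a compact sketch (α and p are illustrative numbers, not values from the paper) computes them and checks that beliefs after each signal sum to one:

```python
# Bayes' rule behind the four displayed posteriors.
def posteriors(alpha, p):
    """Type-alpha Player 2's beliefs after each signal, given
    Prob(Player 1 plays B) = p."""
    prob_b = alpha * p + (1 - alpha) * (1 - p)    # total prob of signal b
    return {
        ("B", "b"): alpha * p / prob_b,
        ("NB", "b"): (1 - alpha) * (1 - p) / prob_b,
        ("B", "nb"): (1 - alpha) * p / (1 - prob_b),
        ("NB", "nb"): alpha * (1 - p) / (1 - prob_b),
    }

post = posteriors(0.8, 0.3)
assert abs(post[("B", "b")] + post[("NB", "b")] - 1.0) < 1e-12
assert abs(post[("B", "nb")] + post[("NB", "nb")] - 1.0) < 1e-12
# A perfectly precise IS reveals the action:
assert posteriors(1.0, 0.3)[("B", "b")] == 1.0
```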

Let Π2(A | α, b) be the expected payoff of Player 2 of type α if the signal is b and she attacks Player 1. Then

    Π2(A | α, b) = Prob2(B | α, b) u2(B, A) + Prob2(NB | α, b) u2(NB, A)
                 = (αp w2 + (1 − α)(1 − p) r2) / (αp + (1 − α)(1 − p)).       (1)

Similarly,

    Π2(A | α, nb) = ((1 − α)p w2 + α(1 − p) r2) / ((1 − α)p + α(1 − p)),      (2)
    Π2(NA | α, b) = (1 − α)(1 − p) / (αp + (1 − α)(1 − p)),                   (3)
    Π2(NA | α, nb) = α(1 − p) / ((1 − α)p + α(1 − p)).                        (4)

Given p, by (1) and (3), Player 2 of type α who receives the signal b prefers A to NA iff

    αp w2 + (1 − α)(1 − p) r2 > (1 − α)(1 − p).

This is equivalent to

    α > (1 − p)(1 − r2) / (p w2 + (1 − p)(1 − r2)) ≡ λ(p).                    (5)

That is, if α > λ(p), Player 2 of type α who receives the signal b will choose A. She will choose NA if α < λ(p). Player 2 of type α = λ(p) is indifferent between choosing A and NA. Notice that λ(p) is decreasing in p and

    1/2 ≤ λ(p) ≤ 1   iff   0 ≤ p ≤ (1 − r2) / (1 − r2 + w2).

Similarly, Player 2 of type α who receives the signal nb prefers A to NA iff

    α(1 − p) r2 + (1 − α)p w2 > α(1 − p),

or equivalently

    α < p w2 / (p w2 + (1 − p)(1 − r2)) = 1 − λ(p).                           (6)
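The cutoff λ(p) of (5) can be sanity-checked against the raw payoff comparison it summarizes. A short sketch with hypothetical parameters:

```python
# Hypothetical Player 2 payoffs (not from the paper).
r2, w2 = 0.3, 0.6

def lam(p):
    """Cutoff of (5): after signal b, type alpha prefers A iff alpha > lam(p)."""
    return (1 - p) * (1 - r2) / (p * w2 + (1 - p) * (1 - r2))

def mu(p):
    """Cutoff of (6): after signal nb, attack iff alpha < mu(p) = 1 - lam(p)."""
    return 1 - lam(p)

# The cutoff reproduces the payoff comparison (1) > (3) after signal b:
for a in (0.55, 0.7, 0.9):
    for p in (0.2, 0.5, 0.8):
        attack = a * p * w2 + (1 - a) * (1 - p) * r2
        no_attack = (1 - a) * (1 - p)
        assert (attack > no_attack) == (a > lam(p))
```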

We can now write the best-reply strategy of Player 2 of type α as a function of the signal she receives:

    s2(b | α, p) = A               if p > (1 − r2)/(1 − r2 + w2),  1/2 < α ≤ 1
                   NA              if p ≤ (1 − r2)/(1 − r2 + w2),  1/2 < α < λ(p)
                   A               if p ≤ (1 − r2)/(1 − r2 + w2),  λ(p) < α ≤ 1        (7)
                   any strategy    if p ≤ (1 − r2)/(1 − r2 + w2),  α = λ(p)

Let μ(p) = 1 − λ(p). Then by (2) and (4),

    s2(nb | α, p) = NA             if p < (1 − r2)/(1 − r2 + w2),  1/2 < α ≤ 1
                    A              if p ≥ (1 − r2)/(1 − r2 + w2),  1/2 < α < μ(p)
                    NA             if p ≥ (1 − r2)/(1 − r2 + w2),  μ(p) < α ≤ 1        (8)
                    any strategy   if p ≥ (1 − r2)/(1 − r2 + w2),  α = μ(p)

Let E(α) = ∫ α f(α) dα (the integral taken over [1/2, 1]) be the expected value of α. Namely, E(α) is the expected quality of the IS from the perspective of the uninformed Player 1.


Proposition 4 Suppose that E(α) ≠ ᾱ. Then the game has a unique perfect Bayesian equilibrium. (1) If E(α) < ᾱ, there exists p1, (1 − r2)/(1 − r2 + w2) < p1 < 1, such that Player 1 builds the bomb with probability p1. If the signal is b, Player 2 attacks Player 1 irrespective of the precision α of the IS. If the signal is nb, Player 2 attacks Player 1 if α < μ(p1) and does not attack if α > μ(p1). (2) If E(α) > ᾱ, there exists p2, 0 < p2 < (1 − r2)/(1 − r2 + w2), such that Player 1 builds the bomb with probability p2. If the signal is b, Player 2 attacks Player 1 if α > λ(p2) and does not attack Player 1 if α < λ(p2). If the signal is nb, Player 2 does not attack Player 1 irrespective of α.

Proof Appears in the Appendix.

Unlike the case where α is commonly known, the equilibrium strategy of Player 2 is a pure action (attack or not attack with probability 1) as a function of α. The action of Player 2 depends on both the expected and the actual precision of the IS. If the expected precision of the IS does not exceed ᾱ, Player 2, following the signal b, will attack Player 1 irrespective of the actual precision. Furthermore, Player 2 will attack Player 1 even if she receives the signal nb, provided the actual precision α is relatively small (α < μ(p1)); otherwise she will not attack Player 1. If, on the other hand, the expected precision of the IS exceeds ᾱ, Player 2, following the signal b, will attack Player 1 if the actual precision is relatively high (α > λ(p2)) and will not attack Player 1 otherwise. This result is quite consistent with the case where α is commonly known, where the actual precision and the expected precision are the same. If it does not exceed ᾱ, Player 2, following the signal b, will attack Player 1 with probability 1. If it exceeds ᾱ, in the complete information case Player 2 following b mixes her two actions A and NA, while in the incomplete information case the mixing is done via the various types of Player 2: she attacks iff α > λ(p2).


Next let us analyze the expected payoffs of the two players. Let π2(α) be the expected payoff of Player 2 when the precision of the IS is α. We show that π2(α) does not depend on α for relatively small α and thereafter is strictly increasing.

Proposition 5 (1) Suppose that E(α) < ᾱ. Then for all α in the interval (1/2, μ(p1)), Player 2 attacks Player 1 irrespective of the signal sent by the IS, and π2(α) remains unchanged on this interval. On the other hand, for all α in (μ(p1), 1), π2(α) is strictly increasing. (2) Suppose that E(α) > ᾱ. Then for all α in the interval (1/2, λ(p2)), Player 2 does not attack Player 1 irrespective of the signal sent by the IS, and π2(α) remains unchanged on this interval. For all α in (λ(p2), 1), π2(α) is strictly increasing.

Proof See Appendix.

Proposition 5 asserts that Player 2 cannot be hurt by a more precise IS and strictly benefits from high levels of α. This is quite consistent with the complete information case. In case E(α) < ᾱ, Player 1 builds a bomb with relatively high probability (p1 > (1 − r2)/(1 − r2 + w2)) and induces Player 2 to attack him. Hence, if the precision of the IS is relatively low (α < μ(p1)), Player 2 will ignore the signal of the IS and attack Player 1. However, if the precision of the IS is relatively high (α > μ(p1)), Player 2 will refrain from attacking Player 1 if she obtains the signal nb, but will attack Player 1 if she obtains the signal b. Similarly, if E(α) > ᾱ then with high probability Player 1 does not build the bomb (p2 < (1 − r2)/(1 − r2 + w2)), and hence Player 2 will ignore the signal b if the precision is relatively low (α < λ(p2)). However, if the precision is relatively high (α > λ(p2)), Player 2 will attack Player 1 if she obtains the signal b and will not attack Player 1 if she obtains the signal nb. While in the complete information case Player 1 is better off the higher the precision of the IS (Proposition 1, above), this is not always so in the incomplete information case, as stated in the next proposition.


For the analysis of the ex-post payoff π1(α) of Player 1, we consider the special case where w1 = w2 and r1 = r2. The analysis of the general case is complicated.

Proposition 6 Suppose that r1 = r2 and w1 = w2. (1) If E(α) > ᾱ, there exists λ, 1/2 < λ < 1, such that π1(α) remains unchanged for 1/2 < α < λ and π1(α) is strictly increasing for λ < α < 1. (2) If E(α) < ᾱ, there exists μ, 1/2 < μ < 1, such that π1(α) remains unchanged for 1/2 < α < μ and π1(α) is strictly decreasing for μ < α < 1.

Proof See Appendix.

Suppose that E(α) > ᾱ. In this case Player 1 assigns high probability to the event that the precision of the IS is high, and builds a bomb with low probability. Hence, for small values of α (1/2 < α < λ), Player 2 ignores the signal b and, irrespective of the signal, does not attack Player 1. In this case π1(α) remains unchanged. However, for higher values of α (λ < α < 1), Player 2 follows the signal of the IS and attacks Player 1 iff the signal is b. This case is consistent with the case where α is commonly known, as π1(α) is increasing in α. Here Player 1 correctly believes that with high probability the IS is highly accurate.

Suppose next that E(α) < ᾱ. In this case Player 1 builds a bomb with high probability. Hence, for small values of α (1/2 < α < μ), Player 2 ignores the signal nb and attacks Player 1, irrespective of the signal. Consequently π1(α) remains unchanged for all α, 1/2 < α < μ. For higher values of α (μ < α < 1), Player 2 relies on the IS and attacks Player 1 iff the signal is b. Here, the higher α is, the bigger the mistake of Player 1, who builds a bomb with high probability since he incorrectly believes, with high probability, that α is small. Consequently, the ex-post payoff π1(α) of Player 1 decreases with α, μ < α < 1.

Let us provide more details for this case. By Proposition 4, if E(α) < ᾱ then the probability that Player 1 builds a bomb is p1 and

    p1 > (1 − r2)/(1 + w2 − r2) = (1 − r1)/(1 + w1 − r1).

If α > μ(p1), Player 2 will follow the advice of the IS and will attack Player 1 iff the signal is b. Hence, with probability p1α, Player 1 builds a bomb and Player 2 attacks him. In addition, with probability (1 − p1)(1 − α), Player 1 does not build a bomb, the IS sends the wrong signal b, and again Player 1 is attacked. On the other hand, Player 1 is not attacked with probability p1(1 − α) (he builds a bomb but the IS sends the signal nb) plus (1 − p1)α (he does not build a bomb and the IS correctly detects it). Hence the expected payoff of Player 1 is

    π1(α) = p1 α · 0 + (1 − p1)(1 − α) r1 + p1(1 − α) · 1 + (1 − p1) α w1,

or

    π1(α) = −α [p1(1 + w1 − r1) − (w1 − r1)] + p1 + (1 − p1) r1,

and it is decreasing in α iff

    p1 > (w1 − r1)/(1 + w1 − r1).

The last inequality certainly holds since p1 > (1 − r1)/(1 + w1 − r1) and w1 < 1.
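The linearity of π1(α) and its sign condition are easy to verify numerically; a minimal sketch with hypothetical symmetric parameters and a p1 above the stated bound:

```python
# Hypothetical symmetric parameters (r1 = r2, w1 = w2, as in Proposition 6)
# and a p1 above the bound (1 - r1)/(1 + w1 - r1) ~ 0.538.
r1, w1 = 0.3, 0.6
p1 = 0.7

def pi1(alpha):
    """Ex-post payoff of Player 1 when Player 2 attacks iff the signal is b."""
    return (p1 * alpha * 0.0               # builds, detected, destroyed
            + (1 - p1) * (1 - alpha) * r1  # doesn't build, falsely attacked
            + p1 * (1 - alpha) * 1.0       # builds, missed by the IS
            + (1 - p1) * alpha * w1)       # doesn't build, correctly read

# pi1 is linear in alpha with slope -(p1 * (1 + w1 - r1) - (w1 - r1)):
slope = (pi1(0.9) - pi1(0.8)) / 0.1
assert abs(slope + (p1 * (1 + w1 - r1) - (w1 - r1))) < 1e-9
assert slope < 0   # decreasing, since p1 > (w1 - r1)/(1 + w1 - r1)
```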

References

1. Biran, D. (1991) "Distributed Information Systems Under Strategic Conflict", Ph.D. Thesis, Tel Aviv University.
2. Dresher, M. (1961) Some Military Applications of the Theory of Games, P-1849, RAND Corporation, Santa Monica; 597-604 in: Proceedings of the Second International Conference on Operations Research. New York: Wiley.
3. Dresher, M. (1968) Mathematical Models of Conflict, 228-239 in: E. Quade, Boucher, eds., Systems Analysis and Policy Planning: Applications in Defense. New York: Elsevier.
4. Finn, M. and G. Kent (1985) Simple Analytical Solutions to Complex Military Problems, N-2211, RAND Corporation, Santa Monica.
5. Kamien, M., Y. Tauman and S. Zamir (1990) "The Value of Information in a Strategic Conflict", Games and Economic Behavior, 2, 129-153.
6. Karr, A. (1981) Nationwide Defense Against Nuclear Weapons: Properties of Prim-Read Deployments, P-1395, Institute for Defense Analyses, Alexandria, VA.
7. Leonard, R. (1992) Creating a Context for Game Theory, in: R. Weintraub, ed., Towards a History of Game Theory. Durham: Duke University Press.
8. O'Neill, B. (1993) Operations Research and Strategic Nuclear War, in: International Military and Defense Encyclopedia. Pergamon-Brassey's.
9. O'Neill, B. (1994) Game Theory Models of Peace and War, in: Handbook of Game Theory, Volume 2, R.J. Aumann and S. Hart, eds. Elsevier Science B.V.
10. Read, T. (1957) Tactics and Deployment for Anti-Missile Defenses, Bell Telephone Laboratories, Whippany, N.J.
11. Read, T. (1961) Strategy for Active Defense, American Economic Review, 51: 465-471.
12. Shubik, M. (1983) Game Theory, the Language of Strategy, 1-28 in: M. Shubik, ed., Mathematics of Conflict. Amsterdam: Elsevier.
13. Shubik, M. (1987) The Uses, Value and Limitations of Game-Theoretic Methods in Defense Analysis, 53-84 in: C. Schmidt and F. Blackaby, eds., Peace, Defense and Economic Analysis. New York: St. Martin's Press.
14. Thomas, C. (1966) Some Past Applications of Game Theory to Problems of the United States Air Force, 250-267 in: A. Mensch, ed., Theory of Games: Techniques and Applications. New York: American Elsevier.


Appendix

Proof of Proposition 1 Observe first that Gα has no equilibrium in pure strategies (since 0 < α < 1, 0 < r1 < 1 and 0 < w1 < 1). Also, since α ≠ ᾱ, there is no equilibrium where one player plays a pure strategy while the other player plays a mixed strategy. We consider four cases.

Case 1 Player 2 mixes only the two pure strategies (NA, NA) and (NA, A) and assigns zero probability to the pure strategy (A, A). The resulting game is given in Figure 6.

                (NA, NA)    (NA, A)
    p     NB    w1 , 1      αw1 + (1−α)r1 , α + (1−α)r2
    1−p   B     1 , 0       1−α , αw2

Figure 6

This case is relevant only if α > ᾱ = (1 − r1)/(1 − r1 + w1). Indeed, if α < ᾱ then NB is strictly dominated by B, and Player 1 would not mix his two pure strategies but rather would choose B with probability 1, a contradiction¹. Let α > ᾱ and suppose that Player 1 chooses the mixed strategy (p, 1 − p), where 0 < p < 1. If Player 2 chooses (NA, A), she obtains (see Figure 6) an expected payoff of p(α + (1 − α)r2) + (1 − p)αw2, while her expected payoff is p if she chooses (NA, NA). Since Player 2 mixes these two pure strategies, she must obtain the same payoff whether she chooses (NA, A) or (NA, NA). Namely,

¹ This contradicts the fact that Gα does not have a pure strategy equilibrium.

The Role of Intelligence in Nuclear Deterrence

p = p (α + (1 − α ) r2 ) + (1 − p ) α w2

(1A)

Solving for p we obtain p* =

α w2 α w2 + (1 − α ) (1 − r2 )

(2A)

and the payoff of Player 2 is p * . It is left to check that Player 2 has no incentive to deviate to (A, A).

(

)

If she deviates to (A, A), she obtains (see Figure 4) r2 p * + w2 1 − p * . By (2A)

(

)

r2 p * + w2 1 − p * =

α w2 + w2 (1 − α ) (1 − r2 ) α w2 + (1 − α ) (1 − r2 )

(3A)

Player 2 has no incentive to deviate if

(

)

r2 p * + w2 1 − p * ≤ p *

By (2A) and (3A) this is equivalent to

(1 − 2 α ) (1 − r2 ) ≤ 0 which definitely holds since r2 < 1 and α >

1 . 2
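As a quick numeric sanity check of (1A)–(3A), the following sketch (with illustrative parameter values that are hypothetical, not from the paper) confirms that p* makes Player 2 indifferent between (NA, NA) and (NA, A), and that the deviation to (A, A) is unprofitable:

```python
# Numeric check of (1A)-(3A) for Case 1; r2, w2, alpha are illustrative,
# hypothetical values (requires alpha > 1/2 and 0 < r2 < w2 < 1).
r2, w2, alpha = 0.3, 0.6, 0.8

# p* from (2A): Player 1's probability of NB that makes Player 2 indifferent.
p_star = alpha * w2 / (alpha * w2 + (1 - alpha) * (1 - r2))

# Player 2's payoff from (NA, NA) is p; from (NA, A) it is
# p(alpha + (1 - alpha) r2) + (1 - p) alpha w2 -- equal at p = p*.
pay_na_na = p_star
pay_na_a = p_star * (alpha + (1 - alpha) * r2) + (1 - p_star) * alpha * w2
assert abs(pay_na_na - pay_na_a) < 1e-12

# Deviation to (A, A), computed as in (3A), must not be profitable.
pay_a_a = r2 * p_star + w2 * (1 - p_star)
assert pay_a_a <= pay_na_na
```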

Suppose next that Player 2 mixes her two pure strategies (NA, NA) and (NA, A) with probabilities q and 1 − q respectively, where 0 < q < 1. If Player 1 chooses the strategy NB, he obtains (see Figure 4)

q w1 + (1 − q)(α w1 + (1 − α) r1)

and if he chooses B he obtains q + (1 − q)(1 − α). Since 0 < p* < 1, it must be that

q w1 + (1 − q)(α w1 + (1 − α) r1) = q + (1 − q)(1 − α)

Solving for q we have

q* = (α (1 − r1 + w1) − (1 − r1)) / (α (1 − r1 + w1) − (w1 − r1))

Since α > ᾱ it follows that 0 < q* < 1. We summarize the above in the following lemma.

Lemma 1  Suppose that ᾱ < α < 1. Then the game Gα has the following equilibrium point: Player 1 plays (p*, 1 − p*) and Player 2 plays (q*, 1 − q*, 0, 0), where

p* = α w2 / (α w2 + (1 − α)(1 − r2))

q* = (α (1 − r1 + w1) − (1 − r1)) / (α (1 − r1 + w1) − (w1 − r1))

Player 1's expected payoff is

Π1* = ((2α − 1) w1 + (1 − α) r1) / (α (1 − r1 + w1) − (w1 − r1))

Player 2's expected payoff is

Π2* = p* = α w2 / (α w2 + (1 − α)(1 − r2))

Also, the expected payoffs of both players increase in α.

Case 2  Player 2 mixes only the two pure strategies (NA, NA) and (A, A) and assigns zero

probability to (NA, A). The resulting game is given in Figure 7.

                    Player 2
              (NA, NA)          (A, A)
 Player 1
  p    NB     w1 , 1            r1 , r2
 1−p   B      1 , 0             0 , w2

                    Figure 7

Again, suppose that Player 1 mixes his two pure strategies NB and B and plays (p, 1 − p). If Player 2 chooses (NA, NA), she obtains p, and if she chooses (A, A) she obtains p r2 + (1 − p) w2. Since Player 2 mixes these two strategies,

p = p r2 + (1 − p) w2

Solving for p we obtain

p̃ = w2 / (1 − r2 + w2)        (4A)

and the expected payoff of Player 2 is p̃. If Player 2 deviates to (NA, A), she obtains (see Figure 4)

p̃ [α + (1 − α) r2] + (1 − p̃) α w2 .

By (4A) this payoff is

π2 = (2α − 2α r2 + r2) w2 / (1 − r2 + w2)        (5A)

By (4A) and (5A) she benefits from this deviation iff

2α − 2α r2 + r2 > 1,

or equivalently iff (2α − 1)(1 − r2) > 0. This certainly holds since α > 1/2 and r2 < 1. Hence, Player 2 has an incentive to deviate to (NA, A). We conclude that there is no equilibrium of Gα where Player 2 mixes her two pure strategies (NA, NA) and (A, A) only.

Case 3  Player 2 only mixes her two pure strategies (NA, A) and (A, A).

                    Player 2
              (NA, A)                                   (A, A)
 Player 1
  p    NB     α w1 + (1 − α) r1 ,  α + (1 − α) r2       r1 , r2
 1−p   B      1 − α ,  α w2                             0 , w2

                    Figure 8

This case is relevant only if α < ᾱ = (1 − r1)/(1 − r1 + w1). If α > ᾱ, then B is strictly dominated by NB and Player 1 will rather play NB with probability 1. In this case Player 2's best reply is (NA, A), also with probability 1, contradicting the fact that Gα has no pure strategy equilibrium. Assume therefore that α < ᾱ. If Player 2 chooses (NA, A), she obtains

p [α + (1 − α) r2] + (1 − p) α w2

If she chooses to play (A, A), she obtains

p r2 + (1 − p) w2 .

Hence, in equilibrium,

p [α + (1 − α) r2] + (1 − p) α w2 = p r2 + (1 − p) w2

Solving for p we obtain

p̂ = (1 − α) w2 / ((1 − α) w2 + α (1 − r2))        (6A)

Player 2 obtains

Π̂2 = r2 p̂ + w2 (1 − p̂)

By (6A),

Π̂2 = ((1 − α) w2 r2 + α (1 − r2) w2) / ((1 − α) w2 + α (1 − r2))        (7A)

If Player 2 deviates to (NA, NA), she obtains (see Figure 4) p̂. By (6A) and (7A) Player 2 will not deviate iff

(1 − α) w2 r2 + α w2 (1 − r2) ≥ (1 − α) w2

Equivalently,

α w2 (1 − r2) ≥ (1 − α) w2 (1 − r2),

which certainly holds since r2 < 1 and α > 1/2. Therefore, Player 2 has no incentive to deviate. Let us compute the mixed strategy equilibrium (0, q, 0, 1 − q) of Player 2. Note that if Player 1 chooses the strategy NB he obtains

q [α w1 + (1 − α) r1] + (1 − q) r1

and if he chooses the strategy B he obtains (1 − α) q. Hence, in equilibrium,

q [α w1 + (1 − α) r1] + (1 − q) r1 = (1 − α) q

Solving for q we have

q̂ = r1 / (1 − α (1 + w1 − r1))

and 0 < q̂ < 1 since α < ᾱ. Also, Player 1 obtains

Π̂1 = (1 − α) r1 / (1 − α (1 + w1 − r1))
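A short numeric sketch (with illustrative, hypothetical parameter values satisfying 1/2 < α < ᾱ) verifies the Case 3 equilibrium (p̂, q̂) above: Player 2 is indifferent between (NA, A) and (A, A), gains nothing from (NA, NA), and Player 1 is indifferent between NB and B:

```python
# Numeric check of the Case 3 equilibrium; parameters are illustrative,
# hypothetical values (not from the paper), chosen so 1/2 < alpha < ᾱ.
r1, w1, r2, w2 = 0.2, 0.5, 0.3, 0.6
alpha = 0.55
alpha_bar = (1 - r1) / (1 - r1 + w1)       # ᾱ ≈ 0.615, so alpha < ᾱ holds
assert 0.5 < alpha < alpha_bar

p_hat = (1 - alpha) * w2 / ((1 - alpha) * w2 + alpha * (1 - r2))   # (6A)
q_hat = r1 / (1 - alpha * (1 + w1 - r1))

# Player 2 is indifferent between (NA, A) and (A, A) at p̂ ...
pay_na_a = p_hat * (alpha + (1 - alpha) * r2) + (1 - p_hat) * alpha * w2
pay_a_a = p_hat * r2 + (1 - p_hat) * w2
assert abs(pay_na_a - pay_a_a) < 1e-12
# ... and deviating to (NA, NA), worth p̂, is not profitable.
assert p_hat <= pay_a_a

# Player 1 is indifferent between NB and B against (0, q̂, 0, 1 - q̂).
pay_nb = q_hat * (alpha * w1 + (1 - alpha) * r1) + (1 - q_hat) * r1
pay_b = (1 - alpha) * q_hat
assert abs(pay_nb - pay_b) < 1e-12
```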

We summarize the above in the next lemma.

Lemma 2  Suppose that 1/2 < α < ᾱ. Then the game Gα has the following equilibrium point: Player 1's strategy is (p̂, 1 − p̂) and Player 2's strategy is (0, q̂, 0, 1 − q̂), where

p̂ = (1 − α) w2 / ((1 − α) w2 + α (1 − r2))

q̂ = r1 / (1 − α (1 + w1 − r1))

The expected payoffs of the two players are

Π̂1 = (1 − α) r1 / (1 − α (1 + w1 − r1))

Π̂2 = ((1 − α) w2 r2 + α w2 (1 − r2)) / ((1 − α) w2 + α (1 − r2))

and both Π̂1 and Π̂2 increase in α in the region 1/2 < α < ᾱ.

Case 4  Player 2 mixes the three strategies (NA, NA), (NA, A) and (A, A). By Cases 1 and 3 we must have that p* = p̂, where p* and p̂ are given by (2A) and (6A). It can easily be verified that p* = p̂ iff α = 1/2. Under the assumption that α > 1/2 we have no equilibrium in this case.

The proof of Proposition 1 is thus complete. ∎

Proof of Proposition 2  Consider first the case 1/2 < α < ᾱ. By Lemma 2, Player 2 attacks Player 1 with probability 1 if the signal is b, and with probability 1 − q̂ if the signal is nb. Hence the probability that Player 1 is attacked is

Prob(A) = Prob(b) + (1 − q̂) Prob(nb)

Figure 9 depicts the signal probabilities: Player 1 plays NB with probability p̂, and the IS then sends nb with probability α (total probability p̂ α) and b with probability 1 − α (total probability p̂ (1 − α)); Player 1 plays B with probability 1 − p̂, and the IS then sends nb with probability 1 − α (total probability (1 − p̂)(1 − α)) and b with probability α (total probability (1 − p̂) α).

Using Figure 9 we have

Prob(b) = p̂ (1 − α) + (1 − p̂) α = ((1 − α)² w2 + α² (1 − r2)) / ((1 − α) w2 + α (1 − r2))

and

Prob(nb) = p̂ α + (1 − p̂)(1 − α) = α (1 − α)(1 + w2 − r2) / ((1 − α) w2 + α (1 − r2))

Since

1 − q̂ = (1 − r1 − α (1 + w1 − r1)) / (1 − α (1 + w1 − r1)),

we have

Prob(A) = ((1 − α)² w2 + α² (1 − r2)) / ((1 − α) w2 + α (1 − r2))
        + α (1 − α)(1 + w2 − r2)(1 − r1 − α (1 + w1 − r1)) / ([(1 − α) w2 + α (1 − r2)][1 − α (1 + w1 − r1)])

It can be shown (after rearranging terms) that

Prob(A) = 1 − α (1 − α) r1 (1 + w2 − r2) / ([1 − α (1 + w1 − r1)][w2 + α (1 − w2 − r2)])
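The rearrangement into the closed form above can be spot-checked numerically; the sketch below (illustrative, hypothetical parameter values with 1/2 < α < ᾱ) compares the two expressions for Prob(A):

```python
# Check of the closed form for Prob(A) in the region 1/2 < alpha < ᾱ,
# with illustrative, hypothetical parameters (not from the paper).
r1, w1, r2, w2, alpha = 0.2, 0.5, 0.3, 0.6, 0.55   # alpha < ᾱ ≈ 0.615

p_hat = (1 - alpha) * w2 / ((1 - alpha) * w2 + alpha * (1 - r2))
q_hat = r1 / (1 - alpha * (1 + w1 - r1))

prob_b = p_hat * (1 - alpha) + (1 - p_hat) * alpha
prob_nb = p_hat * alpha + (1 - p_hat) * (1 - alpha)
# Attack surely on signal b, with probability 1 - q̂ on signal nb.
prob_attack = prob_b + (1 - q_hat) * prob_nb

closed_form = 1 - (alpha * (1 - alpha) * r1 * (1 + w2 - r2)
                   / ((1 - alpha * (1 + w1 - r1))
                      * (w2 + alpha * (1 - w2 - r2))))
assert abs(prob_attack - closed_form) < 1e-12
```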

It is therefore sufficient to prove that

f(α) ≡ α (1 − α) / ((1 − α a)(w2 + α b))

increases in α, where a = 1 + w1 − r1 and b = 1 − w2 − r2. It can be verified that f′(α) > 0 iff

− α² b + α² a b + α² a w2 + w2 − 2α w2 > 0,

equivalently,

L ≡ α² b (a − 1) + w2 (α² a − 2α + 1) > 0

Since a > 1,

L > w2 (α² − 2α + 1) = w2 (1 − α)² ≥ 0

and hence f′(α) > 0 as claimed.

Next assume that ᾱ < α < 1. By Lemma 1 and Figure 9 (replacing p̂ by p*),

Prob(A) = Prob(A | b) Prob(b) = (1 − q*)[p* (1 − α) + (1 − p*) α]
        = (1 − w1)(1 + w2 − r2) α (1 − α) / ([α (1 − r1 + w1) − (w1 − r1)][α w2 + (1 − α)(1 − r2)])

We need to prove that

g(α) ≡ α (1 − α) / ((α A − B)(− α C + D))

decreases in α, where A = 1 − r1 + w1, B = w1 − r1, C = 1 − w2 − r2, and D = 1 − r2. It can be verified that g′(α) < 0 iff

− α² w2 − (1 − α)² (1 − r2)(w1 − r1) < 0 .

Thus g′(α) < 0 as claimed. ∎
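Proposition 2's monotonicity claim — that the equilibrium probability of an attack falls as the IS precision rises, on both sides of ᾱ — can be spot-checked numerically. The sketch below uses illustrative, hypothetical parameter values (not from the paper):

```python
# Numeric spot-check that Prob(A) decreases in alpha on both sides of ᾱ,
# under illustrative, hypothetical parameters (not from the paper).
r1, w1, r2, w2 = 0.2, 0.5, 0.3, 0.6
alpha_bar = (1 - r1) / (1 - r1 + w1)       # ᾱ ≈ 0.615

def prob_attack(alpha):
    if alpha < alpha_bar:                  # Lemma 2 regime: attack on b, and on nb w.p. 1 - q̂
        p = (1 - alpha) * w2 / ((1 - alpha) * w2 + alpha * (1 - r2))
        q = r1 / (1 - alpha * (1 + w1 - r1))
        prob_b = p * (1 - alpha) + (1 - p) * alpha
        return prob_b + (1 - q) * (1 - prob_b)
    # Lemma 1 regime: attack only on signal b, with probability 1 - q*
    p = alpha * w2 / (alpha * w2 + (1 - alpha) * (1 - r2))
    q = ((alpha * (1 - r1 + w1) - (1 - r1))
         / (alpha * (1 - r1 + w1) - (w1 - r1)))
    return (1 - q) * (p * (1 - alpha) + (1 - p) * alpha)

low = [prob_attack(0.51 + 0.01 * k) for k in range(10)]    # grid below ᾱ
high = [prob_attack(0.63 + 0.03 * k) for k in range(12)]   # grid above ᾱ
assert all(a > b for a, b in zip(low, low[1:]))            # decreasing
assert all(a > b for a, b in zip(high, high[1:]))          # decreasing
```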

Proof of Proposition 3  The equilibrium strategies of the players in the game described in Figure 2, where there is no use of an IS, are

p̃ = w2 / (1 − r2 + w2)

q̃ = r1 / (1 + r1 − w1)

Namely, Player 1 mixes his two pure strategies NB and B with probabilities p̃ and 1 − p̃, respectively, and Player 2 mixes her two pure strategies NA and A with probabilities q̃ and 1 − q̃, respectively. The equilibrium payoffs of the two players are

Π̃1 = r1 / (1 + r1 − w1)

Π̃2 = w2 / (1 − r2 + w2)

It is straightforward to show that Π1* > Π̃1 and Π̂1 > Π̃1, and also Π2* > Π̃2 and Π̂2 > Π̃2. ∎
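The comparison with the no-IS benchmark can be illustrated numerically; the sketch below (illustrative, hypothetical parameter values) checks that both players' equilibrium payoffs exceed Π̃1 and Π̃2 in both precision regions:

```python
# Numeric check that both players do better with the IS than without it,
# in both precision regions; parameters are illustrative and hypothetical.
r1, w1, r2, w2 = 0.2, 0.5, 0.3, 0.6
alpha_bar = (1 - r1) / (1 - r1 + w1)

# No-IS equilibrium payoffs (Proposition 3):
pi1_no_is = r1 / (1 + r1 - w1)
pi2_no_is = w2 / (1 - r2 + w2)

for alpha in (0.55, 0.8):                  # one value below ᾱ, one above
    if alpha > alpha_bar:                  # Lemma 1 payoffs
        pi1 = (((2 * alpha - 1) * w1 + (1 - alpha) * r1)
               / (alpha * (1 - r1 + w1) - (w1 - r1)))
        pi2 = alpha * w2 / (alpha * w2 + (1 - alpha) * (1 - r2))
    else:                                  # Lemma 2 payoffs
        pi1 = (1 - alpha) * r1 / (1 - alpha * (1 + w1 - r1))
        pi2 = (((1 - alpha) * w2 * r2 + alpha * w2 * (1 - r2))
               / ((1 - alpha) * w2 + alpha * (1 - r2)))
    assert pi1 > pi1_no_is and pi2 > pi2_no_is
```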

Proof of Proposition 4

(1)  Suppose that E(α) < ᾱ, where ᾱ = (1 − r1)/(1 − r1 + w1). Consider first the case where Player 1 chooses B with probability p, (1 − r2)/(1 − r2 + w2) < p < 1. In this case μ(p) > 1/2 (this follows by (6) and since μ(p) = 1 − λ(p)). By (7), if the signal is b, Player 2 of any type α, 1/2 < α ≤ 1, plays A. By (8), if the signal is nb, Player 2 plays NA if μ(p) < α ≤ 1 and A if 1/2 < α < μ(p). Let EαΠ1(p) be the expected payoff of Player 1 if he plays the mixed strategy (p, 1 − p). In this case B is selected with probability p, the signal b is observed with probability α and the signal nb is observed with probability 1 − α. Hence

EαΠ1(p) = p [ ∫_{1/2}^{1} α u1(B, A) f(α) dα + ∫_{1/2}^{μ(p)} (1 − α) u1(B, A) f(α) dα + ∫_{μ(p)}^{1} (1 − α) u1(B, NA) f(α) dα ]
        + (1 − p) [ ∫_{1/2}^{μ(p)} α u1(NB, A) f(α) dα + ∫_{μ(p)}^{1} α u1(NB, NA) f(α) dα + ∫_{1/2}^{1} (1 − α) u1(NB, A) f(α) dα ]

Since u1(B, A) = 0, u1(B, NA) = 1, u1(NB, A) = r1 and u1(NB, NA) = w1,

EαΠ1(p) = p ∫_{μ(p)}^{1} (1 − α) f(α) dα + (1 − p) [ r1 ∫_{1/2}^{1} α f(α) dα + (w1 − r1) ∫_{μ(p)}^{1} α f(α) dα + r1 ∫_{1/2}^{1} (1 − α) f(α) dα ]

Since ∫_{1/2}^{1} f(α) dα = 1,

EαΠ1(p) = p ∫_{μ(p)}^{1} (1 − α) f(α) dα + (1 − p) [ r1 + (w1 − r1) ∫_{μ(p)}^{1} α f(α) dα ]        (8A)
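The algebraic reduction to (8A) can be verified numerically. The sketch below assumes, purely for illustration, a uniform density f(α) = 2 on (1/2, 1] and an arbitrary threshold x standing in for μ(p); both sides then agree for any p:

```python
# Check of the reduction to (8A) with f uniform on (1/2, 1] (f = 2,
# an illustrative, hypothetical choice of density).
def int_af(lo, hi):        # closed form of ∫ α f(α) dα with f = 2
    return hi**2 - lo**2

def int_1maf(lo, hi):      # closed form of ∫ (1 − α) f(α) dα with f = 2
    return 2 * (hi - lo) - (hi**2 - lo**2)

r1, w1 = 0.2, 0.5          # illustrative, hypothetical payoffs
for p in (0.1, 0.5, 0.9):
    for x in (0.55, 0.7, 0.95):   # x plays the role of the threshold μ(p)
        long_form = (p * int_1maf(x, 1.0)
                     + (1 - p) * (r1 * int_af(0.5, x)
                                  + w1 * int_af(x, 1.0)
                                  + r1 * int_1maf(0.5, 1.0)))
        eq_8a = (p * int_1maf(x, 1.0)
                 + (1 - p) * (r1 + (w1 - r1) * int_af(x, 1.0)))
        assert abs(long_form - eq_8a) < 1e-12
```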



Note that Player 2 of any type α observes neither the mixed strategy played by Player 1 nor his actual action. She only observes the signal sent by the IS. Hence, if Player 1 unilaterally deviates from his mixed strategy (p, 1 − p) to any other strategy, the action of Player 2 (as a function of her type α and the signal observed) does not change. In equilibrium Player 1 should be indifferent between playing (p, 1 − p) and playing either one of his pure strategies, since 0 < p < 1. In particular, if Player 1 deviates to B (i.e., p = 1),

EαΠ1(p) = EαΠ1(1)        (9A)

By (8A),

EαΠ1(1) = ∫_{μ(p)}^{1} (1 − α) f(α) dα        (10A)

Hence by (8A), (9A) and (10A),

∫_{μ(p)}^{1} (1 − α) f(α) dα = p ∫_{μ(p)}^{1} (1 − α) f(α) dα + (1 − p) [ r1 + (w1 − r1) ∫_{μ(p)}^{1} α f(α) dα ]

or equivalently,

∫_{μ(p)}^{1} (1 − α) f(α) dα = r1 + (w1 − r1) ∫_{μ(p)}^{1} α f(α) dα        (11A)

Let

g(x) ≡ ∫_{x}^{1} (1 − α) f(α) dα − r1 − (w1 − r1) ∫_{x}^{1} α f(α) dα

be defined for all 1/2 ≤ x ≤ 1. Since f(α) is continuous in α, g(x) is continuous (and differentiable) in x. Also,

g(1/2) = 1 − r1 − (1 − r1 + w1) E(α)

By our assumption E(α) < (1 − r1)/(1 − r1 + w1). Hence g(1/2) > 0. Since g(1) = − r1 < 0, by the Intermediate Value Theorem there is x, 1/2 < x < 1, such that g(x) = 0.

Next,

g′(x) = − (1 − x) f(x) + (w1 − r1) x f(x) = − f(x) + (1 − r1 + w1) x f(x)

Since f(x) > 0, g′(x) > 0 iff

x > 1/(1 − r1 + w1).

Hence g decreases for 1/2 ≤ x < 1/(1 − r1 + w1) and increases for 1/(1 − r1 + w1) < x ≤ 1. Since g(1/2) > 0 and g(1) < 0, g intersects the x-axis only once. Namely, there is a unique x such that g(x) = 0 and 1/2 < x < 1. It is easy to verify that μ(p) increases in p, μ(0) = 0 and μ(1) = 1. Thus there exists a unique 0 < p1 < 1 such that x = μ(p1). This implies that p1 is the unique solution of (11A). Also, since x > 1/2, we have μ(p1) > 1/2 and p1 > (1 − r2)/(1 − r2 + w2), which is consistent with our assumption.

Next observe that there is no equilibrium strategy (p, 1 − p) such that p ≤ (1 − r2)/(1 − r2 + w2) while E(α) < (1 − r1)/(1 − r1 + w1). Otherwise μ(p) ≤ 1/2 and (11A) should be replaced by

∫_{1/2}^{1} (1 − α) f(α) dα = r1 + (w1 − r1) ∫_{1/2}^{1} α f(α) dα

This implies that

1 − E(α) = r1 + (w1 − r1) E(α),

or

E(α) = (1 − r1)/(1 − r1 + w1),

a contradiction. We conclude that whenever E(α) < (1 − r1)/(1 − r1 + w1) there exists a unique equilibrium, where Player 1 chooses to develop the bomb (B) with probability p1, p1 > (1 − r2)/(1 − r2 + w2), and Player 2 takes the action described in (7) or (8). Namely, if Player 2 observes the signal b, she attacks Player 1 irrespective of her type α. If the signal is nb, Player 2 attacks Player 1 iff 1/2 < α < μ(p1).
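The root of g pinning down μ(p1) can be located numerically. The sketch below assumes, for illustration only, a uniform density f = 2 on (1/2, 1] (so E(α) = 3/4) and hypothetical r1, w1 chosen so that E(α) < ᾱ, as part (1) requires:

```python
# Locating the unique root of g from part (1); f = 2 on (1/2, 1] and
# r1, w1 are illustrative, hypothetical values with E(α) = 3/4 < ᾱ.
r1, w1 = 0.1, 0.2
alpha_bar = (1 - r1) / (1 - r1 + w1)       # ᾱ ≈ 0.818 > 3/4 = E(α)

def g(x):
    # g(x) = ∫_x^1 (1 − α) f dα − r1 − (w1 − r1) ∫_x^1 α f dα, with f = 2
    return (1 - x) ** 2 - r1 - (w1 - r1) * (1 - x ** 2)

assert g(0.5) > 0 and g(1.0) < 0           # sign pattern as in the proof
lo, hi = 0.5, 1.0
for _ in range(60):                        # bisection on the single crossing
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
x_root = (lo + hi) / 2                     # the unique x with g(x) = 0; μ(p1) = x_root
assert abs(g(x_root)) < 1e-9
assert 0.5 < x_root < 1.0
```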

(2)  Suppose next that E(α) > (1 − r1)/(1 − r1 + w1). Consider the case where Player 1 chooses B with probability p, 0 < p < (1 − r2)/(1 − r2 + w2). In this case λ(p) > 1/2. Similarly to the previous case,

EαΠ1(p) = p [ ∫_{1/2}^{λ(p)} α u1(B, NA) f(α) dα + ∫_{λ(p)}^{1} α u1(B, A) f(α) dα + ∫_{1/2}^{1} (1 − α) u1(B, NA) f(α) dα ]
        + (1 − p) [ ∫_{1/2}^{1} α u1(NB, NA) f(α) dα + ∫_{1/2}^{λ(p)} (1 − α) u1(NB, NA) f(α) dα + ∫_{λ(p)}^{1} (1 − α) u1(NB, A) f(α) dα ]

Since u1(B, A) = 0, u1(B, NA) = 1, u1(NB, A) = r1 and u1(NB, NA) = w1,

EαΠ1(p) = p [ 1 − ∫_{λ(p)}^{1} α f(α) dα ] + (1 − p) [ (w1 − r1) ∫_{λ(p)}^{1} α f(α) dα + r1 + (w1 − r1) ∫_{1/2}^{λ(p)} f(α) dα ]        (12A)

In equilibrium, where 0 < p < 1, we have

EαΠ1(p) = EαΠ1(1)        (13A)

By (12A) and (13A) we have

(1 + w1 − r1) ∫_{λ(p)}^{1} α f(α) dα = 1 − r1 − (w1 − r1) ∫_{1/2}^{λ(p)} f(α) dα        (14A)

Let

m(x) ≡ (1 + w1 − r1) ∫_{x}^{1} α f(α) dα − (1 − r1) + (w1 − r1) ∫_{1/2}^{x} f(α) dα

be defined for all 1/2 ≤ x ≤ 1. Clearly m(x) is continuous and differentiable. By our assumption,

m(1/2) = (1 + w1 − r1) E(α) − (1 − r1) > 0

Also,

m(1) = − (1 − r1) + w1 − r1 = − (1 − w1) < 0

In addition,

m′(x) = − (1 + w1 − r1) x f(x) + (w1 − r1) f(x)

Since x ≥ 1/2,

m′(x) ≤ f(x) [ − (1 + w1 − r1)/2 + w1 − r1 ] = − (1/2) f(x) (1 − w1 + r1) < 0

Consequently m(x) = 0 has a unique solution x2 in (1/2, 1). Since λ(0) = 1, λ(1) = 0 and λ′(p) < 0, there exists a unique p2 such that λ(p2) = x2, 0 < p2 < 1. Since x2 > 1/2, we have by (5) p2 < (1 − r2)/(1 − r2 + w2), which is consistent with our assumption.

Next, it is easy to verify (similarly to the previous case) that there is no equilibrium where p ≥ (1 − r2)/(1 − r2 + w2) while E(α) > (1 − r1)/(1 − r1 + w1). We conclude that whenever E(α) > (1 − r1)/(1 − r1 + w1) there exists a unique equilibrium. Player 1 plays the mixed strategy (p2, 1 − p2), 0 < p2 < (1 − r2)/(1 − r2 + w2). As for Player 2, if the IS sends the signal b, Player 2 will attack Player 1 iff α > λ(p2). If the signal is nb, Player 2 will not attack, irrespective of α.

Next, it is easy to verify that there is no equilibrium where Player 1 plays a pure strategy. Indeed, if Player 1 plays B, then the best reply strategy of Player 2 is to play A irrespective of the signal or of α. Hence, Player 1 obtains 0. If he deviates to NB and is attacked, he obtains r1 > 0. Similarly, if in equilibrium Player 1 plays NB, Player 2's best reply strategy is NA irrespective of α, and Player 1 obtains w1 < 1. If he deviates to B, he will obtain 1. Consequently, Player 1 is better off deviating from any one of his pure strategies. This completes the proof of the proposition. ∎
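The root x2 of m from part (2) can likewise be located numerically. The sketch below assumes, for illustration only, a uniform density f = 2 on (1/2, 1] (E(α) = 3/4) and hypothetical r1, w1 with E(α) > ᾱ:

```python
# Locating x2 from part (2); f = 2 on (1/2, 1] and r1, w1 are
# illustrative, hypothetical values chosen so that E(α) = 3/4 > ᾱ.
r1, w1 = 0.1, 0.9
alpha_bar = (1 - r1) / (1 - r1 + w1)       # ᾱ = 0.5 < 3/4 = E(α)

def m(x):
    # m(x) = (1+w1−r1) ∫_x^1 α f dα − (1−r1) + (w1−r1) ∫_{1/2}^x f dα, f = 2
    return (1 + w1 - r1) * (1 - x ** 2) - (1 - r1) + (w1 - r1) * 2 * (x - 0.5)

# m(1/2) = (1+w1−r1) E(α) − (1−r1) > 0 and m(1) = −(1−w1) < 0, as in the proof.
assert m(0.5) > 0 and m(1.0) < 0
lo, hi = 0.5, 1.0
for _ in range(60):                        # m is strictly decreasing, so bisect
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if m(mid) > 0 else (lo, mid)
x2 = (lo + hi) / 2                         # λ(p2) = x2 then pins down p2
assert abs(m(x2)) < 1e-9
assert 0.5 < x2 < 1.0
```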


Proof of Proposition 5  By (7) and (8) it is easy to verify that

Π2(α) =
  p1 w2 + (1 − p1) r2                              if E(α) < ᾱ and 1/2 < α < μ(p1)
  α [p1 w2 + (1 − p1)(1 − r2)] + (1 − p1) r2       if E(α) < ᾱ and μ(p1) < α < 1
  1 − p2                                           if E(α) > ᾱ and 1/2 < α < λ(p2)
  α [p2 w2 + (1 − p2)(1 − r2)] + (1 − p2) r2       if E(α) > ᾱ and λ(p2) < α < 1

and the proof follows immediately. ∎

Proof of Proposition 6  Let (1 − p*, p*) be the equilibrium strategy of Player 1. By (7), (8) and Proposition 4 it is easy to verify that

Π1(α) =
  (1 − p*) r1                                        if 1/2 < α < μ(p*)
  [w1 − r1 − p* (1 + w1 − r1)] α + p* (1 − r1) + r1   if μ(p*) < α < 1
  p* (1 − w1) + w1                                   if 1/2 < α < λ(p*)
  [w1 − r1 − p* (1 + w1 − r1)] α + p* (1 − r1) + r1   if λ(p*) < α < 1

If E(α) > ᾱ then by Proposition 4 p* = p2 < (1 − r1)/(1 − r1 + w1), and if E(α) < ᾱ then p* = p1 > (1 − r1)/(1 − r1 + w1). The proof follows immediately. ∎
