Timing Disclosure of Software Vulnerability for Optimal Social Welfare

Ashish Arora
Rahul Telang
Hao Xu

H. John Heinz III School of Public Policy and Management
Carnegie Mellon University, Pittsburgh PA 15213
Email: {ashish; rtelang; xhao}@andrew.cmu.edu

November 2003

Abstract

Information security breaches are usually associated with the exploitation of software vulnerabilities. There has been considerable debate over whether a known vulnerability should be disclosed to the public immediately after its discovery. While instant disclosure may push the vendor toward a quicker patch, a secrecy policy avoids the risk of supplying attackers with exploitable vulnerabilities. The timing of vulnerability disclosure has become a real and increasingly important policy question; however, the area lacks both theoretical and empirical research. Using a game-theoretic approach, our model studies how a social planner may time the disclosure of a vulnerability to minimize total social cost. We show that although early disclosure does prompt the vendor to patch more quickly, it is not necessarily optimal. Rather, the optimal disclosure time depends on various other factors as well; in the general case, neither instant disclosure nor a secrecy policy is optimal. We show how the stage in the product life cycle and vendor-side liability condition optimal disclosure. We then extend the model to answer two important questions facing the social planner: the impact on disclosure policy of uncertainty in patch development and of the diffusion of patching. This paper helps in understanding how disclosure policy may affect the vendor's decision and how the social planner should determine disclosure policy in response. The model may be reformulated as a decision tool for the social planner once sufficient empirical studies are available.

1. Introduction

Information security breaches pose a significant and increasing threat to national security and economic wellbeing. According to the Symantec Internet Security Threat Report (2003), during the last two quarters of 2002, companies in their dataset experienced an average of 30 attacks per week. These attacks often exploit software defects or vulnerabilities.[1] About 85% of this attack activity may be classified as pre-attack reconnaissance, better known as "scans", which search targets for exploitable vulnerabilities. Over the last few years, the number of vulnerabilities found and disclosed has exploded. The Symantec report (2003) documents 2,524 new vulnerabilities in 2002, an 81.5% increase over 2001, affecting over 2,000 distinct products. The CERT/CC (Computer Emergency Response Team / Coordination Center) received over 4,000 reports of vulnerabilities in 2002 alone and has reported more than 82,000 incidents involving various cyber attacks. While estimating the damage incurred due to vulnerabilities is an open empirical question, anecdotal evidence suggests that such losses run in the millions. For example, CSI (Computer Security Institute) and the FBI estimated that the unit cost impact per organization across all types of breaches was around $972,857 in 2000. Software vendors, including Microsoft, have announced their intention to increase the quality of their products and reduce vulnerabilities. Despite this, it is likely that vulnerabilities will continue to be discovered and disclosed.[2]

[1] The shutting down of the eBay and Yahoo! websites due to hacker attacks and the Code Red virus, which affected more than 300,000 computers, are two well-known examples where software defects were exploited.
[2] Incidence of security breaches and exploitation of known vulnerabilities has also attracted media attention. A Symantec report (2003) provides an overview of such incidences.

While some vulnerabilities were first discovered and exploited by attackers, a majority were discovered by entities with good intentions, including benign users, security firms and vendors, and most were eventually disclosed to the public. Vulnerability disclosure may be both beneficial and dangerous: once a vulnerability is disclosed, it is available to everyone, so the disclosure may be used to develop a quick patch, or it may be exploited by attackers. At present, there are no guidelines or enforceable rules for disclosing vulnerabilities. Anecdotal evidence shows that inappropriately disclosed vulnerabilities[3] have become a major supply for attackers' arsenals; on the other hand, some vendors do not deliver timely patches simply because vulnerability identifiers follow the norm that a vulnerability should not be disclosed before the patch is available, and therefore put little pressure on the vendor. When and how to disclose a vulnerability has become an ever more important and urgent policy question.

[3] In most cases, inappropriate disclosure means that these vulnerabilities were disclosed too early, mostly instantly after discovery.

The current vulnerability disclosure process usually involves an individual or a firm finding a vulnerability and contacting either the vendor directly or an institution like CERT. CERT then contacts the vendor and provides it with a certain time window to patch the vulnerability (that is, to provide a solution so that the vulnerability cannot be exploited). After that window elapses, the vulnerability (along with the patch, if available) is publicly disclosed. While no one doubts that appropriate dissemination of vulnerability information is valuable, because it enables users to protect themselves from possible exploitation and improves subsequent versions of the software, there is considerable debate about when and how vulnerabilities should be disclosed. Originally, vulnerability discoverers usually reported vulnerabilities to vendors and kept them secret. The argument was that the vendor would come up with a remediation (or a patch) and make the vulnerability public at a suitable time.[4] However, many discoverers realized that vendors were not taking timely steps to release either vulnerability information or a patch to protect customers. This led to the creation of full-disclosure mailing lists and forums, such as the well-known "Bugtraq", in the late 90s.[5] The proponents of full disclosure claim that the threat of instant disclosure pushes vendors to patch earlier. It also increases public awareness and improves the quality of software over time, and customers using the software pressure the vendor to issue high-quality patches quickly, improving welfare.

[4] Note that remediating a vulnerability and issuing a patch is costly for a vendor.
[5] Full disclosure also means that detailed information about the vulnerability is made available to the public. In this research our focus is on "when" rather than "how much" information is made available.

According to a Network Magazine article (2000), vendors are patching earlier than before due to such efforts. But many believe that disclosure of vulnerabilities is dangerous, for it leaves users defenseless against hackers who may develop ways to exploit the vulnerability. The proponents of secrecy are willing to give software vendors as much time as needed to issue the patch. In fact, at the 2002 Black Hat Conference, Richard Clarke,[6] President Bush's special advisor for cyberspace security, criticized full disclosure, saying that "It's irresponsible and sometimes extremely damaging to release information before the patch is out." For more details on this debate see Farrow (2000), Levy (2001) and Rauch (1999).[7]

[6] Refer to: http://www.blackhat.com/html/bh-usa-02/bh-usa-02-speakers.html#Richard Clarke
[7] See also the debate between Robert Graham and Bruce Schneier: http://www.robertgraham.com/diary/disclosure.html

As the citations indicate, the public policy problem is real and likely to become ever more important over time. However, there is little extant research on the economics and public policy of vulnerability disclosure. We lack both a theoretical framework and empirical estimates of the relevant parameters. For instance, does a policy of early disclosure prompt vendors to patch more quickly, and if so, is that better for users, who may get the patch sooner but are also more vulnerable in the interim? More generally, how does vendor behavior vary with the disclosure policy, and how much time should a vendor have to develop a patch before a vulnerability is publicly disclosed? One major goal of this paper is to develop a theoretical framework for an optimal vulnerability disclosure policy, one that minimizes social loss. As mentioned, we lack empirical estimates of the relevant parameters; we therefore avoid specific functional forms or parameters and instead present a model in general form, which makes it flexible and allows us to specify the functions and parameters once empirical studies are available. Based on the model, we can immediately answer several general questions. We show that early disclosure is an


effective way of prompting vendors to patch more quickly. However, a quicker patch is not necessarily beneficial in terms of social welfare. Instant disclosure is not always better than a secrecy policy, nor is a secrecy policy always better than instant disclosure. When consumer cost dominates vendor cost, the vendor opts for a quicker patch; however, an early disclosure policy is equally effective on the vendor's patching time no matter which cost dominates the other. Lack of data becomes a major obstacle when we try to make further practical use of the theoretical model. In what way may this model help the social planner in decision making even before empirical studies are available? During our discussions with CERT,[8] it became clear that even at this early stage the model may help answer a couple of crucial and practical questions. We investigate how exogenous factors such as the stage in the product life cycle and vendor-side liability condition the optimal policy. We show that the social planner should choose earlier disclosure when the vulnerability is discovered late in the software lifecycle or when the vendor is held more liable. Using the same theoretical building blocks, we then extend our model to the case where patching time is uncertain and stochastic. We show that uncertainty costs both the vendor and the social planner more; therefore the vendor chooses to patch more quickly and the social planner chooses earlier disclosure to achieve the optimal social outcome. We also extend the basic model to allow diffusion of patching. In summary, we provide a baseline model upon which one may develop sophisticated

[8] We thank Shawn Hernan and Howard Lipson for their useful and insightful discussion on vulnerability and disclosure policy.


decision tools for the social planner regarding vulnerability disclosure policy. The model answers several questions of general concern, such as whether early disclosure is effective in pressuring vendors to patch more quickly, whether instant disclosure is superior to a secrecy policy, and whether the effectiveness of early disclosure varies with which cost dominates the other. In addition, we discuss how the model may immediately help the social planner in decision making, particularly how the stage in the product life cycle, vendor-side liability, uncertainty in the vendor's patch-development effort, and the diffusion of patching condition the social planner's optimal disclosure policy. We would like to point out that this is by no means the final word on the matter; rather, it motivates further exploration of this area. The immediate follow-up is an empirical study to parameterize the baseline model. The paper is organized as follows. In section 2, we review relevant prior work on issues related to software vulnerability. We present the basic economic model in section 3 and show how a social planner should choose an optimal disclosure time 'T'; we also compare the instant disclosure and secrecy policies there. In section 4, we extend the basic model to allow for uncertainty in patching time and discuss how the social planner should adjust T to improve social welfare. In section 5, we extend the model to incorporate diffusion of patching, such that only a portion of customers apply the patch when it becomes available and the rest apply it gradually. Concluding remarks and implications of the results are presented in section 6.

2. Prior Literature


There is a rich literature on the technical aspects of software vulnerability research; we review only the prior work that contributed to the modeling in this paper. Krsul, Spafford and Tripunitara (1998) classify common vulnerabilities into four major categories and discuss the characteristics of a vulnerability, the violations enabled by its exploitation, and approaches to preventing those violations. John Howard (1998) provides a comprehensive study of Internet security incidents, including a taxonomy of computer attacks and a classification of intrusions. There has also been some work on tracking and tracing cyber attacks: Howard Lipson (2002) provides an overview of technical approaches and policy implications. Related empirical work has been devoted to trend analysis of vulnerabilities. Shimeall and Williams (2002) present a framework for trend analysis and discuss factors in implementing such a framework. Arbaugh et al. (2000) propose a life cycle model for vulnerability analysis and show how frequently a vulnerability is exploited from the time it is made public. This research has been critical in developing our model. Recently, some papers have examined the economic aspects of software vulnerabilities and patching. Varian (2000) points out that a key economic aspect of managing information security is assigning legal liability to the best-suited party. Our model shows that legal liability plays an important role in the vendor's patching decision and affects the choice of optimal disclosure policy. In our model, the best-suited party is the


vendor. Arora, Caulkins and Telang (2003) model a software firm's decision on when to enter the market and how much to invest in patching vulnerabilities later; early release means the software is buggier. Interestingly, they show that releasing early and patching later can be a socially beneficial outcome. Beattie et al. (2002) develop an optimization model for the optimal time to apply patches: there is a tradeoff between suffering instability induced by bugs in the patches from patching too soon and being hacked from patching too late. Kannan, Telang and Xu (2003) study the market for software vulnerabilities and show that a market-based mechanism generally reduces user welfare. However, there is little extant literature on the relationship between vulnerability disclosure timing and the timing of patch releases by vendors, or on the factors that condition this relationship; thus there is little guidance on the optimal timing of vulnerability disclosure. This paper addresses this gap. Our work provides an overarching view of disclosure policy and its impact on social welfare as well as on the different parties involved (software vendors, users, etc.). Our work parallels the disclosure policy adopted by CERT/CC.[9] CERT allows a secret period of 45 days after it receives information about a vulnerability; after this time, the information is disclosed to the public in the form of an advisory. Clearly, the objective of the CERT policy is to combine the benefits of both instant disclosure and secrecy. We show in this paper how an optimal T depends on factors such as

[9] CERT/CC is a powerful player in disseminating vulnerability information to the public.


vendor-side liability and the discovery time of the vulnerability, and that a one-size-fits-all approach leads to inefficiencies.

3. Model

We first formalize the vendor's cost, the loss to customers, and the total social cost. We then show how the vendor times patch development and how the social planner determines the disclosure time T to minimize social cost. We then discuss the implications of the model for disclosure policy. Finally, we briefly compare the instant disclosure policy with the secrecy policy.

There are four major participants in our model: social planner, vendor, customer and attacker. The social planner chooses a disclosure policy to minimize total social cost. The vendor responds to changes in disclosure policy by allocating resources to patching the vulnerability so as to minimize its own cost. Customers incur a loss when the vulnerability in their systems is exploited by attackers. We treat the disclosure policy as binary: either full information is disclosed or none. Hence, a disclosure policy is the choice of a time T such that during that time the vulnerability information is kept secret from the public and shared only with the vendor, to allow it to develop a patch. Once time T has elapsed, the information is disclosed to the public irrespective of the availability of a patch. An instant disclosure policy means T = 0, while a secrecy policy implies T = ∞. It is useful at this point to show a typical lifecycle of a software product, as graphically


illustrated in Figure 3.1, and how a vulnerability is found, disclosed and patched.

Figure 3.1. Software Life Cycle

At time '0' the product is released and used by customers.[10] The vulnerability is discovered by a benign user[11] at calendar time t₀. Disclosure policy T requires that this vulnerability be kept secret until time T + t₀ and disclosed after that. The vendor provides a patch for the vulnerability at calendar time τ + t₀.[12] Attackers might find and exploit it at time s + t₀. Note that we assume attackers can exploit the vulnerability instantly upon learning of it. According to the Symantec Report (2003), approximately 60% of documented vulnerabilities can be exploited almost instantly, either because exploit code is widely available for free download or because no exploit tool is needed. For those vulnerabilities that do need exploit tools, our model may be modified slightly to allow an exploit-tool-development period; since this only adds a constant to the model, one may easily show that our results still hold. Therefore, in the rest of this paper we use the instant-exploitation assumption for convenience. Note that τ, T and s are simply the time windows of patch development, disclosure by the social planner, and discovery by the attacker respectively, measured from the calendar time t₀, which is the time when the vulnerability is first known.

[10] We do not consider the diffusion of the product. We assume that all interested users start using the product at time '0'.
[11] A benign user is not interested in exploiting the vulnerability. Also note that if the vulnerability is first discovered by a hacker, the hacker will immediately attack the systems and any disclosure policy is a moot point.
[12] Note that τ can be higher or lower than T.
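The timeline above can be made concrete with a small sketch. This is purely our illustration (the function name and the numbers are hypothetical, not from the paper): given the windows τ, T and s measured from t₀, and the instant-exploitation assumption, it computes how long customers are exposed.

```python
def exposure_window(tau, T, s):
    """Duration for which customers are exposed, per the timeline above.

    tau: patch window, T: disclosure window, s: attacker's own discovery
    time, all measured from t0. The attack is assumed to start as soon as
    the attacker knows the vulnerability (instant-exploitation assumption).
    """
    attack_start = min(s, T)   # attacker learns at s, or at T via disclosure
    return max(0.0, tau - attack_start)

# patch (tau = 30 days) before disclosure (T = 45): exposure occurs only
# if the attacker independently finds the vulnerability first
assert exposure_window(30, 45, 20) == 10
assert exposure_window(30, 45, 60) == 0
# patch after disclosure: the disclosure itself starts the exposure clock
assert exposure_window(60, 45, 90) == 15
```

The two branches of this helper mirror the two loss cases developed in section 3.1.1 below: exposure of length τ − s when the attacker finds the vulnerability on his own, and τ − T when disclosure precedes the patch.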

3.1 Vendor's Cost Function

Given a disclosure policy T, the software vendor decides how to allocate its resources to making the patch available. Patching is costly, however, and the vendor is interested in minimizing its cost. The vendor faces two kinds of costs. First, there is the patch development cost, which may also be interpreted as an opportunity cost: the more resources the vendor allocates to patching, the quicker the patch and the higher the cost. In this section we assume that the vendor decides only how quickly it develops the patch; later we allow the vendor to determine the quality of the patch as well. Recall that τ is the time window of patch development; in this model it serves as a proxy for the vendor's resource allocation. Let C_P(τ) denote the vendor's patch development cost. A shorter time τ means a higher cost, so it is intuitive to assume that ∂C_P(τ)/∂τ < 0. The marginal utility of freed resources should be decreasing, as commonly assumed, so the marginal cost should be increasing in τ. Therefore we also have ∂²C_P(τ)/∂τ² > 0, i.e. C_P is convex in τ.

The second cost is the proportion of customer loss that the vendor internalizes. Although the full loss is incurred by customers, a proportion of it is reflected in the vendor's cost. This represents reputation loss, loss of future sales, maintenance cost, or simply some form of legal liability. To our knowledge, vendors currently bear no legal liability for


information security breaches suffered by their customers due to vulnerabilities in their products, although legal liability has been recommended by researchers (Varian 2000) and legal scholars. Later in this paper we show that our model supports assigning legal liability to vendors. We represent this proportion by λ and call it the liability factor. For now it represents mostly market liability, and it may incorporate legal liability as well in the future. We represent the expected customer loss due to the vulnerability as θ(τ, T; X). Note that customer loss is a function of T and τ and also of X, which denotes customer-specific or vulnerability-specific factors.[13] Hence, the vendor's cost is as follows:

V = C_P(τ) + λ θ(τ, T; X)    (3.1)

where λ is the liability factor. We next study the structure of the customer loss function.

3.1.1 Customer Loss Function

At this point we need to be more specific about the form of θ(τ, T; X). We first note the conditions under which the attacker may exploit the vulnerability and cause loss to customers:

C1: Before the patch is available, the attacker finds the vulnerability on his own;
C2: The vulnerability is disclosed by the social planner before a patch is available.

[13] X could be industry-specific; for example, vulnerabilities in financial software usually cause more damage than those in personal education software. X may also denote the ease of exploitation: some vulnerabilities, such as the .NET vulnerability, require no expertise to exploit, while others require special expertise. X could also represent the level of severity, the size of the customer group, and so on.


We denote by D(t; X) the customer loss if customers are attacked for a duration t. We assume that customer loss is only a function of this duration and does not depend on the point in the software lifecycle at which the exploitation occurs. According to Arbaugh et al. (2000), "Intrusions increase once the community discovers a vulnerability, with the rate accelerating as news of the vulnerability spreads to a wider audience." We therefore also assume that D is convex in t, meaning that the longer the exposure time, the higher the incremental damage from every additional time unit of exposure.

Now we may characterize the specific structure of θ(τ, T; X). Clearly, θ will critically depend on when the patch is made available (τ) and when the vulnerability is disclosed (T). Consider the following two cases:

C3: The patch is released before T;
C4: The patch is released after T.

Under case C3, customers suffer a loss only if C1 also holds. Referring to Figure 3.1, s + t₀ is the time when the attacker finds the vulnerability and τ + t₀ is the time when the patch is released, so customers are attacked between calendar times s + t₀ and τ + t₀, and the customer loss is D(τ − s). On the other hand, if the patch is released after T (case C4), there are two considerations. First, the attacker may find the vulnerability on his own (C1) and have τ − s[14] of time to exploit it. Second, under case C2, the attacker has τ − T of time to

[14] Note that here we omit customer loss after the patch is available. In reality, a patched vulnerability still causes damage to customers who do not apply the patch; we study this issue in a later section. Furthermore, since the introduction of self-patching or self-updating software, software may automatically patch itself.


exploit it. Note that the attacker's discovery of the vulnerability is a stochastic process. We denote the distribution of s by F(s); the probability that the attacker does not find the vulnerability within period T is then simply 1 − F(T : t₀), where t₀ is the calendar time when the vulnerability was first discovered. Note that F(s : t₀) is conditional on the vulnerability not having been discovered by the attacker before t₀.[15] We assume that F(s : t₀) increases in t₀, because with time attackers accumulate experience and knowledge about the software and may find the vulnerability earlier. Finally, we characterize customer loss as follows:

θ(τ, T; X) = ∫₀^τ D(τ − s) dF(s : t₀),   when τ ≤ T
θ(τ, T; X) = ∫₀^T D(τ − s) dF(s : t₀) + (1 − F(T : t₀)) D(τ − T),   when τ > T    (3.2)

As explained, the first part of the function is the customer loss when the patch is released no later than T and the attacker finds the vulnerability at some time s and attacks for the duration τ − s. The second part applies when the patch is released after T: the attacker can either find the vulnerability before T and attack for τ − s, or learn of it at time T when it is disclosed by the social planner and attack for the duration τ − T. Since D is convex in t, θ is convex in τ. Moreover, since both C_P and θ are convex in τ, the vendor's cost V (equation 3.1) is also convex in τ.

3.2 Social Cost

[15] If a hacker is the first to discover the vulnerability, then any disclosure policy T is moot.

Social cost is simply

S = C_P(τ) + θ(τ, T)    (3.3)

where, as explained before, C_P is the vendor's cost of patching and θ is the loss to the customers. Note that the vendor's cost function V converges to S when λ = 1. Recall that λ is the liability factor: when λ = 1, the vendor internalizes all of the customer loss, so the interests of the vendor and the social planner are fully aligned. In that situation, the vendor would act just like the social planner, and hence a secrecy policy is always the optimal disclosure. Considering vulnerability disclosure alone, it thus seems that the vendor is the best party to which legal liability should be assigned. It is also immediate that S is convex in τ. With this specification, we are in a position to analyze the efficacy of various disclosure policies and to find an optimal T* that minimizes social cost. Since the instant disclosure and secrecy policies are special cases of a general disclosure policy T*, we start with the general case.

Social Planner's Decision Game

Note that for λ ∈ [0, 1), the vendor's and the social planner's incentives are not aligned. The social planner wants to choose an optimal T*, but the choice of T affects the vendor's decision τ. Clearly, the sequence of decision making is critical: who moves first, the planner or the vendor, critically affects the optimal solution. The game can be played in three different ways:

1) In a simultaneous game, the social planner and the vendor choose their optimal strategies simultaneously;

2) The vendor decides first and the social planner follows;

3) The social planner decides first and the vendor follows.

Note that the last two games are Stackelberg games. It is easy to see that the first two lead to rather trivial outcomes (see Appendix 1). Moreover, in practice it is always CERT or some other organization that announces the disclosure policy T first, and the vendor reacts optimally. Therefore, we focus on the third structure, in which the policy maker announces a time T and the vendor reacts to it optimally. Recall from equation (3.4) that the first-order condition (FOC) for the social planner's optimal disclosure policy T* is

(∂C_P/∂τ)(∂τ/∂T) + (∂θ/∂τ)(∂τ/∂T) + ∂θ/∂T = 0    (3.5)

A solution to this equation provides the optimal T* if there exists an interior minimum of the social planner's cost. In general, however, we do not know whether such an interior point exists: the social planner's cost could be minimized at a corner, i.e. T* = 0 or T* = ∞, corresponding to the instant disclosure and secrecy policies respectively. In our experiments, we found that corner solutions, although possible, rarely occur. Moreover, the objective of this paper is to study the social planner's optimal policy in general; the special cases of instant disclosure and secrecy are not the key interest. Hence, we assume local convexity of the social planner's cost function, which guarantees an interior point.
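The sequential structure can be sketched numerically. All functional forms and constants below are our own illustrative assumptions, not forms from the paper: a hyperbolic patch cost C_P(τ) = c/τ, quadratic damage D(t) = t², and an exponential attacker-discovery distribution. For each candidate T we solve the vendor's inner minimization, then evaluate the social cost at the vendor's best response; with these particular numbers the planner's optimum turns out to be interior, illustrating why the corner policies need not be optimal.

```python
import math

C, MU, LAM = 50.0, 0.1, 0.3   # illustrative constants, not estimates

def F(s):
    return 1.0 - math.exp(-MU * s)

def integral(g, a, b, n=200):
    # simple trapezoid rule; accurate enough for a sketch
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

def theta(tau, T):
    # expected customer loss, equation (3.2), with D(t) = t^2
    f = lambda s: (tau - s) ** 2 * MU * math.exp(-MU * s)
    if tau <= T:
        return integral(f, 0.0, tau)
    return integral(f, 0.0, T) + (1.0 - F(T)) * (tau - T) ** 2

def vendor_tau(T):
    # stage 2: the vendor minimizes V = C_P + lambda * theta, equation (3.1)
    grid = [0.1 * k for k in range(1, 201)]            # tau in (0, 20]
    return min(grid, key=lambda t: C / t + LAM * theta(t, T))

def social_cost(T):
    # S = C_P + theta, equation (3.3), evaluated at the vendor's reaction
    tau = vendor_tau(T)
    return C / tau + theta(tau, T)

# stage 1: the planner picks T anticipating the vendor's reaction
T_star = min([0.5 * k for k in range(0, 31)], key=social_cost)  # T in [0, 15]

# with these assumed forms the optimum is interior: it beats both
# instant disclosure (T = 0) and an effectively secret policy (T = 15)
assert social_cost(T_star) < social_cost(0.0)
assert social_cost(T_star) < social_cost(15.0)
assert 0.0 < T_star < 15.0
```

The grid search stands in for the first-order conditions: the inner `min` plays the role of equation (3.6), the outer one of equation (3.5).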


Also note that, as we expect, T* depends on the vendor's reaction to T, in other words on ∂τ/∂T. Hence, in the following section, we first outline the vendor's reaction function to the disclosure policy T.

3.3 Implications of the Model for the Vendor's Patching

Now that we have defined the characteristics of the model, we show in this and the next subsection its implications. Given the convexity of the patch development cost C_P(τ) and of θ(τ; T), there exists a solution to the vendor's cost-minimization problem. The first-order condition from equation (3.1) is as follows:

∂C_P/∂τ + λ [ ∫₀^T D′(τ − s) dF(s : t₀) + (1 − F(T : t₀)) D′(τ − T) ] = 0    (3.6)

The key point is that there are an upper bound and a lower bound on the optimal τ: even when T = 0, i.e. under instant disclosure, the vendor will not patch before some τ_l; and when T = ∞, i.e. in the absence of any disclosure (a complete secrecy policy), the vendor will still patch at some τ_s, because of the internalized customer loss λ. Therefore, any influence on the vendor should be considered within the bounds [τ_l, τ_s]. We are now positioned to draw some important implications from the model.

How Does the Vendor React to a Change in Disclosure Time T?

Although supporters of instant disclosure widely postulate that early disclosure can push vendors toward a quicker patch, this postulate has not previously been formulated theoretically. It is critical to show that it is true; otherwise, the secrecy policy is always optimal. The


following proposition shows that decreasing T is an effective way to induce the vendor to release the patch earlier.

Proposition 3.1: For any T < τ_s, dτ*/dT > 0.

Here τ* is the vendor's optimal patching time; see Appendix 2 for the proof. Note that, as explained before, when T > τ_s, T puts no pressure on the vendor and therefore has no influence on patching time: in other words, dτ*/dT = 0. It is also true that for any T < τ_s, the vendor never patches before the disclosure.

Proposition 3.2: For any T < τ_s, the vendor's optimal patching time τ* is bounded such that T < τ* < τ_s.

Figure 3.2 illustrates the vendor's reaction function to the disclosure policy T: the vendor's patching time increases in T and is always larger than T, until T reaches the threshold point τ_s.
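Propositions 3.1 and 3.2 can be checked numerically for concrete primitives. The functional forms and constants below are our illustrative assumptions (hyperbolic patch cost C/τ, quadratic damage, exponential attacker discovery), not the paper's:

```python
import math

C, MU, LAM = 50.0, 0.1, 0.3   # illustrative constants, not estimates

def F(s):
    return 1.0 - math.exp(-MU * s)

def integral(g, a, b, n=200):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

def theta(tau, T):
    # expected customer loss, equation (3.2), with D(t) = t^2
    f = lambda s: (tau - s) ** 2 * MU * math.exp(-MU * s)
    if tau <= T:
        return integral(f, 0.0, tau)
    return integral(f, 0.0, T) + (1.0 - F(T)) * (tau - T) ** 2

def vendor_tau(T):
    # vendor's best response: minimize V = C/tau + lambda * theta(tau, T)
    grid = [0.1 * k for k in range(1, 201)]            # tau in (0, 20]
    return min(grid, key=lambda t: C / t + LAM * theta(t, T))

tau_s = vendor_tau(float("inf"))      # patching time under complete secrecy
Ts = [1.0, 2.0, 3.0, 4.0, 5.0]
taus = [vendor_tau(T) for T in Ts]

# Proposition 3.1: below tau_s, the optimal patching time increases in T
assert all(a <= b for a, b in zip(taus, taus[1:]))
# Proposition 3.2: for T < tau_s the reaction satisfies T < tau* < tau_s
assert all(T < t < tau_s for T, t in zip(Ts, taus))
```

The computed reaction function traces exactly the shape described for Figure 3.2: τ* rises with T while staying strictly between T and τ_s.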


Figure 3.2. Vendor's Reaction Function to the Disclosure Policy

How Does the Vendor React to a Change in Liability?

Intuitively, we also expect that if the vendor can be held more liable for damage to customers, the vendor will patch earlier. This is immediately clear from the fact that at λ = 1, the vendor's and the social planner's objective functions are completely aligned. Formally, Proposition 3.3 provides the result.

Proposition 3.3: For a given T, dτ*/dλ ≤ 0. (See the proof in Appendix 2.)

The liability factor may be interpreted in three ways. 1) Software vendors in different industry groups may bear different levels of market liability. Intuitively, monopoly vendors, such as Microsoft, may feel less obligated toward customer losses than vendors facing strong competition, since they are less worried about losing future sales; hence, increasing competition among vendors can increase their willingness to patch quickly. 2) Currently, no legal liability is assigned to vendors. If legal liability is imposed on vendors in the future, it is clear from the model that even partial legal liability would prompt vendors to patch quickly and therefore reduce social cost. 3) The ratio of customer loss to patch development cost, r = θ / C_P(τ), is an interesting factor: when customer loss dominates, r is large. Note that λ can be seen as a proxy for this ratio, so a larger λ can also be interpreted as customer loss dominating patch development cost. In that


case, one may expect that vendor would patch earlier. How Does Stages in Product Lifecycle Affect Vendor’s Timing?

It is essential for the social planner to understand how the stage of the software lifecycle conditions the vendor's timing. For products that have been on the market for long, potential hackers have probably accumulated experience in exploiting their vulnerabilities. Therefore, if a vulnerability is discovered late in the product lifecycle (large t₀), one would expect that a hacker can find it quickly (lower expected s) and cause more damage. We show that, all else held the same, vendors then patch more quickly.

Proposition 3.4: All else held the same, we have dτ*/dt₀ < 0.

See Appendix 2 for the proof.

3.4 Social Planner's Timing of Vulnerability Disclosure

In the last section, we outlined the vendor's reaction to the disclosure policy T and to other factors, namely the liability factor λ and the discovery time t₀. Both λ and t₀ are important parameters that the social planner can affect, and by affecting them, social cost may be lowered. One approach is to increase vendor-side liability, i.e. increase λ; another is to encourage benign identifiers to discover vulnerabilities earlier, i.e. decrease t₀. While we can easily show that, in our model, both approaches are effective, it is not clear how they affect the choice of the optimal disclosure policy. We start the analysis with the first approach.


Proposition 3.5:

dT/dλ < 0

(See Appendix 2 for the proof.) Proposition 3.5 indicates that when the liability factor increases, the social planner should decrease T. An increase in liability thus not only reduces social cost when T is held fixed, but also gives the social planner room to reduce T and achieve an even better result. Figure 3.3 shows that when λ increases, both T* and τ* decrease and the gap between the two diminishes.

Figure 3.3 Optimal Disclosure Policy T* and Optimal Patching Time τ* as Functions of the Liability Factor λ

Proposition 3.6:

dT/dt₀ < 0

(See Appendix 2 for the proof.) Proposition 3.6 indicates that if discoverers find the vulnerability earlier than before, T should


increase. The intuition is as follows: at an early stage of the software lifecycle, attackers have acquired little experience and are therefore less likely to discover the vulnerability. Hence it is not necessary to rush the patch and incur a large cost on the vendor's side.

3.5 Comparison of Instant Disclosure and Secrecy Policy

We now briefly compare instant disclosure and the secrecy policy. First, from Proposition 3.1 it is clear that the vendor patches earlier under instant disclosure than under secrecy, i.e. τI ≤ τS, with strict inequality for any λ < 1. Hence it is fair to say that instant disclosure is effective in pushing vendors to patch earlier. However, an earlier patch does not necessarily mean lower social cost. We conducted simulations assuming the following functional forms:

C_P(τ) = 1 + e^(−k₁τ),  D(τ) = k₂τ²,  F(s) = k₃s


(The functional forms above satisfy the assumptions of the model.) In most cases, neither instant disclosure nor the secrecy policy is optimal. Moreover, from the social planner's point of view, neither policy dominates the other. To illustrate, we ran the following experiment. Note that as the liability factor increases, social cost decreases under both policies. At low liability factors the instant policy performs badly, but at a liability factor of 0.1 the instant policy beats the secrecy policy.

Table 3.1 Social Cost under the Instant and Secrecy Policies

Liability Factor   0.02   0.04   0.06   0.08   0.1    0.2    0.3    0.4
Instant Policy     9.72   4.87   3.77   2.94   2.62   2.17   2.05   1.99
Secrecy Policy     7.06   4.33   3.41   2.93   2.73   1.99   1.93   1.88
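The experiment above can be reproduced in outline. The following Python sketch is our illustration, not the authors' code: the parameter values k1, k2 and k3 and the search grid are hypothetical, and the cost structure follows the functional forms listed above (C_P(τ) = 1 + e^(−k₁τ), D(τ) = k₂τ², F(s) = k₃s). It computes the vendor's grid-optimal patching time and the resulting social cost under instant disclosure (T = 0) and secrecy (T = ∞).

```python
import numpy as np

# Hypothetical parameter values; the paper does not report the constants it used.
k1, k2, k3 = 1.0, 1.0, 0.5     # C_P(t) = 1 + e^{-k1 t}, D(x) = k2 x^2, F(s) = k3 s
S_MAX = 1.0 / k3               # attacker discovery time s is supported on [0, 1/k3]

def C_P(tau):
    """Patch-development cost, decreasing in the patching time."""
    return 1.0 + np.exp(-k1 * tau)

def D(x):
    """Cumulative customer damage after x units of exposure."""
    return k2 * np.maximum(x, 0.0) ** 2

def theta(tau, T):
    """Expected customer loss for patching time tau and disclosure time T."""
    a = min(tau, T, S_MAX)
    s = np.linspace(0.0, a, 2001)
    ds = a / 2000.0
    loss = float(np.sum(D(tau - s)) * k3 * ds)   # attacker finds it at s < min(tau, T)
    if T < tau:                                  # disclosed before the patch ships
        loss += (1.0 - min(k3 * T, 1.0)) * float(D(tau - T))
    return loss

def outcome(lam, T):
    """Vendor minimizes C_P + lam*theta over a grid; return (tau*, social cost)."""
    grid = np.linspace(0.0, 10.0, 2001)
    costs = [C_P(t) + lam * theta(t, T) for t in grid]
    tau_star = float(grid[int(np.argmin(costs))])
    return tau_star, float(C_P(tau_star)) + theta(tau_star, T)

for lam in (0.02, 0.1, 0.4):
    t_i, c_i = outcome(lam, T=0.0)       # instant disclosure
    t_s, c_s = outcome(lam, T=np.inf)    # secrecy
    print(f"lambda={lam}: instant tau*={t_i:.2f}, cost={c_i:.2f}; "
          f"secrecy tau*={t_s:.2f}, cost={c_s:.2f}")
```

With these assumed parameters the same qualitative pattern appears: the vendor patches earlier under instant disclosure, and the social-cost comparison between the two policies can then be read off for each liability factor.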

4. Stochastic Patching Time

The basic model implicitly assumes a deterministic patching time. In reality, the vendor determines resource allocation, and patching time, as the outcome of that allocation, may well be stochastic. In this section we discuss the impact of stochastic patching time on the vendor's decision and the social planner's optimal disclosure policy.

We begin by defining the vendor's and the social planner's cost functions. In this model, the vendor chooses a mean patching time, denoted by μ, so that E(τ) = μ, where Φ(τ : μ) is the probability distribution of the patching time. We define the vendor's cost function as follows:

V = C_P(μ) + λ·∫₀^e θ(τ, T) dΦ(τ : μ)    (4.1)

Note that the vendor's patching cost is deterministic, and e + t₀ is the calendar time of the end of the software lifecycle. Social cost differs from vendor cost only in the liability factor:

S = C_P(μ) + ∫₀^e θ(τ, T) dΦ(τ : μ)    (4.2)

As in the general model, we assume V is convex in μ and S is convex in T. To investigate whether Propositions 3.1 and 3.3 hold in this stochastic case, we also use the following assumption:

F.S.D. Assumption: If μ₁ < μ₂, then τ₁ ≺_FSD τ₂, where E(τᵢ) = μᵢ for i = 1, 2.    (4.3)

Here τ₁ ≺_FSD τ₂ denotes that τ₂ first-order stochastically dominates τ₁. τ₁ ≺_FSD τ₂ is a sufficient condition for μ₁ < μ₂, while the opposite is not true. This is a convenient and widely used assumption because many theorems about first-order stochastic dominance are available. We show that when τ is replaced with μ, Propositions 3.1 and 3.3 are preserved under stochasticity: μ* decreases when T is reduced by the social planner or when λ is larger, and the social planner reduces T in reaction to increases in λ and t₀. Proofs are available in Appendix 3. Hence the basic model extends to the stochastic case. Now consider the impact of variation in patching time on the vendor's and the social planner's decisions. Our objective is to find the sign of

dμ*/dσ and dT*/dσ.

Without further assumptions about the functional form of the vendor's cost function, the sign of dμ*/dσ is undetermined. However, if ∂V/∂μ is convex in μ (note that we already assumed V is convex in μ), we obtain an extra proposition: the vendor chooses to patch more quickly if it perceives larger variation in patching time. The intuition is that larger variation means larger uncertainty and increases the vendor's cost; the vendor therefore prefers a quicker patch to reduce its expected loss.


Here μ* is the vendor's optimal mean patching time and σ is the standard deviation of the patching time. As is common in research on uncertainty, second-order stochastic dominance is a good measure of variation. Hence we use the following assumption:

S.O.S.D. Assumption: If σ < σ̃, all else held the same, we have τ ≻_SOSD τ̃.

Proposition 4.1: If ∂V/∂μ is convex in μ, then dμ*/dσ < 0.    (4.4)

Larger variation in patching time also imposes more loss on the social planner. We show that the social planner should also decrease T in response to an increase in variation. Intuitively, larger variation in patching time has a similar effect to later discovery of the vulnerability: both increase the vendor's and the social planner's costs, and in both cases the vendor chooses to patch earlier while the social planner should tighten the disclosure time T.

Proposition 4.2: If ∂V/∂μ is convex in μ, then dT/dσ < 0.
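The two dominance assumptions used in this section can be checked numerically for candidate distribution families. The sketch below is our illustration (the uniform families and all constants are assumptions, not from the paper): it verifies first-order dominance for a shifted uniform pair, and second-order dominance of a small-σ uniform over its mean-preserving spread.

```python
import numpy as np

# Illustrative grid and uniform families; none of these values come from the paper.
xs = np.linspace(0.0, 12.0, 12001)
dx = xs[1] - xs[0]

def unif_cdf(x, lo, hi):
    """CDF of a uniform distribution on [lo, hi]."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fsd_dominates(cdf_big, cdf_small):
    """First-order dominance: the dominating CDF lies everywhere at or below."""
    return bool(np.all(cdf_big <= cdf_small))

def sosd_dominates(cdf_big, cdf_small):
    """Second-order dominance: the running integral of the CDF difference
    (dominated minus dominating) never goes negative."""
    return bool(np.all(np.cumsum(cdf_small - cdf_big) * dx >= -1e-6))

# F.S.D.: tau2 ~ U[4, 10] has the larger mean and dominates tau1 ~ U[1, 7].
tau1, tau2 = unif_cdf(xs, 1.0, 7.0), unif_cdf(xs, 4.0, 10.0)
print(fsd_dominates(tau2, tau1))    # tau1 is dominated, i.e. tau1 ≺_FSD tau2

# S.O.S.D.: U[4, 6] (small sigma) dominates its mean-preserving spread U[2, 8].
tau_lo, tau_hi = unif_cdf(xs, 4.0, 6.0), unif_cdf(xs, 2.0, 8.0)
print(sosd_dominates(tau_lo, tau_hi))
```

Both checks return True for these families, matching the direction in which the assumptions are stated: a smaller mean corresponds to being FSD-dominated, and a smaller σ (here, a narrower uniform with the same mean) corresponds to SOSD-dominating.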

5. Diffusion of Patching

Until now we have assumed that all customers patch immediately once the patch is available. However, ample evidence shows that many customers delay patching. InternetNews.com reported, in August 2000, that six months after the DOS attacks that paralyzed several high-profile Internet sites, more than 100,000 machines were detected that were still unpatched and vulnerable. The reasons for delay may be two-fold. First, it takes time to disseminate the patching information to all users. Second, some users are aware of the patch but wait to be assured that the patch is more likely to prevent damage than to cause it. An example of a bad-quality patch is the Microsoft patch for CVE-2001-0016 (Beattie et al., 2002): the initial patch disabled many updates of Service Pack 2 of Windows NT, making the patched system even more vulnerable to attacks. We study the impact of diffusion of patching and show that in many cases the basic model is sufficient even when diffusion of patching is considered.

We start by considering how diffusion of patching may concern the vendor. The following questions are essential:

1) Will the vendor be liable for post-patching cost? Existing empirical research cannot confirm that post-patching cost is reflected in the vendor's cost function. One may argue that it is the customers' responsibility to patch in time, and thus the vendor is not liable. If this argument is correct, the basic model clearly works even when diffusion of patching is considered.

2) How large is the post-patching cost compared to the prior-to-patching cost and the patch-development cost? In some cases, such as the DOS attack example, the post-patching cost is fairly large. However, it is conceivable that home users of popular software are the most probable victims. If, for non-home users, the post-patching cost is very small, the basic model is sufficient.

3) What can the vendor do about it? Certainly the vendor may adjust patching time in response, but we show later that this has limited impact on the diffusion of patching. One may argue that the vendor can improve patch quality to encourage fast diffusion of patching. The reasoning is two-fold: first, a high-quality patch contains fewer bugs and is therefore safer; second, a high-quality patch improves user-friendliness. Improved quality may mean ease of installation, smaller download size, or better customer support. Consider the most recent service pack of Windows 2000 Server, which is as large as 27.4 MB and takes a customer an estimated 70 minutes to download over a phone connection. The large size may well be related to the fact that many Windows home users do not apply patches.

A difficulty for this research is that the above questions cannot be fully answered by existing empirical research, though anecdotal evidence sheds some light on them. First, the current legal environment does not impose legal liability on vendors. It is conceivable that vendors are less concerned about post-patching cost and therefore bear little market liability, if any; the direct result is that vendors opt for a quicker patch instead of a high-performance patch. The Microsoft SP4 is such an example. Second, there are signs that many commercial software users are less likely to be exploited after a patch is available. If the post-patching cost is very small, or has little or no impact on the vendor's decision, the basic model seems sufficient. The following discussion is based on the assumption that the post-patching cost, and its impact on the vendor's cost function, is non-trivial. We assume that the vendor can affect the rate of patching by producing a patch of good quality.


A patch of high quality increases customers' incentive to patch quickly and therefore reduces loss to customers. The vendor now has two decision variables: patching time τ and patch quality q. The vendor's cost function gains a third component: the loss to customers after the patch is available.

V(τ, q) = C_P(τ, q) + λ·θ(τ, T) + λ̃·θ̃(τ, q, T)    (5.1)

Here C_P(τ, q) is the patching cost to the vendor; we allow it to increase in the patch quality q. θ̃(τ, q, T) is the corresponding post-patching loss to customers, and λ̃ is the proportion of that loss borne by the vendor. We use λ̃ rather than λ because vendors bear less liability for losses to customers after the patch is available.

Now we formulate the structure of θ̃. Let t be the time window during which customers are exposed to attacks, and let p(t, q) be the proportion of customers who have applied the patch at or before t. We allow p_t(t, q) > 0, so that more people apply the patch as it becomes known to more people, and p_q(t, q) > 0, so that higher patch quality leads to quicker diffusion of patching. Recall the timeline in Figure 3.1: if the vulnerability is discovered by an attacker at calendar time s and a customer applies the patch at calendar time x, then t = x − s; similarly, if the vulnerability is first disclosed by the social planner prior to τ, then t = x − T. At this stage we may define θ̃(τ, q, T):

θ̃(τ, q, T) =
  ∫₀^τ [∫_τ^∞ (1 − p(x − s, q)) dD(x − s)] dF(s : t₀) + (1 − F(τ : t₀))·∫_τ^∞ (1 − p(x − τ, q)) dD(x − τ),  when τ < T
  ∫₀^T [∫_τ^∞ (1 − p(x − s, q)) dD(x − s)] dF(s : t₀) + (1 − F(T : t₀))·∫_τ^∞ (1 − p(x − T, q)) dD(x − T),  when τ ≥ T
    (5.2)


If the attacker finds the vulnerability at s with s < τ, then 1 − p(x − s, q) is the proportion of customers who have not patched by time x. If the social planner disclosed the vulnerability first, 1 − p(x − T, q) is the proportion of customers who have not patched by time x. Note the similarity between θ̃(τ, q, T) and θ(τ, T); the difference is that before the patch is available, every customer is subject to risk exposure.
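To make the structure of θ̃ concrete, the following sketch computes the term (1 − F(T : t₀))·∫_τ^∞ (1 − p(x − T, q)) dD(x − T) for the case τ ≥ T in which the social planner disclosed first. The exponential adoption curve p(t, q) = 1 − e^(−qt) and every parameter value are our assumptions, not the paper's.

```python
import numpy as np

# All values below are illustrative assumptions, not taken from the paper.
k2, k3 = 1.0, 0.5          # D(x) = k2 x^2, F(s) = k3 s

def p(t, q):
    """Assumed adoption curve: share of customers patched t units after exposure."""
    return 1.0 - np.exp(-q * np.maximum(t, 0.0))

def dD(x):
    """Damage rate D'(x) = 2 k2 x."""
    return 2.0 * k2 * np.maximum(x, 0.0)

def post_patch_loss(tau, T, q, horizon=60.0, n=20001):
    """(1 - F(T)) * integral_tau^infinity (1 - p(x - T, q)) dD(x - T),
    the planner-disclosed-first term of theta-tilde, truncated at `horizon`."""
    x = np.linspace(tau, horizon, n)
    y = (1.0 - p(x - T, q)) * dD(x - T)
    dx = (horizon - tau) / (n - 1)
    integral = float(np.sum((y[:-1] + y[1:]) * 0.5) * dx)   # trapezoidal rule
    return (1.0 - min(k3 * T, 1.0)) * integral

print(post_patch_loss(2.0, 1.0, q=0.8))   # slower adoption -> larger residual loss
print(post_patch_loss(2.0, 1.0, q=1.6))
```

Raising q shifts adoption earlier and lowers the residual post-patching loss, consistent with the assumption p_q(t, q) > 0.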

We assume that V is convex in (τ, q), so that there exists an interior point (τ*, q*) which minimizes the vendor's cost.17 In general, convexity is hard to show, but many common functions satisfy the convexity requirement. With the model established, we show what implications it provides. First, consider q as constant; evidence suggests this assumption applies in many cases. We find that the vendor takes longer to develop the patch if it is held more liable for the post-patching cost. This may be counter-intuitive at first glance, and it suggests that assigning legal liability for post-patching cost may not be a good idea.

Proposition 5.1: When q is a constant,

dτ*/dλ̃ > 0    (5-3)

(The proof is simple; note that ∂²V/∂τ∂λ̃ < 0.)

We then allow the vendor to adjust patch quality. We first investigate whether (τ*, q*) are strategic complements. Here strategic complementarity means that when the vendor improves patch quality, one may expect a slower patch, which is equivalent to saying that

∂²V/∂τ∂q < 0

at (τ*, q*).

17 To be sufficient, an interior minimum also requires the determinant of the corresponding Hessian matrix to be positive.

∂²V/∂τ∂q = ∂²C_P/∂τ∂q + λ̃·∂²θ̃/∂τ∂q    (5-4)

It is intuitive to assume ∂²C_P/∂τ∂q < 0. However, the second component is positive. The intuition is as follows: a decrease in quality makes the patch harder to use, so customers incur more post-patching cost; meanwhile, a decrease in patch-development time lengthens the post-patching window, so customers again incur more loss. In general, we do not know the sign of ∂²V/∂τ∂q without assuming an explicit functional form. Our analysis shows that when ∂²V/∂τ∂q > 0, the signs of some important comparative statics, such as dτ*/dλ, dτ*/dT and dτ*/dt₀, are undetermined. For tractability, we therefore assume that (τ*, q*) are strategic complements. Note that if ∂²C_P/∂τ∂q < 0 and this term dominates λ̃·∂²θ̃/∂τ∂q, then (τ*, q*) are strategic complements regardless of the sign of ∂²θ̃/∂τ∂q.

As in the basic model, we prove in Appendix 4 that the following properties hold for the vendor's optimal decision when diffusion of patching is considered:

dτ*/dλ < 0, dτ*/dT > 0 and dτ*/dt₀ < 0

We also show how the vendor reacts to changes in the above factors by adjusting patch quality.


We have

dq*/dλ < 0, dq*/dT > 0 and dq*/dt₀ < 0

Additionally, we find that when the vendor is held liable for the post-patching cost, it chooses to patch later in exchange for higher quality.

Proposition 5.2: dτ*/dλ̃ > 0 and dq*/dλ̃ > 0.

6. Conclusions

In this paper we presented an economic model to study disclosure policy. We first studied the vendor's reaction to the disclosure policy T. We showed that the social planner can push the vendor to patch earlier by reducing T, and that increasing the vendor's liability is also an effective way of prompting an earlier patch. We found that when a vulnerability is discovered late, attackers have more experience, may find the vulnerability more quickly, and cause more loss to the vendor, so the vendor needs a quicker patch. We then studied how the optimal disclosure policy should adjust when related factors change: when the liability factor increases, the optimal T should be set lower, and when the vulnerability is discovered later, the optimal T should also be set lower. We showed that neither instant disclosure nor the secrecy policy can consistently beat the other; rather, the optimal disclosure time should vary with the type of software. We showed that uncertainty in patching time imposes more cost on the vendor, who reduces its mean patching time in response; the social planner should also tighten the disclosure time T in this case. Finally, we investigated the impact of diffusion of patching on the vendor's and the social planner's decisions and found that the properties of the basic model are well preserved.


References

William A. Arbaugh, William L. Fithen, and John McHugh, 2000, "Windows of Vulnerability: A Case Study Analysis", IEEE Computer.

Ashish Arora, Jonathan P. Caulkins, Rahul Telang (February 2003), “Provision of Software Quality in the Presence of Patching Technology,” Carnegie Mellon University, working paper

R. Banker, G.Davis, and S. Slaughter (1998), “Software Development Practices, Software Engineering Complexities, and Software Maintenance”, Management Science, 44:4, 433-450

Steve Beattie, Seth Arnold, Crispin Cowan, Perry Wagle, and Chris Wright (2002), "Timing the Application of Security Patches for Optimal Uptime", Proceedings of LISA '02: Sixteenth Systems Administration Conference

Hilary Browne, William A. Arbaugh, John McHugh, and William L. Fithen, 2001, "A Trend Analysis of Exploitations", IEEE Symposium on Security and Privacy, Oakland, California, USA. http://www.cs.umd.edu/~waa/pubs/CS-TR-4200.pdf


Wenliang Du and A.P. Mathur (1998), "Categorization of Software Errors that led to Security Breaches", 21st National Information Systems Security Conference, Crystal City, VA

John Howard (1998), “An Analysis of Security Incidents On the Internet,” thesis, http://www.cert.org/research/JHThesis/Word6/

M.S. Krishnan, C.H. Kriebel, S. Kekre, and T. Mukhopadhyay (2000), "An Empirical Analysis of Productivity and Quality in Software Products", Management Science, 46(6): 745-59

Krsul, I., Spafford, E. and Tripunitara, M. (1998), "Computer Vulnerability Analysis", Purdue University

Howard Lipson, 2002, "Tracking and Tracing Cyber-Attacks: Technical Challenges and Global Policy Issues", CERT/CC special report

Tim Polk (1993), “Automated Tools for Testing Computer System Vulnerability”, Technical Report NIST SP 800-6, National Institute of Standards and Technology

Ethan Preston, John McHugh, 2002, "Computer security publications: information economics, shifting liability and the first amendment", 24 Whittier Law Review.


Tim Shimeall, Phil Williams, 2002, "Models of Information Security Trend Analysis", CERT/CC

Hal R. Varian, 2000, "Managing Online Security Risks", The New York Times, http://www.nytimes.com/library/financial/columns/060100econ-scene.html

CERT Technical Report, "Overview of Attack Trends", http://www.cert.org/archive/pdf/attack_trends.pdf

Symantec Inc., 2003, "Symantec Internet Security Threat Report", http://www.symantec.com

NetworkMagazine.com, 2000, "The Pros and Cons of Posting Vulnerabilities", http://www.networkmagazine.com/article/NMG20001003S0001


Appendix 1: Study of the Vendor and Social Planner's Decision Game

In Section 3.3 we pointed out that the game between the vendor and the social planner can be led by either one or played simultaneously. To study these games, we show each player's reaction given the move of the other. First, if the vendor leads, then for any τ the social planner's best reaction is T* = τ: T* < τ is not optimal because customers incur more loss while T* has no effect on τ, and T* > τ is not optimal either because after the patch is available the social planner need not keep the vulnerability secret; on the contrary, the social planner should make the information known to customers as soon as possible. Second, if the social planner leads, then for any T < τS, τ* > T.18 Hence we have the figure below. From the figure, we know that in a simultaneous game both players choose (τS, τS). In a vendor-led Stackelberg game, the vendor patches as late as possible, i.e. at τS, and the social planner follows suit. Hence the equilibrium is at (τS, τS).

18 Please refer to Appendix 2 for the proof.


Figure: Social planner and vendor’s reaction function

Appendix 2: Proofs for Section 3

Proposition 3.1: For any T < τS,

dτ*/dT > 0

Proof: ∂V/∂τ = 0 is the F.O.C. of the vendor's optimal decision given T. Differentiating both sides with respect to T:

∂²V/∂τ² · dτ/dT + ∂²V/∂τ∂T = 0

⇒ dτ/dT = −(∂²V/∂τ∂T)/(∂²V/∂τ²)

∂²V/∂τ∂T = λ(F(T) − 1)·D″(τ − T) < 0

Since V is convex in τ, ∂²V/∂τ² > 0, and therefore

dτ*/dT > 0.
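The comparative static dτ*/dT > 0 can be sanity-checked numerically using the illustrative functional forms of Section 3.5. All parameter values below are our assumptions: the grid-minimizing τ* is increasing in T for T < τS.

```python
import numpy as np

# Assumed parameters, matching the illustrative forms of Section 3.5.
k1, k2, k3, lam = 1.0, 1.0, 0.5, 0.1

def vendor_cost(tau, T):
    """C_P(tau) + lam*theta(tau, T) with C_P = 1 + e^{-k1 tau}, D = k2 x^2, F = k3 s."""
    a = min(tau, T, 1.0 / k3)
    s = np.linspace(0.0, a, 1001)
    loss = float(np.sum(k2 * (tau - s) ** 2) * k3 * (a / 1000.0))
    if T < tau:                               # disclosed before the patch ships
        loss += (1.0 - min(k3 * T, 1.0)) * k2 * (tau - T) ** 2
    return 1.0 + float(np.exp(-k1 * tau)) + lam * loss

grid = np.linspace(0.0, 10.0, 2001)
taus = []
for T in (0.0, 0.5, 1.0, 1.5):
    costs = [vendor_cost(t, T) for t in grid]
    taus.append(float(grid[int(np.argmin(costs))]))
print(taus)   # tau* is nondecreasing in T, as Proposition 3.1 predicts
```

Tighter disclosure (smaller T) pulls the vendor's optimal patching time forward, which is exactly the sign established in the proof above.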

Proposition 3.2: For any T < τS, the vendor's optimal patching time τ* is bounded such that T < τ* < τS.

Proof: From Proposition 3.1 we know that τ* increases in T. Since τS is the optimal patching time when T = ∞, we have τ* ≤ τS, and for any T < τS the inequality is strict; for T = ∞, τ* = τS by definition.

Now we show that τ* > T. Suppose that τ* ≤ T. Recall from equation (3.2) that when τ* ≤ T, the loss to customers is the same as under the secrecy policy (T = ∞), so the vendor's cost function is the same as under secrecy. Hence τ* = τS, which contradicts the strict inequality above.

Proposition 3.3: Given T, we have dτ*/dλ < 0.

Proof: Again start with ∂V/∂τ = 0. Differentiating both sides with respect to λ:

∂²V/∂τ² · dτ/dλ + ∂²V/∂τ∂λ = 0

⇒ dτ/dλ = −(∂²V/∂τ∂λ)/(∂²V/∂τ²)

∂²V/∂τ∂λ = ∫₀^T D′(τ − s) dF(s : t₀) + (1 − F(T : t₀))·D′(τ − T) > 0

⇒ dτ*/dλ < 0.

Proposition 3.4: All else held the same, we have dτ*/dt₀ < 0.

As mentioned before, for products that have been on the market longer and with which potential hackers have accumulated experience, attackers find a given vulnerability more quickly. To capture this characteristic, we use the following assumption:

Assumption: If t₀ > t̃₀, all else held constant, we have s ≺_FSD s̃.

Here s ≺_FSD s̃ denotes that s̃ first-order stochastically dominates s, implying that s̃ has a larger mean than s (E(s) < E(s̃)). Since s and s̃ correspond to t₀ and t̃₀, respectively, the assumption says that, all else held constant, for a later-discovered vulnerability (t₀ > t̃₀) attackers have gathered more experience with the product and therefore find the vulnerability more quickly (E(s) < E(s̃)).

Proof:

∂V/∂τ = ∂C_P/∂τ + λ·[∫₀^T (D′(τ − s) − D′(τ − T)) dF(s : t₀) + D′(τ − T)]

Since D″(x) > 0, D′(τ − s) − D′(τ − T) is monotonically decreasing in s. Since s ≺_FSD s̃, by the first-order stochastic dominance theorem we have

∂V(τ : t₀)/∂τ > ∂V(τ : t̃₀)/∂τ.

Suppose τ* satisfies ∂V(τ* : t₀)/∂τ = 0 and τ̃* satisfies ∂V(τ̃* : t̃₀)/∂τ = 0. Then

∂V(τ* : t̃₀)/∂τ < ∂V(τ* : t₀)/∂τ = 0.

Since V is convex in τ, τ̃* > τ*, i.e. dτ*/dt₀ < 0.

Proposition 3.5:

dT/dλ < 0

Proof:


G(τ, T, λ) = ∂S/∂τ · ∂τ/∂T + ∂S/∂T = 0 is the F.O.C. of the social planner's optimal decision on T.

Differentiating both sides with respect to λ:

∂G/∂τ · (∂τ/∂T · dT/dλ + ∂τ/∂λ) + ∂G/∂T · dT/dλ + ∂G/∂λ = 0

⇒ (∂G/∂τ · ∂τ/∂T + ∂G/∂T) · dT/dλ + ∂G/∂τ · ∂τ/∂λ + ∂G/∂λ = 0

⇒ d²S/dT² · dT/dλ + ∂G/∂τ · ∂τ/∂λ + ∂G/∂λ = 0

Recall that ∂τ/∂T = −(∂²V/∂τ∂T)/(∂²V/∂τ²). Hence

∂G/∂τ = ∂²S/∂τ² · ∂τ/∂T + ∂²S/∂τ∂T = −(∂²S/∂τ²)·(∂²V/∂τ∂T)/(∂²V/∂τ²) + ∂²S/∂τ∂T.

Since λ·(∂²S/∂τ²)/(∂²V/∂τ²) = (λC_P″ + λθ″)/(C_P″ + λθ″) < 1 and λ·∂²S/∂τ∂T = ∂²V/∂τ∂T = λ(F(T) − 1)·D″(τ − T) < 0, it follows that

∂G/∂τ < 0.

We also have

∂G/∂λ = ∂S/∂τ · ∂²τ/∂T∂λ.

Differentiating ∂τ/∂T = λ(1 − F(T))·D″(τ − T)/(∂²V/∂τ²) with respect to λ gives

∂²τ/∂T∂λ = (1 − F(T))·D″(τ − T)·C_P″/(∂²V/∂τ²)² > 0,

hence ∂G/∂λ > 0. Note also that d²S/dT² > 0 by definition, ∂S/∂τ > 0 and ∂τ/∂λ < 0. Substituting into the equation above, we obtain

dT/dλ < 0.

Before the proof of Proposition 3.6, we prove the following lemma.

Lemma 1: For any m, we have ∂F(m : t₀)/∂t₀ > 0.

Proof: From the previous assumption, if t₀ > t̃₀, all else held constant, we have s ≺_FSD s̃. Hence, for any m,

Probability(s > m) < Probability(s̃ > m), i.e. F(m : t₀) > F(m : t̃₀).

Hence ∂F(m : t₀)/∂t₀ > 0.

Proposition 3.6:

dT/dt₀ < 0

Proof:


G(τ, T, λ) = ∂S/∂τ · ∂τ/∂T + ∂S/∂T = 0 is the F.O.C. of the social planner's optimal decision on T.

Differentiating both sides with respect to t₀:

∂G/∂τ · (∂τ/∂T · dT/dt₀ + ∂τ/∂t₀) + ∂G/∂T · dT/dt₀ + ∂G/∂t₀ = 0

⇒ (∂G/∂τ · ∂τ/∂T + ∂G/∂T) · dT/dt₀ + ∂G/∂τ · ∂τ/∂t₀ + ∂G/∂t₀ = 0

⇒ d²S/dT² · dT/dt₀ + ∂G/∂τ · ∂τ/∂t₀ + ∂G/∂t₀ = 0

From the proof of Proposition 3.5, we know that ∂G/∂τ < 0. From the proof of Proposition 3.4, we know that ∂τ/∂t₀ < 0. Hence we only need to prove that ∂G/∂t₀ > 0, where

∂G/∂t₀ = ∂²S/∂τ∂t₀ · ∂τ/∂T + ∂²S/∂T∂t₀ + ∂S/∂τ · ∂²τ/∂T∂t₀.

1) First, ∂²S/∂T∂t₀ > 0:

∂²S/∂T∂t₀ = ∂[(F(T : t₀) − 1)·D′(τ − T)]/∂t₀ = (∂F(T : t₀)/∂t₀)·D′(τ − T),

which is positive by Lemma 1.

2) ∂²S/∂τ∂t₀ · ∂τ/∂T = ∂²S/∂τ∂t₀ · λ(1 − F(T : t₀))·D″(τ − T)/(∂²V/∂τ²). The integrand of ∂S/∂τ is monotonically decreasing in s; since for any t₀ > t̃₀ we have s ≺_FSD s̃, it follows that ∂²S(t₀)/∂τ∂t₀ > ∂²S(t̃₀)/∂τ∂t₀, i.e. ∂²S(t₀)/∂τ∂t₀ > 0. Hence ∂²S/∂τ∂t₀ · ∂τ/∂T > 0.

3) ∂S/∂τ · ∂²τ/∂T∂t₀ = −(∂S/∂τ)·(λD″(τ − T)/(∂²V/∂τ²))·∂F(T : t₀)/∂t₀ < 0.

Hence we only need to prove that ∂²S/∂τ∂t₀ · ∂τ/∂T + ∂S/∂τ · ∂²τ/∂T∂t₀ > 0. Let K = (∂S/∂τ)·F(T : t₀). Then

∂²S/∂τ∂t₀ · ∂τ/∂T + ∂S/∂τ · ∂²τ/∂T∂t₀ = (λD″(τ − T)/(∂²V/∂τ²))·(∂²S/∂τ∂t₀ − ∂K/∂t₀).

Since ∂K/∂t₀ < ∂²S/∂τ∂t₀, the sum is positive. Therefore ∂G/∂t₀ > 0, and hence

dT/dt₀ < 0.


Appendix 3: Proofs for Section 4

We first show that dμ*/dT > 0, the stochastic analogue of Proposition 3.1.

Proof: ∂V/∂μ = 0 is the F.O.C. of the vendor's optimal decision given T. Differentiating both sides with respect to T:

∂²V/∂μ² · dμ/dT + ∂²V/∂μ∂T = 0

⇒ dμ/dT = −(∂²V/∂μ∂T)/(∂²V/∂μ²)

By assumption, V is convex in μ, i.e. ∂²V/∂μ² > 0. Writing out V:

V = C_P(μ) + λ·[∫₀^T (∫₀^τ D(τ − s) dF(s : t₀)) dΦ(τ, μ) + ∫_T^e (∫₀^T D(τ − s) dF(s : t₀) + (1 − F(T : t₀))·D(τ − T)) dΦ(τ, μ)]

Integrating by parts,

∂V/∂T = λ·∫_T^e (F(T : t₀) − 1)·D′(τ − T) dΦ(τ, μ) < 0,

and the integrand is decreasing in τ. By the F.S.D. assumption, if μ₁ < μ₂ then τ₁ ≺_FSD τ₂, so

∂V/∂T |_{μ=μ₁} > ∂V/∂T |_{μ=μ₂}, i.e. ∂²V/∂μ∂T < 0.

Hence dμ/dT > 0.


Next we show that, given T, dμ*/dλ < 0, the stochastic analogue of Proposition 3.3.

Proof: ∂V/∂μ = 0 is the F.O.C. of the vendor's optimal decision given T. Differentiating both sides with respect to λ:

∂²V/∂μ² · dμ/dλ + ∂²V/∂μ∂λ = 0

⇒ dμ/dλ = −(∂²V/∂μ∂λ)/(∂²V/∂μ²)

By assumption, V is convex in μ, i.e. ∂²V/∂μ² > 0. From the expression for V above,

∂V/∂λ = ∫₀^T (∫₀^τ D(τ − s) dF(s : t₀)) dΦ(τ, μ) + ∫_T^e (∫₀^T D(τ − s) dF(s : t₀) + (1 − F(T : t₀))·D(τ − T)) dΦ(τ, μ),

which is increasing in τ. Similarly to the previous proof, ∂²V/∂μ∂λ > 0, hence

dμ/dλ < 0.19

Proposition 4.1: If ∂V/∂μ is convex in μ, then dμ*/dσ < 0.

19 Since the proofs of the remaining propositions under uncertainty are similar to those of the deterministic case, we omit them for brevity.


As is common in research on uncertainty, second-order stochastic dominance is a good measure of variation. Hence we use the following assumption:

S.O.S.D. Assumption: If σ < σ̃, all else held constant, we have τ ≻_SOSD τ̃. Note that τ ≻_SOSD τ̃ implies σ < σ̃.

Proof: Recall

V = C_P(μ) + λ·[∫₀^T (∫₀^τ D(τ − s) dF(s : t₀)) dΦ(τ, μ) + ∫_T^e (∫₀^T D(τ − s) dF(s : t₀) + (1 − F(T : t₀))·D(τ − T)) dΦ(τ, μ)]

According to the assumption above, when σ < σ̃ we have τ ≻_SOSD τ̃. Hence, by the second-order stochastic dominance theorem,

dV(τ : μ, σ)/dμ < dV(τ : μ, σ̃)/dμ

(note that dV/dμ must be convex here). Let μ* and μ̃* be the optimal mean patching times corresponding to σ and σ̃, respectively. Then

dV(τ : μ*, σ̃)/dμ > dV(τ : μ*, σ)/dμ = 0.

As assumed, dV/dμ is convex in μ and therefore increasing in μ. Hence, to bring dV(τ : μ̃*, σ̃)/dμ back to zero, μ̃* must be smaller. Therefore, for σ < σ̃ we have μ* > μ̃*, i.e. dμ*/dσ < 0.

Proposition 4.2: If ∂V/∂μ is convex in μ, then dT/dσ < 0.


Proof: G(μ, T, σ) = ∂S/∂μ · ∂μ/∂T + ∂S/∂T = 0 is the F.O.C. of the social planner's optimal decision on T.

Differentiating both sides with respect to σ:

∂G/∂μ · (∂μ/∂T · dT/dσ + ∂μ/∂σ) + ∂G/∂T · dT/dσ + ∂G/∂σ = 0

⇒ (∂G/∂μ · ∂μ/∂T + ∂G/∂T) · dT/dσ + ∂G/∂μ · ∂μ/∂σ + ∂G/∂σ = 0

⇒ d²S/dT² · dT/dσ + ∂G/∂μ · ∂μ/∂σ + ∂G/∂σ = 0

From Proposition 4.1, ∂μ/∂σ < 0; as in the proof of Proposition 3.5, ∂G/∂μ < 0. Hence we only need to prove that ∂G/∂σ > 0, where

∂G/∂σ = ∂²S/∂μ∂σ · ∂μ/∂T + ∂²S/∂T∂σ + ∂S/∂μ · ∂²μ/∂T∂σ.

1) By the second-order stochastic dominance theorem, ∂²S/∂μ∂σ > 0.

2) ∂²S/∂T∂σ = ∂[∫₀^e (F(T) − 1)·D′(τ − T) dΦ(τ : σ)]/∂σ < 0.

3) ∂S/∂μ · ∂²μ/∂T∂σ = (∂S/∂μ)·λ(1 − F(T))·(∂[∫₀^e D″(τ − T) dΦ(τ : σ)]/∂σ)/(∂²V/∂μ²) > 0.

Hence we only need to prove that ∂²S/∂μ∂σ · ∂μ/∂T + ∂²S/∂T∂σ > 0, which follows by applying the second-order stochastic dominance theorem to the expressions above. Therefore ∂G/∂σ > 0, and hence dT/dσ < 0.

Appendix 4: Proofs for Section 5

We prove the following properties:

dτ*/dλ < 0, dτ*/dT > 0 and dτ*/dt₀ < 0

dq*/dλ < 0, dq*/dT > 0 and dq*/dt₀ < 0

To avoid redundancy due to the similarity of the proofs, we only show dτ*/dT > 0 and dq*/dT > 0.

Proof: We start with the vendor's first-order optimality conditions:

∂V/∂τ = 0
∂V/∂q = 0

Taking the total derivative of both equations:

(∂²V/∂τ²) dτ + (∂²V/∂τ∂q) dq = −(∂²V/∂τ∂T) dT
(∂²V/∂τ∂q) dτ + (∂²V/∂q²) dq = −(∂²V/∂q∂T) dT

By Cramer's rule,

dτ/dT = | −∂²V/∂τ∂T   ∂²V/∂τ∂q |
        | −∂²V/∂q∂T   ∂²V/∂q²  |  ÷  H(τ, q)

By assumption, the determinant of the Hessian matrix H(τ, q) is positive.
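The Cramer's-rule step can be illustrated numerically. In the sketch below the second derivatives are hypothetical numbers chosen only to satisfy the sign assumptions stated in this appendix (convexity, strategic complementarity, and negative cross-partials with T); solving the total-derivative system then yields positive dτ*/dT and dq*/dT.

```python
import numpy as np

# Hypothetical second derivatives consistent with the assumed signs:
# V_tt, V_qq > 0 (convexity), V_tq < 0 (strategic complements),
# V_tT < 0 and V_qT < 0 (from the expressions derived below).
V_tt, V_qq, V_tq = 2.0, 3.0, -1.0
V_tT, V_qT = -0.5, -0.4

H = np.array([[V_tt, V_tq],
              [V_tq, V_qq]])          # Hessian of V in (tau, q); det = 5 > 0
rhs = -np.array([V_tT, V_qT])         # right-hand side of the total-derivative system

dtau_dT, dq_dT = np.linalg.solve(H, rhs)
print(dtau_dT > 0 and dq_dT > 0)      # both comparative statics are positive
```

Any other values with the same sign pattern and a positive-definite Hessian give the same signs, which is the content of the proof.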

Note that

∂²V/∂τ∂T = λ̃(F(T) − 1)·(p_t(t − T, q)·D′(τ − T) + p(t − T, q)·D″(τ − T)) < 0,

hence −∂²V/∂τ∂T > 0. Also

∂²V/∂q∂T = λ̃·∂²θ̃/∂q∂T = λ̃(F(T) − 1)·∫_τ^∞ p_q(t − T, q)·D″(t − T) dt < 0.

Since we assumed (τ, q) are strategic complements, ∂²V/∂τ∂q < 0. Hence

| −∂²V/∂τ∂T   ∂²V/∂τ∂q |
| −∂²V/∂q∂T   ∂²V/∂q²  |  > 0

Note that the strategic-complements assumption is sufficient but not necessary. Therefore dτ*/dT > 0.

Similarly, we have

dq*/dT = | ∂²V/∂τ²    −∂²V/∂τ∂T |
         | ∂²V/∂q∂τ   −∂²V/∂q∂T |  ÷  H(τ, q)  > 0

Proposition 5.2: dτ*/dλ̃ > 0 and dq*/dλ̃ > 0.


Proof: As in the proof of Proposition 3.3, we only need to prove that ∂²V/∂τ∂λ̃ < 0 and ∂²V/∂q∂λ̃ < 0.

∂²V/∂τ∂λ̃ = ∂θ̃/∂τ = ∫₀^T (p(τ − s, q) − 1)·D′(τ − s) dF(s) + (1 − F(T))·(p(τ − T, q) − 1)·D′(τ − T) < 0

∂²V/∂q∂λ̃ = ∂θ̃/∂q = ∫₀^T ∫_τ^∞ (−p_q(x − s, q)) dD(x − s) dF(s) + (1 − F(T))·∫_τ^∞ (−p_q(x − T, q)) dD(x − T) < 0

Q.E.D.


Selection of Airgap Layers for Circuit Timing Optimization. Daijoon Hyun§‡ and ... CCC code: 0277-786X/17/$18 · doi: 10.1117/12.2258034. Proc. of SPIE Vol.

IndAS-20_Accounting for Government Grants and Disclosure of ...
an economic benefit specific to an entity or range of entities ... activities of the entity. ... for Government Grants and Disclosure of Government Assistance.pdf.

Timing-Driven Placement for Hierarchical ...
101 Innovation Drive. San Jose, CA ... Permission to make digital or hard copies of all or part of this work for personal or ... simulated annealing as a tool for timing-driven placement. In the .... example only, the interested reader can refer to t

Selection of airgap layers for circuit timing optimization
J. Hwang, J. Seo, Y. Lee, S. Park, and J. Leem, “A middle-1X nm NAND flash memory cell (M1X-NAND) with highly manufacturable integration technologies,” in ...

Recommendations to IIC's Disclosure of Informaton Policy.pdf ...
financial institution making use of any public money. In accordance with international law and. norms, the fact that corporations are obligated to conduct their ...