Cross-fertilization between safety and security engineering

L. Piètre-Cambacédès (a,*), M. Bouissou (b,a)

(a) Électricité de France, 1 Av. Général de Gaulle, 92141 Clamart, France
(b) École Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry, France

* Corresponding author. Email addresses: [email protected] (L. Piètre-Cambacédès), [email protected] (M. Bouissou)

Abstract

The purpose of this paper is to give a comprehensive view of methods, models, tools and techniques that have been created in safety engineering and transposed to security engineering, or vice-versa. Since the concepts of safety and security can somewhat vary according to the context, the first section of the paper deals with the scope and definitions that will be used in the sequel. The similarities and differences between the two domains are analyzed. A careful screening of the literature (this paper contains 201 references) made it possible to identify cross-fertilizations in various fields such as architectural concepts (e.g., defense in depth, security or safety kernels), graphical formalisms (e.g., attack trees), structured risk analyses or fault tolerance and prevention techniques.

Keywords: risk, safety, security, cross-fertilization

1. Introduction

Safety and security have developed as two distinct disciplines for many years, led by partitioned communities each developing their own tools and methodologies [1, 2]. However, while safety and security are indeed distinct issues and should not be merged, they are also closely related and share many commonalities. As a matter of fact, the tools from one domain can in many cases be adapted to the other. This observation was already made in the early 1990s, for example by Jonsson [3] and Brewer [4], but bridges between communities take a long time to build, and even today the communities remain very segmented [5]. This paper aims at underlining the potential for cross-fertilization between safety and security engineering by reviewing the related initiatives and results in the scientific literature and in industry practices. Some of them have already gained wide popularity. This is the case, for instance, of attack trees in security [6], adapted from fault trees used in safety studies, or of the intrusion-tolerance paradigm [7, 8], inspired by fault-tolerance approaches historically adopted first for safety systems. Other works are less known, despite their interest; many more are still to come, as the potential for reciprocal inspiration between safety and security is still substantial. This paper is structured as follows. Section 2 clarifies the distinction made between safety and security in this review, and states the inherent limits of the chosen scope. As a fundamental basis for cross-fertilizations, Section 3 discusses general differences and similarities between safety


and security. Section 4 reviews techniques, tools and methods coming from the safety area that have been adapted to security. The reciprocal review is made in Section 5, focusing on security-inspired approaches in the safety domain. Section 6 provides an example of mutual inspiration, illustrating that cross-fertilization is of course not a strict one-way process; it then gives a summary of the reciprocal inspirations identified throughout the paper. Perspectives are finally discussed in Section 7. Section 8 concludes the paper.

2. Scope and definitions

2.1. Safety and security in the context of this review

The terms "safety" and "security" have varying meanings depending on the context and the technical communities. They differ for instance substantially between an electrical engineer, a computer scientist or a nuclear expert; they can even swap in some cases (e.g., nuclear security [9] vs. electrical security [10]). In fact, there are no absolute definitions for such concepts. Thus, before discussing similarities and differences between safety and security, and cross-fertilizations, these two concepts have to be clarified in the frame of this paper. To this end, we make use of the SEMA referential framework, described in [11]. It does not aim at replacing the terms safety and security, but rather at making their meanings and respective limits explicit in a given context, in order to avoid misunderstandings. With this objective, safety and security are graphically mapped on a conceptual grid representing the two most common distinctions between safety and security found in the literature. The first axis distinguishes between the accidental and malicious nature of the threats or events giving birth to the considered risks; the second

axis differentiates safety and security depending on their origin and consequences. This second axis separates risks originating from the environment (i.e. assets, goods and people as well as the surrounding natural world) and impacting the system, from those coming from the system and impacting the environment. A system-to-system dimension is added to complete the coverage. As shown in Fig. 1, in our paper:


• security is related to risks originating from or exacerbated by malicious intent, independently of the nature of the related consequences,

• whereas safety addresses accidental ones, i.e. without malicious intent, but with potential impacts on the system environment.

Again, this does not pretend to define safety and security in absolute terms, but only in the context of this research.

System ⇒System

Accidental Malicious

Env. ⇒System

Security, in the frame of this article

Safety, in the frame of this article

Figure 1: Safety and security in the SEMA referential framework

Finally, in the frame of this review, we focus on cybersecurity aspects rather than on physical security. This is both because cybersecurity is the field of expertise of the authors, and because of the relative lack of scientific literature dealing with physical security. As a consequence, the term security refers in fact to cybersecurity in most cases, except when explicitly stated. Nevertheless, numerous statements and findings of this paper could be extended to physical security.

2.2. Link with the concept of dependability

Safety and security are often associated with the concept of dependability, specifically in the taxonomy of Laprie et al. [12]. Here, dependability is defined as "the ability to deliver service that can justifiably be trusted"; in such a framework, safety is an attribute of dependability, along with availability, reliability, integrity and maintainability; security refers to the availability and integrity attributes and to confidentiality. Nevertheless, such a definition of dependability, although widespread, is not completely consensual (see for instance the definition of the International Electrotechnical Commission (IEC) mentioned below); moreover, in the specific context of our discussion, a significant drawback is that it places safety and security at different levels. On one hand, security is considered as a multidimensional concept, like dependability, covering dependability attributes (integrity and availability) to which confidentiality is added. On the other hand, safety is considered as a simple attribute of dependability, at the same level as reliability, availability or integrity. This seems questionable, as safety is of a different, intrinsically systemic nature than these attributes: while it partially relies on them, it emerges only at a global scale [13], like security. In contrast, the IEC defines dependability as the "collective term used to describe the availability performance and its influencing factors: reliability performance, maintainability performance and maintenance support performance" [14]. This definition has been preferred for our investigations. Nevertheless, even with this definition, safety and dependability are still closely linked, as system safety depends on the dependability of the safety functions and their supporting entities. Similarly, system security depends on the dependability of security functions and their related supports. In this paper, we have included dependability techniques and tools as safety ones when they are often encountered in safety studies, even though some of them may be used for dependability studies without safety-related aspects. Finally, it should be noted that a large part of the concepts and the taxonomy developed by Laprie et al. [12] is still relevant with respect to safety and security as we have defined them earlier. This is for instance the case of the concept of faults, corresponding to the causes of errors, which are themselves system states liable to lead to failures, i.e. effective deviations from the expected service. Each of these notions is thoroughly categorized in [12], safety and security being reflected in the distinction made between malicious and non-malicious faults.

2.3. Other vocabulary issues

The terminological difficulties attached to the terms safety and security mentioned in Section 2.1 are discussed in more depth in [11]. Since the associated communities have developed separately, they have also developed their own terminology, which sometimes varies according to the industrial sector, despite being used to describe very similar, and in some cases identical, concepts. This can be seen in particular in the definitions of the concept of risk in Table 1. For example, the field of security prefers to use the term threat, whereas the field of safety tends to use hazard to describe identical concepts in several standards [2]. On the other hand, safety and security sometimes use the same term, but to mean different things (e.g., "incident", an event with minor consequences in safety, and an infringement or breach in security). Even more ambiguously, the same term may be used by both communities, but in each of them the meaning may vary according to the situation, without it being possible to define a consistent correspondence (this is the case, for instance, of the term "weakness"). In this paper, we have tried to use the terms with the strongest security (resp. safety) connotation when dealing

with security (resp. safety) concepts. For example, we use “threat” or “hazard” depending on the context.

2.4. Safety and security interdependencies

Although safety and security are considered here as distinct matters, this does not deny their tight links. Safety and security can be strongly interdependent: one can be a condition for the other; the associated technical or organizational measures can reinforce each other or, in some cases, be antagonistic [15, 16]. Mastering such interdependencies is still an open and challenging issue; nevertheless, the present review excludes the tools, methodologies and practices associated with this issue. They would deserve a dedicated review by themselves, partially initiated in [16, 17], which will be the object of a future journal paper by the authors.

3. General similarities and differences

3.1. Similarities

3.1.1. Risk as a fundamental concept

The first point that security and safety have in common is the fact that the concept of risk is used extensively in assessing and managing both. Moreover, risk is defined at a macroscopic level in both cases using the same "formula": risk = likelihood x consequences. The ISO/IEC Guide on Risk Management Vocabulary [18] provides the following definition: "the combination of the probability of an event and its consequences", which is consistent with the guides published by the same organization, especially on the use of terminology in the field of safety [19]. Table 1 provides other examples of definitions of risk, confirming a shared view between the safety and security communities. The risk analysis process used when designing a new system or when looking at existing systems tends to address the same generic questions in both safety and security, only looked at from different perspectives. Kaplan and Garrick summarized them in the following "triplet" [26]: "What can go wrong? What is the likelihood? What are the consequences?". In both fields, risk analysis is based around similar phases involving analyzing threats (or hazards, depending on the context) and vulnerabilities (or weaknesses), identifying the potential consequences, assessing the likelihood of occurrence and ranking risks. We can extend the scope of these similarities to risk management in its broadest sense, which includes, in addition to the risk analysis itself, risk evaluation in terms of consequence criticality for the organization, and risk treatment decisions (cf. Fig. 2). Traditionally, the options for treating risk have been put into four categories: risk avoidance, risk reduction (or mitigation), risk acceptance and risk transfer. This applies to both safety and security. Figure 2 presents the stages discussed above in relation to one another in a diagram that is consistent with the ISO/IEC guide to Risk Management Vocabulary [18]

and the ISO/IEC 27005 Standard on IT Risk Management [27]. However, although the overall approach to risk analysis is similar, the methods, tools and formats are adapted to the sources of threats or hazards, and to their related consequences, under consideration. Two comments should be made at this point. On the one hand, some emerging risk management strategies prefer to adopt a global approach, considering safety and security together (as stated in Section 2.4, such approaches are out of the scope of this paper). On the other hand, in some high-risk industries, especially in the nuclear industry, the foundations of risk management vary according to the context, particularly according to the country, the threats or the hazards under consideration [28]. For example, the French nuclear regulatory body, the ASN (Autorité de Sûreté Nucléaire), favors the use of a deterministic approach, where systems are designed and operated in relation to "design basis accidents" (postulated accidents that a nuclear power plant must be designed to withstand without impact on public health and safety) without directly incorporating the notion of likelihood of occurrence, and where probabilistic studies are complementary. In contrast, in countries under Anglo-Saxon influence, probabilistic approaches have a stronger role in safety demonstrations (Zio discusses the origins and implications of these two conceptions, specifically in the nuclear sector, in [29], where he calls them "structuralist" and "rationalist"). Similarly, the physical protection of nuclear facilities is generally based on the adoption of a document known as the DBT (Design Basis Threat), established by national authorities, which defines the characteristics of potential sources of attack and the reference scenarios for which the protection of a facility must be scaled, regardless of any probabilistic consideration [30, 31]. Moreover, when it comes to the cybersecurity of I&C (Instrumentation & Control) systems, a "graded approach" is often adopted, which proportions the security measures to the potential consequences only [31, 32].
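As an illustration of the shared macroscopic "risk = likelihood x consequences" formula and of the ranking step common to both fields, the following minimal Python sketch outlines a risk register; the scenario names and the 1-5 ordinal scales are illustrative assumptions, not values taken from the cited standards.

```python
# Minimal sketch of the generic risk-analysis steps shared by safety and security:
# list scenarios, estimate likelihood and consequences, compute risk, rank.
# Scales and scenarios are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    likelihood: int   # ordinal scale, e.g. 1 (rare) .. 5 (frequent)
    consequence: int  # ordinal scale, e.g. 1 (minor) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # macroscopic "formula": risk = likelihood x consequences
        return self.likelihood * self.consequence

scenarios = [
    Scenario("pump failure (hazard)", likelihood=3, consequence=4),
    Scenario("remote intrusion on the control system (threat)", likelihood=2, consequence=5),
    Scenario("operator error (hazard)", likelihood=4, consequence=2),
]

# Risk evaluation: rank scenarios; risk treatment (avoid, reduce, transfer,
# accept) would then be decided for each of them, starting from the top.
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"risk={s.risk:>2}  {s.name}")
```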

3.1.2. Similarities in design and operation principles

In design, both in safety and security, the risk prevention measures and systems are defined first [31]. The further upstream in the design process such measures are considered, the more effective and financially efficient their subsequent implementation. Safety and security issues can indeed have a major impact on the design of both digital and physical systems and architectures. For instance, safety requirements, like the single-failure criterion in the nuclear industry, lead to redundant, diversified and often physically separate systems; security requirements also have direct consequences for the design and installation of facilities [31]. The notions of risk and graded approach mentioned in the previous section play a central role during the design phase, both in safety and security.



Table 1: Examples of definitions of the concept of risk

Safety
Nuclear (IAEA Glossary [9]): A multiattribute quantity expressing hazard, danger or chance of harmful or injurious consequences associated with actual or potential exposures. It relates to quantities such as the probability that specific deleterious consequences may arise and the magnitude and character of such consequences.
Civil aviation (FAA Safety Handbook [20]): Risk is an expression of possible loss over a specific period of time or number of operational cycles. It may be indicated by the probability of an accident times the damage in dollars, lives, and/or operating units. Hazard probability and severity are measurable and, when combined, give us risk.
Chemicals (CCPS Glossary [21]): Measure of human injury, environmental damage, or economic loss in terms of the incident likelihood and the magnitude of the loss or injury.
Oil & gas (OLF-104 [22]): The combination of the probability of an event and its consequence.

Security
Internet (IETF RFC 4949 [23]): An expectation of loss expressed as the probability that a particular threat will exploit a particular vulnerability with a particular harmful result.
General IT (NIST SP800-53 [24], NIST FIPS200 [25]): The level of impact on agency operations (including mission, functions, image, or reputation), agency assets, or individuals, resulting from the operation of an information system given the potential impact of a threat and the likelihood of that threat occurring.

Figure 2: Risk management and analysis (diagram relating the following elements: perimeter definition, i.e. the system and its technical and functional components; risk analysis, including threat identification and characterization, vulnerability identification and characterization, consequences evaluation, and risk estimation (assignment of parameter values, quantification); risk evaluation (ordering and evaluation of importance); risk treatment (avoidance, reduction, transfer, acceptance); all within risk assessment and risk management, together with risk communication and risk monitoring)

In particular, the defence-in-depth approach, initially deployed in military circles and then in nuclear safety [33], is now applied to other sectors, such as computer security [34] or physical security [35]. Section 4.1.2 looks at the substance of this approach and at the history of mutual inspiration between safety and security in this area. However, we can already note here, like [31], that the field of security often implements a first level of dissuasion, which is only appropriate for intelligent and malicious threats and is not relevant from a traditional safety perspective. Generally speaking, although the principle of defence-in-depth is relevant to both safety and security, its deployment varies to take into account the differences in the origin of risks and in the nature of consequences.

From the standpoint of software quality, safety and security require the use of specific and often costly development techniques and processes [13], intended to limit hazardous behavior and security vulnerabilities (see Sections 4.4 and 5.1 for examples). In operating conditions, there are also many similarities: safety and security both require detailed monitoring and in-depth knowledge of the system and of any upgrades [31]. Current safety (e.g. [36]) and security (e.g. [32, 37, 38]) codes of practice, for example, require inventories to be maintained, changes to be tracked, temporary fixes to be recorded, etc. The notion of preventive maintenance also plays a key role, in both safety [28] and security [39]. Operating feedback must be reviewed regularly and in great detail, providing updates for codes of practice, which are also updated from detailed monitoring of regulatory, scientific and technical evolutions. Compliance with safety and security rules and codes of practice is re-examined via audits and dedicated inspections. Crisis management in safety and in security is also similar in many ways [31]: both involve the establishment of an emergency plan and the completion of regular test exercises. Exercises can be used to check that plans and available resources are adequate for potential accidents or attacks; to assess the training of personnel and the times associated with the various stages of the plan; to test the decision-making processes or even to improve interfaces and coordination between the various entities. Having said this, the detail of the procedures, and especially the entities involved, may also change depending on the sources of threats or hazards and the type of consequences considered (for instance, the State plays different roles depending on the situations, as discussed in Section 3.2.5).

Finally, research work and common sense concur in denouncing complexity as the enemy of both safety and security. In computer security, the number of vulnerabilities in an application can be crudely linked to the number of lines of code used [40], but it can also be linked in a more sophisticated manner to the wealth and diversity of its functionalities [41, 42]. In terms of safety, we can refer to the first part of EDF R&D's collective work on this topic, devoted to industrial risk and the management of complexity [28].

3.1.3. Non-additivity and non-compositionality

Contrary to what common sense might suggest, safety and security measures are rarely cumulative. To illustrate this point, Deleuze et al. cite in [15] the example of security guards: several security guards working at the same time become less alert individually because they place their faith in the vigilance of the other guards. In computer security, there is also a good example in cryptography with the use of the DES (Data Encryption Standard) algorithm. For a long time, DES was seen as the reference in symmetric encryption; it provided adequate protection for sensitive but unclassified information until the beginning of the 1990s. The limited size of its key (56 bits) and design weaknesses have since led to its replacement by AES (Advanced Encryption Standard) [42]. Double encryption, using the same DES algorithm but with a different key (an operation known as Double DES), falls a long way short of doubling the strength of the encryption: because of the meet-in-the-middle attack, the effective strength of this kind of double encryption is only 57 bits instead of the 112 bits anticipated [43] (a toy illustration of this effect is given at the end of this section). In another field, Anderson presents a telling example of a series of measures that failed one by one in 2007, resulting in six armed thermonuclear bombs being unintentionally flown in a bomber across US airspace and then left for several hours without surveillance [42]. Furthermore, safety and security are not always modular [1, 44]: the assembly of two components considered as safe or secure does not necessarily form a new system that inherits these qualities, which also raises great difficulties in terms of the certification of modular systems [45]. Ultimately, security and safety must therefore be assessed in overall terms.
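The limited benefit of Double DES stems from the meet-in-the-middle attack: with one known plaintext/ciphertext pair, the two keys can be searched separately, for roughly 2 x 2^56 operations instead of the 2^112 a naive view would suggest. The following Python sketch illustrates the mechanism on a deliberately tiny, made-up 8-bit "cipher" (an assumption chosen for readability; it is not DES).

```python
# Toy meet-in-the-middle attack against double encryption.
# enc/dec form a trivial invertible 8-bit "cipher" invented for this sketch.
from collections import defaultdict

KEYSPACE = 256  # 2^8 keys here, instead of 2^56 for DES

def enc(k, p):
    return ((p ^ k) + k) % 256

def dec(k, c):
    return ((c - k) % 256) ^ k

# One known plaintext/ciphertext pair produced with two secret keys.
secret_k1, secret_k2 = 42, 200
plaintext = 0x5A
ciphertext = enc(secret_k2, enc(secret_k1, plaintext))

# Forward table: middle value -> possible first keys (2^8 encryptions).
forward = defaultdict(list)
for k1 in range(KEYSPACE):
    forward[enc(k1, plaintext)].append(k1)

# Backward pass: 2^8 decryptions, matched against the table.
candidates = [(k1, k2)
              for k2 in range(KEYSPACE)
              for k1 in forward.get(dec(k2, ciphertext), [])]

assert (secret_k1, secret_k2) in candidates
print(f"{len(candidates)} candidate key pairs found with ~2*{KEYSPACE} cipher operations")
# Against real Double DES the same idea yields ~2^57 operations, not 2^112 [43].
```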

3.1.4. Other similarities

Safety and security, the eternal killjoys. As pointed out in [13, 2, 46], security and safety share a common burden: they both imply guarding against high-consequence events which are essentially "negative" (attacks, accidents), in contrast to desired outcomes, which are essentially "positive" (service delivered, goods produced). As a consequence, they are often perceived as holding back productivity or, more generally, as obstacles to the functional requirements and aims of the systems or organizations analyzed. Moreover, in such conditions, it can be especially tricky to assess them. On the one hand, this analysis requires a degree of objectivity among stakeholders that is difficult to obtain, and on the other hand, it requires detailed knowledge of sensitive areas, not always compatible with recourse to external experts. Furthermore, whereas efficient risk management requires good communication of early warning signs ("weak signals"), this can be hindered by managers who are tempted to suppress them or manage them locally [47]. Evaluating efforts to improve safety or security can be even more problematic; this can often only be done in the conditional sense, in terms of potential situations and outcomes avoided. Put simply, investment is generally difficult to justify in practical terms based on

ROI (Return On Investment): both safety and security are therefore often among the first areas to be affected by cuts during tight budget negotiations. Sadly, many industrial catastrophes support this observation (e.g., the BP oil refinery explosion in 2005 [48]). Lastly, decision-makers tend to have the same kind of irrational behavior when exposed to security or safety risk: Choo cites generic examples of distortion in risk perception in [49], Schneier [50] and Anderson [42] describe examples that are more specific to computer security. Fortunately, the development of specific safety and security cultures, combined with adapted regulatory obligations, can lead to a greater accountability among management and to a reduction in the difficulties mentioned.

Security and safety culture. Safety culture is that "assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, protection and safety issues receive the attention warranted by their significance" [9]. Security culture can be defined in similar terms. In our understanding, both share significant similarities: in both cases, the explicit commitment of senior management, a proactive training policy and an understanding by all participants of the stakes and of the role they play in terms of safety and security are essential components [31, 51]. To achieve this, it is fundamental that threats be perceived as credible [51]; safety/security cultures must foster an attitude of general vigilance and a constant, proactively questioning outlook among personnel. It is also of central importance to share and discuss information (but with different methods, see Section 3.2.4). In spite of their deep similarities, the two cultures are not however one and the same (see for instance their difference in terms of information handling, developed in Section 3.2.4); they must co-exist, providing mutual improvement and support to each other [31, 51].

The role of the human factor. The human factor plays a critical role in both safety and security. This role was acknowledged later in computer security, especially following social engineering attacks (i.e., attacks relying on non-technical means, such as manipulation and breach of trust, that enable an attacker to obtain information useful for the development of his/her attack), made popular by the hacker Kevin Mitnick [52]. References [42, 40] provide a comprehensive, up-to-date description of the issue, which has been increasingly studied since the start of the 1990s [53]. In safety, the incident at the Three Mile Island nuclear plant in 1979 was a turning point, leading to earlier and more significant efforts in this area, of which [28] provides a good overview.

Regulatory and legal framework. The main principles that underpin the safety regulatory framework and those governing the security of high-risk industrial facilities are, at a macroscopic level, very similar [13, 31]: in both cases,

the State appoints a competent authority (e.g. in France, the ASN for nuclear safety and the ANSSI (Agence Nationale de la Sécurité des Systèmes d'Information, the French network and information security agency) for the IT security of governmental infrastructures), implements an authorization system (in the form of licensing in Anglo-Saxon countries), assesses the provisions put in place by operators and inspects the facilities. Depending on the situation, regulatory frameworks may rely on a single authority covering both safety and security, or on distinct but coordinated authorities. International bodies seek to harmonize practices between States (e.g. the IAEA (International Atomic Energy Agency) for nuclear safety and security); industrial associations fulfill the same function between private companies in some sectors (e.g. the IATA (International Air Transport Association) in aviation). The methods adopted to implement regulatory and legislative systems vary from one country to another; for example, the scope for initiative left to operators can be very different. Lastly, we should also note that States are generally more involved in security issues (see Section 3.2.5).

3.2. Differences

3.2.1. Differences in the rating of consequences

Another difference highlighted in [2], which is more subtle and also more open to discussion, concerns the rating in terms of consequences in risk analyses. Rating generally tends to be more marked in safety than in security, there being many more intermediary levels between the simple incidents, which can be tolerated (regulatory limits), and incidents associated with potentially catastrophic accidental sequences, which are always considered to be unacceptable. For example, the DO-178B [54] governing software safety in aeronautics features five levels of criticality; the international scale INES (International Nuclear Event Scale), rating the seriousness of nuclear safety-related events, has seven [55]. In contrast, the risk analysis methods used in security are often less prescriptive when it comes to the number of levels to use; moreover, they typically present examples of scales that rarely exceed three to four levels. This difference can be explained by the inherent difficulty in assessing the consequences of an attack, which are often deliberately concealed by the attacker, who is trying to limit the probability of detection. In more conceptual terms, it is difficult to avoid adopting a binary approach when assessing the consequences of a security event: it is either considered as informative, when it corresponds to an authorized action that is nevertheless significant from a security standpoint, and possibly suspicious, or it is considered to be an infringement and therefore points to a potential complete failure of the defence(s) in place. Another illustration of this difference in rating, more binary in security than in safety, can be found in the notion of the "broken" cryptographic





algorithm. As soon as the cryptology community identifies a theoretical weakness in an algorithm, it is considered to be broken, even if no feasible attack has been mounted in practice. Only those algorithms that have no identified theoretical weaknesses are recommended. The saying "attacks only get better, they don't get worse", credited to the NSA (National Security Agency), reflects this kind of behavior well. These precautions are also supported by the inherent difficulty in knowing the resources and the skills of potential attackers.

3.2.2. Assessment of threats/hazards

Assessing a (security) threat is radically different from assessing a (safety) hazard. In the first case, the sources of the threat(s) to be assessed are usually not well known to the analyst, and cover an extremely broad range of possible scenarios [56, 42, 57, 58]. In the second case, the characteristics of the hazard(s) are more accessible, and the number of scenarios to be taken into consideration may also be reduced to a set that is restricted but sufficient to be considered as significant. In security, the threat is potentially intelligent and capable of adaptation. In particular, as stressed in [15, 35, 59], it can adapt in relation to the vulnerabilities of the system under consideration, or even to counter-measures and defensive responses, whereas hazards and weaknesses do not have this kind of dynamic interaction in safety. Similarly, once the hazards have been identified in safety, they are often considered to be relatively stable over time, meaning that an approach involving reference scenarios can be adopted; in security, the profiles, motivations and resources of attackers change more rapidly and less predictably; they depend on many factors, often broader in scope than in safety, and can include the geopolitical or even the economic context. The fact that there is a kind of race between attackers and defenders also contributes to the instability and ever-changing nature of these factors. Threat reference scenarios must therefore be updated much more frequently. Furthermore, hazards can often be characterized by adopting a statistical approach. In particular, the probabilistic modeling of events such as breakdowns, involved in safety scenarios, frequently uses large amounts of archive data (e.g. IEEE for electronics and electrical engineering). In security, these kinds of statistical approaches cannot be adopted to characterize maliciousness: among other aspects, too little data is shared and pooled, for reasons of image and confidentiality (see Section 3.2.4). We will discuss the use of probabilities in security in more detail in the following section. Finally, the inherent difficulties in assessing security threats also allow for a great deal of subjectivity. This is particularly the case for risk perception by the public regarding national security or high-risk industrial facilities, and more generally for perception by users of the system analyzed. Shifts towards irrational and paranoid behavior [56] are therefore more frequently seen than for safety-type risks, which are more suited to objective, or at least rational, reasoning. In light of these considerations, we can see more clearly the appeal of deterministic approaches, simplifying the term "likelihood" used in the macro-equation of risk presented in Section 3.1.1 and instead adopting a rating of counter-measures indexed only on the seriousness of potential consequences.

3.2.3. Separate theoretical and methodological frameworks

We are interested here in the general differences between the techniques used to assess the level of safety of a system and those used to assess its security. The first observation, also made by Line et al. [2] and Nicol et al. [60], is that quantitative methods are historically more widely used and industrialized in the field of safety than in security. The specific features of security, described in the earlier sections, explain this situation, since security threats are by their very nature difficult to characterize in quantitative terms; qualitative methods, combining a prescriptive approach and expert opinions, are often preferred. Moreover, quantitative methods are mainly based on probabilistic approaches. Generally speaking, the probabilities to be characterized can be assessed using one of the two following approaches: by practical tests and series of measurements, performed on the actual system or a prototype in the operating environment; or by using abstract models, only capturing partial data, whose representativity is very dependent on the format adopted and the behavior of the system (such approaches are often referred to as model-based approaches). Safety and security do not lend themselves to these two approaches in the same way. So, whereas it is possible to assess the security of a system in practical terms using intrusion tests, it is much trickier to test full-scale safety scenarios, especially when there may be catastrophic consequences involved [61]. In fact, in the field of safety, testing and measuring methods tend to be concerned more with the quantitative assessment of attributes, such as reliability or availability, of certain system components that play a critical role in the safety scenarios to be studied. Model-based approaches are used in preparation for test-based approaches or when testing methods are impractical or too expensive. Having said this, we should once again make a distinction between their use in safety and security. First of all, these approaches are widely adopted for safety studies in industry [62], whereas they remain rather marginal in security [60]. Furthermore, model-based approaches can be divided into two categories [62], unevenly adapted to safety and security issues. A distinction shall be made between those involving static (or structural) models, which assume independence among components, and those involving dynamic (or behavioral) models, which allow the notion of sequences and, more broadly, of dependence among states and components over time. Whereas safety studies are based on one category or another [62], in security, the



adaptive nature of the threat and the attacker's intelligence clearly favor dynamic modeling. More generally, probabilistic approaches in security raise fundamental questions that remain largely unanswered, directly linked to the difficulties of characterizing threats which, by definition, are at least to some extent beyond the reach of all forms of objective analysis. Some analysts consider that the use of probabilities, as a powerful uncertainty characterization tool, is therefore suited to this purpose [63, 64]; others, however, believe that maliciousness eludes probabilistic modeling, or at least makes it very difficult to use [65]. Moreover, this kind of modeling must cover many more aspects in security, due to the larger number of unknowns to be characterized: for example, the discovery of vulnerabilities, their concrete exploitation, the behavior of counter-measures in terms of detection, prevention and reaction, and the dynamic interactions between the various elements (simpler, or even non-existent, in safety). From a more fundamental perspective, the use of probabilities in each field is in fact part of a different conception of this notion. Epistemologically and in simple terms, probabilities can be considered from two points of view [66]. The "frequentist", objectivist point of view is based on looking for frequency in a set of data, which assumes being able to repeat similar experiments, potentially infinitely, to characterize an aspect of the targeted system. According to this approach, there is no probability without a set of repetitions. From a subjectivist point of view, however, probabilities are, to adopt the terms used in [66], "a measurement of trust, reasonable hope and digital coding of a state of knowledge", and can therefore particularly deal with one-off phenomena. For frequentists, probabilities are characteristic of the event itself, whereas for subjectivists, probabilities are conditioned by the degree of knowledge about the phenomenon to be characterized and its causes, which may be changing. Although in safety the use of probabilities can fall both under the frequentist and the subjectivist approaches, depending on the aspects studied [28], it seems to be clearly more characteristic of the second approach in the field of security [64]. Finally, in both safety and security, a probabilistic model may be quantified either analytically, i.e. by using mathematical formulae directly linking the parameters of the model to the sought values, or by using a Monte-Carlo simulation [67]. The latter approach is extremely versatile and can be used to quantify many types of models, but its performance varies and it can be time-consuming [62]. Another difference between safety and security can be seen here. The performance of approaches using Monte-Carlo simulations is directly linked to the value of the sought probabilities: the lower the probabilities, as is the case for breakdowns in safety systems with extremely robust designs, the longer the simulation takes. As stated in [68, 69], probabilistic models in security involve larger probabilities when compared with those used in safety. This is because they are associated with maliciousness and a determination to attack that is already effective, which corresponds to conditional probabilities of failure given the attacks. Larger probabilities impact the tools treating the related probabilistic models. For instance, this calls for fault tree tools (or attack tree tools, see Section 4.2.1) allowing exact calculation of the top event probability, as approximate methods based on bounds are not applicable in these cases [70]. Moreover, security modeling may require non-coherent fault trees [71]. In this situation, the performance of Monte-Carlo simulations is also clearly better than in safety, where probabilities are lower.
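A small numerical illustration of the point about Monte-Carlo performance: with crude (non-accelerated) sampling, the number of trials needed for a given relative accuracy grows roughly as 1/p, so the very small failure probabilities typical of well-designed safety systems are much more expensive to estimate than the larger conditional probabilities handled in security models. The sketch below is a minimal, illustrative Python example, not a model drawn from the cited references.

```python
import random

def mc_estimate(p_true, n_samples, rng):
    """Crude Monte-Carlo estimate of an event probability by direct sampling."""
    hits = sum(1 for _ in range(n_samples) if rng.random() < p_true)
    return hits / n_samples

rng = random.Random(0)
# For ~10% relative standard error, roughly n ~ 100 / p samples are needed:
# the rarer the event, the longer the simulation.
for p in (1e-1, 1e-2, 1e-4):
    n = int(100 / p)
    print(f"p={p:.0e}  samples={n:>8}  estimate={mc_estimate(p, n, rng):.2e}")
```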

3.2.4. Access to information and shared operating experience

Information confidentiality is both a specific aim of security and a strong component of its culture. The malicious nature of the risks under consideration explains this marked difference with safety, where transparency and broad access to information are most often sought. In security, threat evaluations, risk assessments and descriptions of counter-measures are all considered to be highly sensitive information, which could be used maliciously if it were to fall into the wrong hands. This said, safety and security are both strengthened by sharing experience and know-how: only the methods associated with this sharing differ significantly, due to the requirement for confidentiality [31].

3.2.5. Even wider involvement of the State in security

In their approach comparing nuclear safety and security [31], Jalouneix et al. note a wider involvement of the State in nuclear security. This observation may be extended to the security of critical industrial facilities and infrastructures. We can think of several reasons for this situation. On the one hand, whereas safety provisions are generally implemented by operators, we cannot expect them to be responsible for all of the measures required to manage malicious risk well: they lack the resources and the legitimacy to do so. The State has a role to play here, especially in terms of providing information about malicious risks; it can also take part in the response through the intervention of the police/law enforcement officers or the judiciary; it lays down the confidentiality rules for information pertaining to national security and contributes to security vetting for access to sensitive data. On the other hand, in safety, operators control the source of the risk to a larger extent since they are responsible for the system, whereas in security, the threats are by nature more difficult to control and may come from the outside: in such cases, the State is the most capable of acting.

4. From safety to security

4.1. Architectural concepts

4.1.1. From fault-tolerance to intrusion-tolerance

Fault tolerance is one of the main categories of dependability techniques [12]. It enables the system to perform its function and to provide the required service despite faults. Conventional fault tolerance techniques include redundancy, diversification and separation [72]. They have been used in dependability in sensitive sectors such as space and the nuclear industry for many years, particularly for systems that perform safety functions. The principle of fault tolerance was only considered and transposed to the security domain quite late; for a long time, the security domain only considered the prevention and detection of attacks (i.e., of malicious faults caused by intrusions or hostile programs) [4]. The first transpositions were made in the mid-1980s, when the LAAS (Laboratoire d'Analyse et d'Architecture des Systèmes, the French System Analysis and Architecture Laboratory) introduced the intrusion tolerance concept [7, 8] and proposed an application of this concept through a so-called "fragmentation-replication" technique. This technique was inspired by [73] and was later improved in many works (e.g. [74]) under the name Fragmentation-Redundancy-Scattering (FRS). It uses the principles of redundancy to protect information confidentiality, integrity and availability by a specific breakdown into redundant fragments. At the same time, Dobson and Randell at the University of Newcastle-upon-Tyne in the United Kingdom defended a similar approach called security fault tolerance [75]. They more widely emphasized the relevance of dependability tools and approaches in computer security, and made strong arguments for their use. The LAAS and Newcastle teams shared the same opinion about the limitations inherent to so-called "conventional" security approaches centered on prevention and detection of faults. Unfortunately, the complexity and the increasingly distributed nature of systems, together with the unpredictability of attackers, make successful intrusions and attacks inevitable; systems must also be designed to tolerate them. The idea took root, the intrusion tolerance concept crossed the Atlantic [76] and, as work on it increased, it has progressively led to a discipline that, while still exploratory, is now well organized. Verissimo gives an excellent summary of it in [77]. In particular, approaches to intrusion tolerance benefited from extensive cooperative research projects in the United States under the driving force of the DoD (particularly the ITS and OASIS programs; the http://www.tolerantsystems.org site provides access to the different projects of these programs), but also in Europe within the European Framework Programmes (FP) for Research and Development. Note in particular the MAFTIA (Malicious- and Accidental-Fault Tolerance for Internet Applications) project [78], carried out from 2000 to 2003, which developed a conceptual model unifying fault tolerance and security concepts, and brought numerous scientific results in the field of protocols and middleware for intrusion-tolerant Internet applications [79]. Furthermore, intrusion tolerance is increasingly considered as being a component of wider concepts such as survivability, following work on this concept done by Carnegie

Mellon University [80], and "resilience", around which a large amount of European research in the field now appears to be organized. The results of the CRUTIAL (Critical UTility InfrastructurAL resilience) project [81, 82], and also the results of the AMBER (Assessing, Measuring and BEnchmarking Resilience) project and of the ReSIST (Resilience for Survivability in IST) excellence network dedicated to the study of resilience for large complex computer systems, are consistent with this momentum. Finally, other work has considered specific aspects of fault tolerance techniques for security, without specifically integrating them into a more global intrusion-tolerant architecture. Thus, Littlewood and Strigini studied the links between redundancy, diversity and security in [83]; Totel et al. use diversity-based approaches to improve the performance of intrusion detection systems [84].

4.1.2. Defence-in-depth

The so-called defence-in-depth approach is now considered as a fundamental principle of computer security. Although the concept itself has been used for centuries in the military world (see Section 3.1.2), the term was first made popular through safety in the design of nuclear power plants. A specific application appeared in modern reactors through three successive independent barriers (fuel cladding, reactor vessel and reactor containment) to confine radioactive material [85]. More generally, the idea is that each device should a priori be considered as vulnerable; each barrier must be independent of the others and self-sufficient in protecting the environment. Another application of defence-in-depth is the combination of a specifically safety-oriented design with procedures and operational control also adapted to this purpose. It also leads to diversification and redundancy of safety systems in the power plant [85]. However, although the defence-in-depth concept is also used in the field of computer security (e.g., [86, 87, 88]), its use is not as well defined as in safety (for instance by the IAEA [85]) and changes depending on the context. In particular, the independence of the barriers, and the need to cover both design and operational aspects, are often forgotten. The notion of DiD in computer security is actually closer to a simplification of Reason's so-called "Swiss cheese" metaphor [89] (also derived from the safety world), illustrated in Figure 3: the undesired event occurs only if several barriers fail. Reference [34] proposes an analysis and rationalization of the defence-in-depth approach for computer security. For both safety and security, independence and diversity play a key role in ensuring the efficiency of DiD. Finally, note that the "security in depth" expression is also used to express the same concept in physical security.
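A quick numerical illustration of why the independence of barriers matters for defence-in-depth, in both safety and security: with independent barriers, the probability that all of them fail is the product of the individual failure probabilities, whereas a common cause (or a shared vulnerability) that defeats every barrier at once dominates the result. The figures in the following Python sketch are illustrative assumptions.

```python
# Defence-in-depth: effect of barrier independence on the probability that
# the undesired event gets through all barriers (illustrative numbers only).
p_fail = [0.1, 0.05, 0.01]   # failure probability of each barrier, taken alone

p_all_fail_independent = 1.0
for p in p_fail:
    p_all_fail_independent *= p          # 5.0e-05 if the barriers are independent

p_common_cause = 0.02                    # one event (or shared flaw) defeats all barriers
p_all_fail_dependent = p_common_cause + (1 - p_common_cause) * p_all_fail_independent

print(f"independent barriers: {p_all_fail_independent:.1e}")
print(f"with a common cause:  {p_all_fail_dependent:.1e}")  # dominated by the shared weakness
```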





Figure 3: Illustration of Reason's "Swiss cheese" model [89] (stacked defensive barriers with holes; when barriers are absent or defective, the accident or mishap can propagate through them)

4.2. Graphical formalisms

4.2.1. From fault trees to attack trees

Attack trees are a graphical representation of attacks in a logical tree structure: the attacker's goal is the root, and the different means to achieve this goal are the leaves of the tree, connected by AND/OR logical gates. Although Schneier's paper [6] is often cited as the seminal paper, the concept and the term were in fact developed earlier [90]. Furthermore, a very similar kind of model, called threat trees, on which attack trees were built [90], dates back to even earlier efforts [91] (referenced in [90]). Such models have been directly inspired by fault trees, as mentioned in [91]. Attack trees are, to our knowledge, the only adaptation of a safety tool to security that has been widely used in practice outside the academic field (particularly in the United States). The work done by Schneier [6] has made this type of model popular: it is now used in a wide variety of contexts (e.g. for SCADA systems [92, 93, 94], protocols [95, 96], online banking [97], e-voting systems [98], mobile ad-hoc networks [99], smart metering and metrology [100, 101, 102]) and included in various risk analysis methods (e.g. [103, 104]). Many extensions have been proposed in more exploratory approaches, including the integration of counter-measures (as in defence trees [105] or attack-defense trees [106]), the interface with fuzzy logic [107], game theory [108, 109, 110] and the consideration of time and dynamic aspects [111, 112].
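To make the structure of such models concrete, here is a minimal Python sketch of an attack tree with AND/OR gates and an exact probability roll-up assuming independent leaves, as typically done for static trees; the tree, the leaf names and the numbers are illustrative assumptions, not taken from the cited works.

```python
# Minimal attack-tree sketch: AND/OR gates over elementary attack steps,
# with an exact probability computation assuming independent leaves.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Leaf:
    name: str
    p: float            # probability of success of the elementary step

@dataclass
class Gate:
    kind: str           # "AND" or "OR"
    children: List[Union["Gate", Leaf]] = field(default_factory=list)

def p_success(node) -> float:
    if isinstance(node, Leaf):
        return node.p
    probs = [p_success(child) for child in node.children]
    result = 1.0
    if node.kind == "AND":
        for p in probs:
            result *= p          # all children must succeed
        return result
    for p in probs:
        result *= (1.0 - p)      # OR: complement of "no child succeeds"
    return 1.0 - result

root = Gate("OR", [              # root = attacker's goal
    Gate("AND", [Leaf("phish an operator", 0.4), Leaf("escalate privileges", 0.5)]),
    Leaf("exploit an exposed service", 0.2),
])
print(f"P(goal reached) = {p_success(root):.3f}")  # exact value: 0.36
```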

4.2.2. BDMP for attack modeling

BDMP (Boolean logic Driven Markov Processes) is a graphical formalism visually close to fault trees, but which is in fact a dynamic model enabling the specification of large Markov chains in a compact form [113]. It has been used industrially, mainly in France, for numerous quantitative safety studies, including power substations [114], electrical supplies of data centers [115], manufacturing plants, offshore windmill farms, safety systems of nuclear power plants [113] and hydraulic safety systems of dams. It has recently been adapted to security modeling [116, 117]. BDMP offer a valuable and unique trade-off between readability, ease of appropriation, modeling power and quantification capabilities in the domain of graphical attack modeling. Indeed, the domain is dominated by two approaches: attack trees, mentioned in the previous section, which are very readable but lack modeling power because of their static nature, and Petri nets and their derivatives, which are extremely powerful but difficult to build and read for security experts [117]. In this respect, the adaptation of BDMP constitutes another good illustration of a fruitful cross-fertilization from safety to security.

4.3. Structured risk analyses

4.3.1. HAZOP in security

HAZOP (HAZard and OPerability studies) is a qualitative risk analysis method, structured around the use of guide words and tables [118, 119]. Invented in the chemical industry in the 1960s, it then spread to other domains such as the oil industry, and has been adapted to different contexts. In particular, Winther et al. were the first to propose the use of HAZOP in computer security, and to define guide words more appropriate than those used for the safety of programmed systems [120]. Risks dealing with confidentiality, integrity and availability are covered from a security point of view. In her thesis, Foster adapts HAZOP to improve the collection and analysis of the needs and requirements of security protocols before their actual design, in an approach called Vulnerability Identification and Analysis (VIA) [121, 96]. Srivatanakul [122] uses the expected system behaviors formulated in UML (Unified Modeling Language) use cases to feed an adapted version of HAZOP. He precisely describes how the approach should be applied, with guide words specifically chosen for security. Several detailed application examples are also given. Globally, the use of HAZOP in security can identify new risks that a less systematic, macroscopic analysis might not identify. The use of HAZOP also organizes the questioning and forces the analyst to consider unusual scenarios (its guide-word mechanic is sketched below). However, as in safety, team work is still necessary to provide sufficiently broad coverage. Furthermore, the abstraction level related to the restricted list of guide words can hide risks that will not be considered; this is for example the case for communication aspects in Srivatanakul's approach [122]. The choice of guide words is always critical and requires maximum attention. Finally, the method is fairly difficult to use and is intrinsically repetitive.
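The systematic questioning that HAZOP enforces can be sketched in a few lines: guide words are crossed with the attributes of the element under study to generate deviation prompts for the analysis team. The guide words and attributes below are illustrative assumptions inspired by the general idea of [120], not the exact word list proposed there.

```python
# Sketch of the HAZOP mechanic: guide words x attributes -> deviation prompts.
# Word lists are illustrative assumptions, not the ones defined in [120].
guide_words = ["NO", "MORE", "LESS", "AS WELL AS", "OTHER THAN", "EARLY", "LATE"]
attributes = ["data flow", "authentication", "message ordering"]

for attribute in attributes:
    for guide_word in guide_words:
        print(f"Deviation '{guide_word} {attribute}': possible causes? consequences? safeguards?")
```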


4.3.2. From sneak circuit analysis to sneak path security analysis

Sneak circuit analysis and sneak analysis have been used in the US for safety analyses since the 1960s [123], mainly in the aerospace and nuclear industries. In 2003, Baybutt proposed an adaptation to the field of computer security: he presents a nine-step approach called "sneak path security analysis" in [124, 125], based on an analysis of the

network topology of the architecture to be designed, in order to identify potential attack paths. These paths connect "sources" (internal and external attackers) to targets, exploiting vulnerabilities or bypassing computer security systems. The results are recorded in the form of tables, and can be used to systematically produce recommendations. Obviously, the analysis depends closely on the precision and completeness of the description of the architecture used as input data (as stressed by Srivatanakul in [122]).

4.3.3. Zonal Safety Analysis (ZSA)

Contrary to methods relying on a functional decomposition, Zonal Safety Analysis (ZSA) focuses on the spatial distribution of components, and on how their proximity can lead to faults even if they are functionally independent. Developed in the aeronautical industry [126], it can for instance take into account the proximity of hydraulic systems to electrical systems, despite their functional independence. Srivatanakul's thesis presents an adaptation of ZSA concepts to security [122]. He includes physical and logical (data processing) aspects, but also behavioral and temporal aspects. He analyzes various types of channels that can exist between these zones, including physical proximity, functional dependence, environmental dependence, procedural adjacency and time relation. The method is based on the use of HAZOP-type guide words and also formalizes the analysis results in the form of tables. Its main advantage is that it takes account of transverse dimensions that are often ignored in more vertical analyses.

4.3.4. From FMEA to IMEA

The FMEA (Failure Modes and Effects Analysis) approach [127] is one of the best-known and most widely used methodologies in safety analysis. Aagedal et al. used it in a security context in the frame of the CORAS European project [128]. More recently, Gorbenko et al. adapted it to security and renamed it IMEA (Intrusion Modes and Effects Analysis) [129]. Instead of describing random failure modes, the inputs to the tables describe intrusion techniques. The approach is described and used on a Web Services architecture in [129]; an IMEA analysis of a SCADA system is given in [130].

4.3.5. From safety cases to security assurance cases

The safety case concept is used in high-risk industries, especially in the United Kingdom. Inge gives a history of it in [131]. A safety case may be defined as "a structured argument, supported by a body of evidence that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given operating environment" [132] or as "a documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment" [133]. The underlying motivation for safety cases is simple: rather than having to regularly update regulations to adapt to practices and techniques, the responsibility is transferred to operators, whose safety cases

are regularly evaluated by regulators. A safety case typically covers [131]:

• the definition of the scope of the system or the activity concerned, and a detailed description of its context or its environment;

• the management system used to achieve safety;

• a mention of regulatory and legal requirements, applicable standards and baselines, with evidence that they are respected;

• evidence that risks have been well identified and appropriately checked, and that the residual risk level is acceptable;

• guarantees about the objectivity of the arguments and the evidence.

4.3.4. From FMEA to IMEA
The FMEA (Failure Modes and Effects Analysis) approach [127] is one of the best-known and most widely used methodologies in safety analysis. Aagedal et al. used it in a security context within the CORAS European project [128]. More recently, Gorbenko et al. adapted it to security and renamed it IMEA (Intrusion Modes and Effects Analysis) [129]: instead of describing random failure modes, the table entries describe intrusion techniques. The approach is described and applied to a Web Services architecture in [129]; an IMEA analysis of a SCADA system is given in [130].

4.3.5. From safety cases to security assurance cases
The safety case concept is used in risk industries, especially in the United Kingdom; Inge gives a history of it in [131]. A safety case may be defined as "a structured argument, supported by a body of evidence that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given operating environment" [132], or as "a documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment" [133]. The underlying motivation for safety cases is simple: rather than having to regularly update regulations to keep up with practices and techniques, the responsibility is transferred to operators, whose safety cases are regularly evaluated by regulators. A safety case typically covers [131]:
• the definition of the scope of the system or activity concerned, and a detailed description of its context or environment;
• the management system used to achieve safety;
• the applicable regulatory and legal requirements, standards and baselines, with evidence that they are respected;
• evidence that risks have been properly identified and appropriately controlled, and that the residual risk level is acceptable;
• guarantees about the objectivity of the arguments and the evidence.
The safety case concept was transposed and generalized in the software field as assurance cases [134]. Obviously, assurance cases can themselves be used for safety purposes [134, 135], but they may also be associated with other properties. An international working group, including representatives of the European Union JRC and of Carnegie Mellon University, met regularly between 2004 and 2007 to work on an adaptation of the concept to the field of computer security (assurance cases for security, or security assurance cases). Reference [136] contains a summary of its work, which has been reused in many academic publications and opens up possibilities that are still relevant. Furthermore, the approach appears to have built up momentum in the United States, particularly through US CERT's Build Security In initiative (https://buildsecurityin.us-cert.gov/bsi/artciles/knowledge/assurance.html), which promotes work done subsequently by Carnegie Mellon University [137, 138]. Kelly's thesis describes a notation called GSN (Goal Structuring Notation) [139] that graphically organizes the arguments, evidence and justifications making up a safety case, improving not only its readability but also its communication and maintenance. Such a notation remains relevant in the framework of the security assurance cases discussed above: Moleyar and Miller used it to increase the formalism of attack trees and to associate counter-measures with vulnerabilities [140], and Alexander et al. give a recent state of the art on security assurance cases, and on how GSN can be used in such contexts, in [141]. Finally, note that the CAE (Claims-Argument-Evidence) notation, developed by the consulting company Adelard as an alternative to GSN, could also be used for security purposes, beyond its initial safety usage.

4.3.6. GEMS (General Error Modeling System)
In [142], Brostoff and Sasse make the case for taking non-technical security aspects into account at the design stage, in particular the human factor and the socio-technical aspects of the systems considered. They propose to do this using Reason's GEMS (General Error Modeling System)


model [89], derived from safety. GEMS notably distinguishes between latent faults and active faults while taking human behavior into account, going beyond the framework of the analyses usually made in security.


4.3.7. From SIL levels to SAL levels
Since it was first introduced at the beginning of the 1990s, the SIL (Safety Integrity Level) concept has occupied an increasingly important place in the field of E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related systems. It was initially introduced sector by sector into national standards (see the first version of the military standard Def-Stan-00-55 [143] in 1991 in the United Kingdom, and ISA (International Society of Automation) 84-01 in 1996 in the United States [144]), and was subsequently generalized and internationalized by IEC 61508 [36]. In 2008, Kube and Singer proposed Security Assurance Levels (SALs) in [145], inspired by the SIL safety levels; the concept was then further developed by Gilsinn and Schierholz in 2010 [146], before its inclusion in the ISA99 series of standards [147]. The EDSA (Embedded Device Security Assurance) certification [148], dealing with the cybersecurity of industrial control systems, also makes direct use of these SAL concepts and specifically mentions the relation with IEC 61508. A joint workgroup set up by ISA 99, ISA's technical committee on the security of industrial computer systems, and ISA 84, responsible for safety aspects, has been continuing this work [149].

5. From security to safety
5.1. From security kernels to safety kernels
The concentration of the critical security functions of a system in a kernel distinct from the remainder of the system is a frequent approach in computer security, implemented as early as the 1970s [155, 156]. At that time, the kernel actually included the control functions necessary for the implementation of multi-level security policies [157]. In 1986, Rushby claimed that this architectural approach could be used for other properties associated with safety [158]. He gave options for formalization and discussed the implications for software architectures. A few years earlier, Leveson had also emphasized the advantages of kernel-based architectural approaches for safety software [159]. In particular, she emphasized the benefits of a kernel of limited size and complexity, limiting errors and providing better maintainability. Several safety-oriented software architectures based on these principles have been proposed since then (e.g. [160, 161]).
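As an illustration of the kernel idea in a safety setting, the following minimal sketch (with invented invariants and names, not taken from the cited architectures) shows a small, separately verifiable component through which every actuator command must pass; the surrounding application may be large and faulty without being able to violate the invariants enforced here.

class SafetyKernel:
    """Tiny component that mediates every actuator command.

    Only this small piece of code needs to be verified exhaustively; the much
    larger application above it cannot violate the invariants enforced below.
    """

    MAX_POWER = 100          # illustrative absolute limit
    MAX_RATE_OF_CHANGE = 5   # illustrative rate limit per command

    def __init__(self):
        self._last_power = 0

    def request_power(self, requested: int) -> int:
        # Invariant 1: never exceed the absolute power limit.
        power = min(max(requested, 0), self.MAX_POWER)
        # Invariant 2: never change power faster than the allowed rate.
        delta = power - self._last_power
        if abs(delta) > self.MAX_RATE_OF_CHANGE:
            power = self._last_power + self.MAX_RATE_OF_CHANGE * (1 if delta > 0 else -1)
        self._last_power = power
        return power          # value actually sent to the actuator


kernel = SafetyKernel()
for request in [3, 50, 200, -10]:       # possibly erroneous application requests
    print(request, "->", kernel.request_power(request))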

4.4. Test techniques and related aspects
4.4.1. Fault injection
Fault injection techniques, in hardware (electrical or radiative corruption) or in software (corruption of variables, registers or memories), form part of the dependability arsenal and have been used for several decades to eliminate and predict faults [150, 72]. The techniques are applied to existing systems or prototypes, or at the design stage to simulation models. In 1996, Voas suggested that these techniques should be adapted to computer security, adaptively injecting incorrect inputs into application code to provoke abnormal behavior of the software for malicious purposes [151]. Since then, the principle has been broadly reused and the techniques have been improved, both from a defensive point of view, for the discovery of vulnerabilities before software release, and from an offensive point of view, to attack existing applications [152].
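A minimal sketch of the software flavour of the technique is given below, assuming a hypothetical parse_frame function as the target; it simply injects random byte corruptions into otherwise valid input and reports any behavior other than the expected, handled error. Real fault-injection and fuzzing tools are of course far more elaborate.

import random

def parse_frame(data: bytes) -> int:
    """Hypothetical function under test, with a deliberately planted bug."""
    if len(data) < 4:
        raise ValueError("frame too short")
    if len(data) > 4 and data[4] == 0xFF:   # planted bug triggered by rare inputs
        return 1 // 0
    return int.from_bytes(data[:4], "big") % 1000

def fuzz(target, seed_input: bytes, iterations: int = 2000) -> None:
    """Inject random byte-level corruptions and report unexpected behaviours."""
    random.seed(0)
    for i in range(iterations):
        data = bytearray(seed_input)
        for _ in range(random.randint(1, 4)):          # corrupt a few bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            target(bytes(data))
        except ValueError:
            pass                                       # expected, handled error
        except Exception as exc:                       # anything else is a finding
            print(f"iteration {i}: {exc!r} on input {bytes(data)!r}")

fuzz(parse_frame, seed_input=b"\x00\x01\x02\x03payload")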

5.2. Formal models for safety properties
Numerous formal security models have been specifically designed to characterize and rigorously check properties of access control policies (e.g. the Biba model [162]). Some of these formal security models have been considered for safety problems. In 1998, Simpson et al. [163] took inspiration from the non-interference property, introduced in security by Goguen and Meseguer in 1982 [164], to rigorously model fail-safe (fault with no impact on safety), fail-stop (fault and associated repairs with no impact on safety) and fail-operational (fault and associated repairs with no impact on safety nor on functional aspects of the system) behaviors. The formalization was done in CSP (Communicating Sequential Processes) [165]. Shortly afterwards, Stavridou and Dutertre [166] reconsidered more globally the idea of adapting formal security models to safety, in particular broadening the study to other safety properties. Finally, Totel et al. proposed a model in [167] for checking integrity, in order to integrate components (including COTS) with heterogeneous criticality levels into architectures with high safety stakes. This model allows bidirectional communications between objects with different criticality levels, under certain conditions and validations. The theoretical object-oriented model was followed by experiments within the framework of the European GUARDS (Generic Upgradeable Architecture for Real-time Dependable Systems) project about ten years ago [168], and continues to attract interest in the field of avionics (see recent work done by LAAS on this subject [169, 170]).
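For illustration, the strict Biba rules ("no read down, no write up") mentioned above can be sketched as follows; the integrity levels and their mapping to safety-critical versus COTS components are illustrative assumptions, and models such as Totel's precisely relax these rules by allowing validated cross-level flows.

from enum import IntEnum

class Integrity(IntEnum):
    LOW = 1          # e.g. unvalidated COTS component
    MEDIUM = 2
    HIGH = 3         # e.g. safety-critical function

def can_read(subject: Integrity, obj: Integrity) -> bool:
    """Strict Biba: a subject may only read objects of equal or higher integrity."""
    return obj >= subject

def can_write(subject: Integrity, obj: Integrity) -> bool:
    """Strict Biba: a subject may only write objects of equal or lower integrity."""
    return obj <= subject

# A safety-critical task must not base its decisions on low-integrity data,
# and a low-integrity component must not overwrite safety-critical data.
assert not can_read(Integrity.HIGH, Integrity.LOW)
assert not can_write(Integrity.LOW, Integrity.HIGH)
assert can_read(Integrity.HIGH, Integrity.HIGH) and can_write(Integrity.HIGH, Integrity.LOW)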

4.4.2. Reliability growth models for security
In 2005, Ozment [153] applied "reliability growth" techniques, used for fault prediction and the evaluation of software reliability, to security, using various approaches to model the rate of discovery of vulnerabilities in software. Rescorla had suggested the approach in [154], but Ozment considerably extends it, concentrating especially on OpenBSD vulnerabilities. The estimates derived from the models must provide security metrics enabling the comparison of different products. Ozment specifies the necessary assumptions and the difficulties of such an adaptation, and compares the quality of the predictions made by the various models against the history of vulnerabilities in his case study. The results obtained are encouraging, but he freely admits that the input data are insufficient to establish the credibility of his deductions.
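As a sketch of the underlying idea, the code below fits one classical reliability-growth form, an exponential (Goel-Okumoto-style) mean value function N(t) = a(1 - exp(-b t)), to an invented cumulative count of discovered vulnerabilities, using a deliberately naive least-squares grid search; a and b can then be read as crude "total vulnerabilities" and "discovery rate" metrics. This is only an assumed illustration of the family of models discussed, not Ozment's actual procedure.

import math

# Invented cumulative counts of vulnerabilities discovered per quarter.
t_obs = list(range(1, 13))
n_obs = [3, 6, 9, 11, 14, 15, 17, 18, 19, 19, 20, 21]

def mean_value(t: float, a: float, b: float) -> float:
    """Goel-Okumoto-style mean value function: expected discoveries by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(a: float, b: float) -> float:
    """Sum of squared errors between the model and the observed counts."""
    return sum((mean_value(t, a, b) - n) ** 2 for t, n in zip(t_obs, n_obs))

# Naive grid search for the least-squares parameters (good enough for a sketch).
a_hat, b_hat = min(
    ((a, b) for a in range(15, 40) for b in [i / 100 for i in range(1, 100)]),
    key=lambda p: sse(*p),
)
print(f"a = {a_hat} (total vulnerabilities expected), b = {b_hat} (discovery rate)")
print(f"expected discoveries in quarters 13-16: "
      f"{mean_value(16, a_hat, b_hat) - mean_value(12, a_hat, b_hat):.1f}")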





5.3. Misuse cases and misuse sequence diagrams in safety
Several adaptations of UML use-case diagrams have been made for security (mainly the abuse cases of McDermott [171], the security use cases of Firesmith [172] and the misuse cases of Sindre et al. [173]). Misuse cases are the most cited and documented. In [174], Sindre examines the advantages of the misuse case diagram formalism for safety; in particular, he compares it with four conventional approaches of the field, namely fault trees, HAZOP, Cause-Consequence Analysis (CCA) diagrams and FMEA. Note that Alexander had briefly considered a similar use in [175, 176], without developing it as Sindre did. Reference [177] contains a comparative study of the use of misuse cases and FMEA by a group of students on a case study, to better identify the advantages and limits of the two approaches. Misuse cases are not suitable for all situations, particularly for systems dominated by continuous and physical processes; furthermore, they cannot be substituted for traditional safety methods, but they complement them, as they are relevant in earlier phases of system design [174]. More recently, Katta et al. combined UML sequence diagrams with misuse cases in a new formalism called misuse sequence diagrams [178]. Raspotnig and Opdahl then turned this security formalism into a safety-oriented one, called failure sequence diagrams [179], complementing classical FMEA safety analyses.

6. An overview of safety-security mutual inspirations


6.1. An example of mutual influence: SIL and EAL levels
The previous sections presented methodological influences between safety and security as being unidirectional; they corresponded either to adaptations of techniques or approaches from the safety domain to security (Section 4), or from security to safety (Section 5). In fact, inspirations and influences between these two communities are not always so straightforward, and sometimes form part of a more subtle dialectic. This can be illustrated by the example of SIL (mentioned in Section 4.3.7) and Evaluation Assurance Levels (EAL), defined in the Common Criteria security standard [180]. Unlike a SIL in its domain, an EAL does not give any information about the absolute security level of the evaluated product, but rather about the quality of the evaluation of security functions that are defined differently in each evaluation. The complementarity of the two approaches (SIL and EAL) is remarkable [2, 61]. It was emphasized many times, both by the security community [145] and by the safety community [1, 181]. Novak et al. made use of the two approaches to define their development and operation model aimed at integrating safety and security [182, 183, 184] (although this subject is per se out of our scope). Several European projects worked on reconciling the SIL and EAL concepts. As early as the second Framework Programme (FP) (1987-1991), the Drive Safely project studied the complementarity of the ITSEC security levels, ancestors of the Common Criteria, and the SILs of what was at the time still a preliminary version of IEC 61508 [181]; this approach was reused by the ACRUDA (Assessment Criteria and RUles for Digital Architectures) project in the 4th FP. At the end of the 1990s, the SQUALE (Security, Safety and Quality Evaluation for Dependable Systems) project [185] proposed to associate a confidence level between 1 and 4 with each major attribute of Laprie and Avizienis's taxonomy [12] (see Table 2), forming a dependability profile; levels graduated from 1 to 3 are added, characterizing the rigor, detail and independence of the related evaluation activities, in the spirit of the EALs. Deliverable [186] details the correspondences between the SQUALE criteria and the SIL concepts as defined in the related standards, and with other references such as DO-178B in safety and the ITSEC and Common Criteria in security. Subsequent to the Drive Safely project, the British automobile industry also continued the integration of SIL and EAL type approaches, leading to results integrated into the MISRA (Motor Industry Software Reliability Association) recommendations [187]. As another example, Section 4.3.7 mentioned the work done by Kube and Singer and by the ISA to adapt SIL concepts to security in the field of industrial control systems. Finally, various correspondences have been produced between EALs and the levels of the DO-178B standard [54] in the avionics field, such as the one proposed by Alves-Foss in [188].

Table 2: SQUALE levels

Attribute        Confidence level
Availability     A1-A4
Confidentiality  C1-C4
Reliability      R1-R4
Integrity        I1-I4
Safety           S1-S4
Maintainability  M1-M4

6.2. Summary of inventoried cross-fertilizations
The mutual inspirations included in the above inventory touch on many aspects, ranging from architectural concepts (kernels, defense in depth, use of diversity) to formal methods, and including risk analysis and test methodologies. Table 3 lists the different adaptations identified




in the previous pages, grouped according to the categorization used in this survey. For each adaptation, the main bibliographical references are recalled, and its qualification according to the taxonomy of Avizienis et al. [12] (i.e. fault prevention, fault tolerance, fault removal and fault forecasting) is also provided. One can notice the strong predominance of inspirations from safety towards the security field. This can be explained in particular by the greater maturity of the corresponding scientific body of knowledge: the first safety analyses appeared in the first half of the twentieth century (e.g., comparative studies of aircraft safety with respect to the number of their engines [189]), when the very problem of computer security had not even been defined (modern, i.e. digital, computers are generally considered to have been invented during World War II [190]; information security in its broadest meaning is of course an older concern, the first cryptographic techniques dating back to Antiquity [42]). That said, the dynamics and increasing range of computer security problems have led to the development of a scientific and technical community that is now very active, is getting structured, and is making progress in the field through rigorous and innovative approaches. There is no doubt that the number of adaptations of security techniques to problems specific to safety will increase in the future. A few perspectives on this subject are presented in the next section.

7. Perspectives
Despite the number of mutual inspirations that have already led to successful adaptations, the variety of the safety and security toolboxes and the barriers that still exist between the two communities leave a broad potential for cross-fertilization. We close this paper with a few suggestions of what we believe to be potentially fruitful research directions.

7.1. From security to safety
7.1.1. Formal modeling of safe behaviors
Section 5.2 presented work carried out during the 1990s to adapt formal security policy models to safety. Although the results obtained at the time were very theoretical, they appeared promising and opened up attractive prospects in terms of the composition of properties for the analysis of complex systems. However, little effort seems to have been made since then in this type of theoretical exploration, while security models have continued to evolve. In our opinion, one interesting line of research would be to adapt more recent formal security models than those considered so far to safety.

7.1.2. Risk analysis
Although some safety risk analysis methods have been adapted to security (e.g. HAZOP, see Section 4.3.1), we have not identified any adaptations in the other direction. However, even if the formalization of risk analysis methods is more recent in security (see [191] for a survey), their diversity and progress make them a remarkable pool of methodological ideas and approaches that could certainly be useful to the safety community, if it looked at them in more detail.

7.2. From safety to security
7.2.1. Graphical modeling
The adaptation of graphical models derived from safety to security has already led to significant results, particularly through attack trees, as discussed in Section 4.2.1. That said, the diversity of safety modeling techniques suggests the possibility of new adaptations that have not yet been considered, with equally promising perspectives. In particular, while the adaptation of deductive approaches has led to threat trees and attack trees, inductive approaches, of which the event tree method is typical, have rarely been considered in computer security modeling. We have been unable to identify any such usage apart from the work of Smith and Lim in the 1980s [192]. In fact, inductive approaches have received more attention for terrorist risk [193, 194, 195, 196] and in physical security, as in the work by Cojazzi et al. in association with attack trees [197]. Moreover, to our knowledge, none of the safety approaches combining inductive and deductive reasoning (e.g., Cause-Consequence Analysis [198]) has been adapted to security risk. Finally, significant potential also lies in graphical formalisms based on dynamic models, which are better suited to security problems; the adaptation of BDMP presented in Section 4.2.2 falls into this category.

7.2.2. Taking account of the human factor
As already stated in Section 3.1.4, the importance of the human factor is a strong common point between safety and security. We agree with the observation made by Brostoff and Sasse [142]: these aspects have been structured with methodological approaches in safety for many years [199], and this would be very beneficial to security, where they have only been identified much more recently. Apart from the work done by these authors (see Section 4.3.6), we have not identified any explicit use of the know-how acquired in the safety field for computer security. Our bibliographic search suggests that this field of investigation has not been explored sufficiently, considering its potential.

7.3. Towards safety and security integrated tools
Although explicitly excluded from the scope of this paper (see Section 2.4), approaches allowing a better mastering of safety and security interdependencies will become necessary for industries in which those concerns converge on the same systems and installations [200, 16]. In this area, all the tools extended from one domain to the other mentioned in this paper logically become good candidates to provide integrative frameworks dealing with both safety and security. Naturally, such a possibility has to be studied carefully on a case-by-case basis, but this kind of integrative approach has already been followed with promising results by Fovino et al. with fault trees and attack trees [201], and by Pietre-Cambacedes and Bouissou with BDMP [16]. There is little doubt that other techniques mentioned in this paper will be explored with a similar objective, and that they will complete the set of emerging approaches aiming at modeling safety-security interdependencies. The survey of such approaches will be the subject of a future paper.



Table 3: An overall vision of existing cross-fertilizations between safety and security engineering tools and methodologies

From safety to security (safety-oriented approach → adaptation to security, main references, category in the taxonomy of [12]):

Architectural concepts
• Fault-tolerant architectures → Intrusion-tolerant architectures [8, 75, 77, 79]; FRS technique and survivable networks [8, 74]; diversity-based intrusion detection [84] (Tolerance)
• Defense in depth → Defense in depth / security in depth [86, 34] (Tolerance)

Graphical modeling
• Fault trees → Threat trees, attack trees [91, 6] (Forecasting)
• Dynamic fault trees → Dynamic attack trees [111] (Forecasting)
• BDMP → BDMP for security [117, 116] (Forecasting)

Structured risk assessment
• HAZOP → HAZOP for security, Vulnerability Identification & Analysis HAZOPs [120, 121, 96, 122] (Forecasting)
• Sneak circuit analysis → Sneak path security analysis [124, 125] (Forecasting)
• Zonal analysis → Security zonal analysis [122] (Forecasting)
• Safety cases → Security assurance cases [136, 137, 138] (Other)
• FMEA → IMEA [129, 130] (Forecasting)
• GEMS → GEMS for security [142] (Prevention/removal)
• SIL (Safety Integrity Levels) → SAL (Security Assurance Levels) [145, 147] (Prevention/removal)

Testing
• Fault injection → Fault injection, fuzzing [151] (Removal/forecasting)
• Software reliability growth → Software security growth modeling [153] (Forecasting)

From security to safety (security-oriented approach → adaptation to safety, main references, category):

Architecture
• Security kernel → Safety kernel [158, 159] (Prevention/tolerance)

Graphical modeling
• Misuse case → Misuse case for safety [174, 175, 176, 177] (Forecasting)
• Misuse sequence diagram → Failure sequence diagram [179] (Forecasting)

Formal modeling
• Non-interference property, non-deducibility, causality → Formalization of safe behaviors (fail-safe, fail-stop, ...) [163, 166] (Prevention)
• Integrity-oriented access control models (e.g., the Biba model) → Model with multiple levels of integrity ("Totel's model") [167] (Prevention)

8. Conclusion
In this paper, we began by identifying and then characterizing the main similarities and general differences between safety and security. Even if the risk concept plays a fundamental role in both cases, the different nature of the sources and consequences considered, together with the historic separation of the communities of these fields, have led to the development of different methodological practices and approaches, in spite of their possible synergies. Pioneering work around fault-tolerance concepts started to change this situation some 25 years ago. While the rapprochement between the security and safety communities is still limited today, it has nevertheless already resulted in various adaptations from one domain to the other. This paper has presented a comprehensive survey of such adaptations. In the domain of architectural concepts and testing, the transposition of methods and approaches from safety to security or vice-versa can be relatively straightforward, provided that the larger scope of scenarios to consider, associated with security and intelligent attackers, is taken into account. Regarding risk assessment, the situation is more complex: the safety domain appears to be more mature and mainly uses probabilistic models, whereas security risk assessment seems less mature, and the use of probabilities in this context is still exploratory and requires careful justification. This difference in maturity is due to the fact that safety studies are older than security studies and, more importantly, that their hazard perimeter is relatively stable, whereas in the security domain new threats appear constantly. Nevertheless, some risk assessment methods have been successfully adapted from safety to security: attack trees, the security counterpart of fault trees in safety, have already acquired relative recognition, while others are only just emerging (like BDMP) or have not yet been envisaged. The difference in maturity between safety and security is in fact not limited to the sole domain of risk assessment; it also explains why, so far, there have been more adaptations from safety to security than in the opposite direction. That said, thanks to the progress of the security discipline, this tendency may change in the future, and a better balance may be found between the inspirations from one domain to the other. In any case, the potential for fruitful cross-fertilization between safety and security remains high.

References [1] E. Schoitsch, Design for safety and security of complex embedded systems: a unified approach, in: Proceedings of the


[2]

[3]

[4]

[5]

[6] [7]

[8]

[9]

[10] [11]

[12]

[13] [14]

[15]

[16]

[17] [18]

[19]

[20] [21]

[22]

NATO Advanced Research Workshop on Cyberspace Security and Defense, Gdansk, Poland, 2004, pp. 161–174. M. B. Line, O. Nordland, L. Røstad, I. A. Tøndel, Safety vs. security?, in: Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management (PSAM 2006), New Orleans, Louisiana, USA, 2006. E. Jonsson, T. Olovsson, On the integration of security and dependability in computer systems, in: Proceedings of the IASTED International Conference on Reliability, Quality Control and Risk Assessment, Washington, D.C, USA, 1992, pp. 93–97. D. F. C. Brewer, Applying security techniques to achieve safety, in: Proceedings of the 3rd Safety-critical Systems Symposium (SSS’93), Bristol, U.K., 1993, pp. 246–256. C. Axelrod, Applying lessons from safety-critical systems to security-critical software, in: Proceedings of the 2011 IEEE Long Island Systems, Applications and Technology Conference (LISAT), Waltham, MA, USA, 2011, pp. 1–6. B. Schneier, Attack trees: Modeling security threats, Dr. Dobb’s Journal 12 (24) (1999) 21–29. J.-C. Laprie, Y. Deswarte, Saturne : syst` emes r´ epartis tol´ erant les fautes et les intrusions, Tech. Rep. 84.023, LAAS, (in French) (Apr. 1984). J. Fraga, D. Powell, A fault and intrusion-tolerant file system, in: Proceedings of the 3rd IFIP International Conference on Computer Security (SEC’85), Ireland, 1985, pp. 203–218. International Atomic Energy Agency (IAEA), Safety glossary: terminology used in nuclear safety and radiation protection, Ref. STI/PUB/1290, 2007 edition. European Network of Transmission System Operators for Electricity, UCTE operation handbook - glossary, v2.2 (Jul. 2004). L. Pi` etre-Cambac´ ed` es, C. Chaudet, The SEMA referential framework: avoiding ambiguities in the terms “security” and “safety”, International Journal of Critical Infrastructure Protection 3 (2) (2010) 55–66. A. Avizienis, J.-C. Laprie, B. Randell, C. Landwehr, Basic concepts and taxonomy of dependable and secure computing, IEEE Transactions on Dependable and Secure Computing 1 (1) (2004) 11–33. N. Leveson, Software safety: Why, what, and how, ACM Computing Surveys 18 (2) (1986) 125–163. International Electrotechnical Commission (IEC), International electrotechnical vocabulary – chapter 191: Dependability and quality of service, IEC 60500-191 and first amendment (Mar. 1999). G. Deleuze, E. Chˆ atelet, P. Lacl´ emence, J. Piwowar, B. Affeltranger, Are safety and security in industrial systems antagonistic or complementary issues?, in: Proceedings of the17th European Safety and Reliability Conference (ESREL’08), Valencia, Spain, 2008. L. Pi` etre-Cambac´ ed` es, M. Bouissou, Modeling safety and security interdepedencies with BDMP (Boolean logic Driven Markov Processes), in: Proceeding of the 2010 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2010), Istanbul, Turkey, 2010, pp. 2852–2861. L. Pi` etre-Cambac´ ed` es, On the relationships between safety and security (in French), Ph.D. thesis, Telecom ParisTech (2010). International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Risk management – vocabulary – guidelines for use in standards, IEC Guide 73 (Jun. 2002). International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Safety aspects – guidelines for their inclusion in standards, ISO/IEC Guide 51, 2nd Edition (Jan. 1999). U.S. Federal Aviation Administration (FAA), FAA System Safety Handbook (Dec. 2000). 
American Institute of Chemical Engineers (AIChE), Center for Chemical Process Safety (CCPS), Combined glossary of terms (Mar. 2005). Norwegian Oil Industry Association (OLF), Information se-

[23] [24]

[25]

[26] [27]

[28]

[29]

[30]

[31]

[32]

[33] [34]

[35]

[36]

[37]

[38]

[39]

[40] [41] [42]

[43] [44]

[45]


curity baseline requirements for process control, safety, and support ICT systems, OLF Guideline No. 104 (Dec. 2006). R. Shirey, Internet security glossary, version 2, Internet Engineering Task Force (IETF), RFC4949 (Aug. 2007). U.S. National Institute of Standards and Technology (NIST), Security controls for federal information systems and organizations, NIST Special Publication 800-53, revision 3 (Aug. 2009). U.S. National Institute of Standards and Technology (NIST), Minimum security requirements for federal information and information systems, FIPS PUB 200 (Mar. 2006). S. Kaplan, B. Garrick, On the quantitative definition of risk, Risk Analysis 1 (1) (1981) 11–27. International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), Information technology – Security techniques – Information security risk management, ISO/IEC 27005 (Jun. 2008). L. Magne, D. Vasseur (Eds.), Risques industriels. Complexit´ e, incertitudes et d´ ecision : une approche interdisciplinaire, TEC & DOC, Collection EDF R&D, Lavoisier, 2006. E. Zio, An introduction to the basics of reliability and risk analysis, Vol. 13 of Series on Quality, Reliablity and Engineering Statistics, World Scientific Publishing, 2007. International Atomic Energy Agency (IAEA), The physical protection of nuclear material and nuclear facilities, INFCIRC/225/Rev.4 (Jun. 1999). J. Jalouneix, P. Cousinou, J. Couturier, D. Winter, Approche comparative entre sˆ uret´ e et s´ ecurit´ e nucl´ eaires (in French), Tech. Rep. 2009/117, Institut de Radioprotection et de Sˆ uret´ e Nucl´ eaire (IRSN) (Apr. 2009). International Atomic Energy Agency (IAEA), Computer security at nuclear facilities, reference manual, Nuclear Security Series No. 17 (Dec. 2011). International Atomic Energy Agency (IAEA), Safety of nuclear power plants: Design, Safety Guide No. NS-R-1 (Sep. 2000). Direction centrale de la s´ ecurit´ e des syst` emes d’information (DCSSI), La d´ efense en profondeur appliqu´ ee aux syst` emes d’information (in French), Memento du SGDN/DCSSI, www. ssi.gouv.fr/IMG/pdf/mementodep-v1-1.pdf (Jul. 2004). R. Johnston, Adversarial safety analysis: Borrowing the methods of security vulnerability assessments, Journal of Safety Research 35 (3) (2004) 245–248. International Electrotechnical Commission (IEC), Functional safety of electrical/electronic/ programmable electronic safetyrelated systems, ed. 2.0, IEC 61508 (2010). International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), Information technology – security techniques – information security management systems, ISO/IEC 27001 (Dec. 2007). International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), Information technology – security techniques – code of practice for information security management, ISO/IEC 27002 (Jun. 2005). S. Tom, D. Christiansen, D. Berrett, Recommended pratice for patch management of control systems, DHS Control System Security Program (CSSP) Recommended Practice (Dec. 2008). B. Schneier, Secrets & Lies: Digital Security in a Networked World, John Wiley & Sons, 2000. J. Viega, G. Mc Graw, Building Secure Software, AddisonWesley, 2002. R. J. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, 2nd Edition, John Wiley & Sons, 2008. A. J. Menezes, P. C. van Oorschot, S. A. Vanstone, Handbook of Applied Cryptography, CRC Press, 2001. J. 
J¨ urjens, Composability of secrecy, in: Proceedings of the 1st International Workshop on Methods, Models, and Architectures for Network Security (MMM-ACNS’01), LNCS 2052, St. Petersburg, Russia, 2001, pp. 28–38. J. Alves-Foss, Computer security aspects of dependable avionics systems, in: Proceedings of National Workshop on Aviation Software Systems: Design for Certifiably Dependable Sys-

[46]

[47] [48]

[49] [50]

[51]

[52] [53]

[54]

[55]

[56] [57] [58]

[59]

[60]

[61] [62]

[63]

[64]

[65]

[66]

[67]

[68]

tems (A Workshop on Research Directions and State of Practice of High Confidence Software Systems) NITRD HCSS-AS, Alexandria, USA, 2006. L. Yang, S. Yang, A framework of security and safety checking for internet-based control systems, International Journal of Computer Security 1 (1/2) (2007) 185–200. J. N. Sorensen, Safety culture: a survey of the state-of-the-art, Reliability Engineering & System Safety 76 (2) (2002) 189–204. BP Texas City, refinery explosion and fire (15 killed, 180 injured), Final Investigation Report 2005-04-I-TX, U.S. Chemical Safety and Hazard Investigation Board (Mar. 2007). C. W. Choo, Information failures and organizational disasters, MIT Sloan Management Review 46 (3) (2005) 7–10. B. Schneier, The psychology of security, in: Proceedings of the 1st International Conference on Cryptology in Africa (AfricaCrypt 2008), Casablanca, Morocco, 2008, pp. 50–79. International Atomic Energy Agency (IAEA), Nuclear security culture, IAEA Nuclear Security Series No. 7, Implementing Guide (2008). K. Mitnick, W. Simon, S. Wozniak, The art of deception, Wiley, 2002. J. Dobson, New security paradigms: what other concepts do we need as well?, in: Proceedings of the 1993 Workshop on New Security Paradigms (NSPW’93), Little Compton, RI, USA, 1993, pp. 7–18. Radio Technical Commission for Aeronautics (RTCA), European Organisation for Civil Aviation Equipment (EUROCAE), Software considerations in airborne systems and equipment certification, DO-178B/ED-12B (Jan. 1992). International Atomic Energy Agency (IAEA), The International Nuclear Event Scale (INES) user’s manual, jointly prepared by IAEA and OECD/NEA (Feb. 2001). B. Schneier, Beyond Fear: Thinking Sensibly About Security in an Uncertain World, Springer, 2003. D. Parker, Risks of risk-based security, Communications of the ACM 50 (3) (2007) 120. T. Aven, A unified framework for risk and vulnerability analysis covering both safety and security, Reliability Engineering & System Safety 92 (6) (2007) 745–754. G. Deleuze, E. Chˆ atelet, L. Pi` etre-Cambac´ ed` es, P. Lacl´ emence, Les paradoxes de la s´ ecurit´ e industrielle (in French), in: Proceedings of the 1st Workshop Interdisciplinaire sur la S´ ecurit´ e Globale (WISG’07), Troyes, France, 2007. D. M. Nicol, W. H. Sanders, K. S. Trivedi, Model-based evaluation: From dependability to security, IEEE Transactions on Dependable and Secure Computing 1 (1) (2004) 48–65. O. Nordland, Safety and security - two sides of the same medal, European CIIP Newsletter (ECN), vol. 3, no. 2 (Jun. 2007). M. Bouissou, Gestion de la complexit´ e dans les ´ etudes quantitatives de sˆ uret´ e de fonctionnement de syst` emes, TEC & DOC, Collection EDF R&D, Lavoisier, 2008. B. Littlewood, Dependability assessment of software-based systems: State of the art, in: Proceedings of the 27th International Conference on Software Engineering (ICS’05), St. Louis, Missouri, USA, 2005, pp. 6–7. B. Littlewood, S. Brocklehurst, N. Fenton, P. Mellor, S. Page, D. Wright, J. Dobson, J. McDermid, D. Gollmann, Towards operational measures of computer security, Journal of Computer Security 2 (1993) 211–229. G. Apostolakis, D. Lemon, A screening methodology for the identification and ranking of infrastructure vulnerabilities due to terrorism, Risk Analysis 25 (2) (2005) 361–376. I. Bloch, Incertitude, impr´ ecision et additivit´ e en fusion de donn´ ees : point de vue historique, Traitement du Signal 13 (4) (1996) 267–288. M. Marseguerra, E. 
Zio, Basics of the Monte-Carlo method with application to system reliability, LiLoLe Publishing, Hagen, Germany (2002). D. E. Peplow, C. D. Sulfredge, R. L. Sanders, R. H. Morris, T. A. Hann, Calculating nuclear power plant vulnerability using integrated geometry and event/fault-tree models, Nuclear

Science and Engineering 146 (1) (2004) 71–87. [69] C. R. H. Morris, D. Sulfredge, R. L. Sanders, H. S. Rydell, Using the VISAC program to calculate the vulnerability of nuclear power plants to terrorism, International Journal of Nuclear Governance, Economy and Ecology 1 (2) (2006) 193–211. [70] G. Cojazzi, S. Contini, G. Renda, On the need of exact probabilistic quantification in ET/FT analysis, in: Proceedings of the14th European Safety and Reliability Conference (ESREL’05), Tri City, Poland, 2005, pp. 399–405. [71] S. Contini, G. Cojazzi, G. Renda, On the use of non-coherent fault trees in safety and security studies, Reliability Engineering & System Safety 93 (12) (2008) 1886–1895. [72] J. Arlat, Y. Crouzet, Y. Deswarte, J.-C. Fabre, J.-C. Laprie, D. Powell, Encyclop´ edie de l’informatique et des syst` emes d’information, Vuibert, Paris, France, 2006, Ch. Tol´ erance aux fautes, pp. 240–270. [73] Y. Koga, E. Fukushima, K. Yoshihara, Error recoverable and securable data communication for computer network, in: Proceedings of the 12th IEEE International Symposium on FaultTolerant Computing (FTCS-12), USA, 1982, pp. 183–186. [74] Y. Deswarte, L. Blain, J.-C. Fabre, Intrusion tolerance in distributed systems, in: Proceedings of the 1991 IEEE Symposium on Research in Security and Privacy (S&P’91), Oakland, USA, 1991, pp. 110–121. [75] J. E. Dobson, B. Randell, Building reliable secure computing systems out of unreliable insecure components, in: Proceedings of the 1986 IEEE Symposium on Security and Privacy (S&P’86), Oakland, California, USA, 1986, pp. 187–193. [76] C. Meadows, Applying the dependability paradigm to computer security, in: Proceedings of the 1995 Workshop on New Security Paradigms (NSPW’95), USA, 1995, pp. 75–79. [77] P. Ver´ıssimo, N. Neves, M. Correia, Architecting Dependable Systems, LNCS 2677, Springer, 2003, Ch. Intrusion-Tolerant Architectures: Concepts and Design, pp. 3–36. [78] D. Powell, A. Adelsbasch, C. Cachin, S. Creese, M. Dacier, Y. Deswarte, T. McCutcheon, N. Neves, B. Pfitzmann, B. Randell, R. Stroud, P. Verssimo, M. Waidner, MAFTIA (Malicious- and Accidental-Fault Tolerance for Internet Applications), in: Proceedings of the 31st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2001), Supplemental Volume, G¨ oteborg, Sweden, 2001, pp. 32–35. [79] Y. Deswarte, D. Powell, Intrusion tolerance for internet applications, in: Proceedings of the IFIP World Computer Congress, Vol. IFIP 156/2004, Toulouse, France, 2004, pp. 241–256. [80] R. J. Ellison, D. A. Fisher, R. C. Linger, H. F. Lipson, T. Longstaff, N. R. Mead, Survivable network systems: An emerging discipline, Tech. Rep. CMU/SEI-97-TR-013, Carnegie Mellon University (May 1997). [81] P. Veriss´ımo, N. Neves, M. Correia, The CRUTIAL reference critical information infrastructure architecture: a blueprint, International Journal of System of Systems Engineering 1 (12) (2008) 78–95. [82] P. Sousa, A. Bessani, W. Dantas, F. Souto, M. Correia, N. Neves, Intrusion-tolerant self-healing devices for critical infrastructure protection, in: Proceedings of the 39th Annual IEEE/IFIP Int. Conference on Dependable Systems and Networks (DSN 2009), Estoril, Portugal, 2009, pp. 217–222. [83] B. Littlewood, L. Strigini, Redundancy and diversity in security, in: Proceedings of the European Symposium on Research in Computer Security (ESORICS’04), LNCS 3193, Sophia Antipolis, France, 2004, pp. 423–438. ´ Totel, F. Majorczyk, L. M´ [84] E. 
e, COTS diversity based intrusion detection and application to Web servers, in: Proceedings of the 8th International Symposium on Recent Advances in Intrusion Detection (RAID’05), Seattle, USA, 2005, pp. 43–62. [85] International Atomic Energy Agency (IAEA), International Nuclear Safety Group (INSAG), Defence in depth in nuclear safety, INSAG-10, STI/PUB/1013 (1996). [86] U.S. National Security Agency (NSA), Defense in depth, a


[87]

[88]

[89] [90]

[91]

[92]

[93]

[94]

[95] [96]

[97]

[98]

[99]

[100]

[101]

[102]

[103]

[104]

[105]

[106]

practical strategy for achieving information assurance in today’s highly networked environments, Guide NSA, Information Assurance Mission, http://www.nsa.gov/ia/_files/support/ defenseindepth.pdf. K. Dauch, A. Hovak, R. Nestler, Information assurance using a defense in-depth strategy, in: Proceedings of the Cybersecurity Applications & Technology Conference for Homeland Security (CATCH), Washington, DC, USA, 2009, pp. 267–272. UK Centre for the Protection of National Infrastructure (CPNI), Process Control and SCADA Security, good practice guide, version 2 (8 parts series) (Jun. 2008). J. Reason, Human Error, Cambridge University Press, 1990. C. Salter, O. S. Saydjari, B. Schneier, J. Wallner, Toward a secure system engineering methodology, in: Proceedings of the 1998 Workshop on New Security Paradigms (NSPW ’98), Charlottesville, Virginia, United States, 1998, pp. 2–10. J. D. Weiss, A system security engineering process, in: Proceedings of the 14th National Computer Security Conference (NCSC), Washington D.C., USA, 1991, pp. 572–581. E. Byres, M. Franz, D. Miller, The use of attack trees in assessing vulnerabilities in SCADA systems, in: Proceedings of the International Infrastructure Survivability Workshop (IISW 2004), Lisbon, Portugal, 2004. S. C. Patel, J. H. Graham, P. A. Ralston, Quantitatively assessing the vulnerability of critical information systems: A new method for evaluating security enhancements, International Journal of Information Management 28 (6) (2008) 483–491. C.-W. Ten, C.-C. Liu, M. Govindarasu, Vulnerability assessment of cybersecurity for SCADA systems using attack trees, in: Proceedings of the IEEE Power Engineering Society General Meeting, Tampa, USA, 2007, pp. 1–8. S. Convery, D. Cook, M. Franz, An attack tree for the border gateway protocol, IETF Internet Draft (Feb. 2004). N. L. Foster, The application of software and safety engineering techniques to security protocol development, Ph.D. thesis, University of York (2002). K. Edge, R. Raines, M. Grimaila, R. Baldwin, R. Bennington, C. Reuter, The use of attack and protection trees to analyze security for an online banking system, in: Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS-40), Hawai, USA, 2007, p. 144b. A. Buldas, T. M¨ agi, Practical security analysis of e-voting systems, in: Advances in Information and Computer Security, Proceedings of the 2nd International Workshop on Security (IWSEC), LNCS 4752, Nara, Japan, 2007, pp. 320–335. K. Karppinen, Security measurement based on attack trees in a mobile ad hoc network environment, Master’s thesis, VTT and University of Oulu (2005). S. McLaughlin, P. McDaniel, D. Podkuiko, Energy theft in the advanced metering infrastructure, in: Proceedings of the 4th International Workshop on Critical Information Infrastructure Security (CRITIS’09), Bonn, Germany, 2009. S. McLaughlin, D. Podkuiko, S. Miadzvezhanka, A. Delozier, P. McDaniel, Multi-vendor penetration testing in the advanced metering infrastructure, in: Proceedings of the 26th Annual Computer Security Applications Conference(ACSAC), Austin, Texas, USA, 2010, pp. 107–116. J. H. Espedalen, Attack trees describing security in distributed internet-enabled metrology, Master’s thesis, Gjvik University (2007). S. Evans, D. Heinbuch, E. Kyule, J. Piorkowski, J. Wallner, Risk-based systems security engineering: stopping attacks with intention, IEEE Security and Privacy 2 (6) (2004) 59–62. N. Mead, E. Hough, T. 
Stehney, Security quality requirements engineering (SQUARE) methodology, Tech. Rep. CMU/SEI2005-TR-009, Carnegie Mellon University (2005). S. Bistarelli, F. Fioravanti, P. Peretti, Defense trees for economic evaluation of security investments, in: Proceedings of the 1st International Conference on Availability, Reliability and Security (ARES’06), Vienna, Austria, 2006, pp. 416–423. B. Kordy, S. Mauw, S. Radomirovi´ c, P. Schweitzer, Founda-

[107]

[108]

[109]

[110]

[111]

[112]

[113]

[114]

[115]

[116]

[117]

[118]

[119]

[120]

[121]

[122] [123]


tions of attack–defense trees, in: Proceedings of the 7th International Workshop on Formal Aspects of Security & Trust (FAST2010), LNCS6561, Pisa, Italy, 2010, pp. 80–95. R. R. Yager, OWA trees and their role in security modeling using attack trees, Information Sciences 176 (20) (2006) 2933– 2959. S. Bistarelli, M. Dall’Aglio, P. Peretti, Strategic games on defense trees, in: Proceedings of the 4th International Workshop on Formal Aspects in Security and Trust (FAST 2006), LNCS 4691, Hamilton, Ontario, Canada, 2006, pp. 1–15. A. Buldas, P. Laud, J. Priisalu, M. Saarepera, J. Willemson, Rational choice of security measures via multi-parameter attack trees, in: Proceedings of the 1st International Workshop on Critical Information Infrastructure Security (CRITIS’06), LNCS 4347, Samos Island, Greece, 2006, pp. 235–248. A. J¨ urgenson, J. Willemson, Computing exact outcomes of multi-parameter attack trees, in: Proceedings of the Confederated International Conferences on the Move to Meaningful Internet Systems (OTM 2008/IS 2008), LNCS 5332, Monterrey, Mexico, 2008, pp. 1036–1051. P. A. Khand, System level security modeling using attack trees, in: Proceedings of the 2nd International Conference on Computer, Control and Communication (IC4), Karachi, Pakistan, 2009, pp. 1–6. S. A. Camtepe, B. Yener, Modeling and detection of complex attacks, in: Proceedings of the 3rd International Conference on Security and Privacy in Communications Networks (SecureComm 2007), Nice, France, 2007, pp. 234–243. M. Bouissou, J.-L. Bon, A new formalism that combines advantages of fault-trees and Markov models: Boolean logic driven Markov processes, Reliability Engineering & System Safety 82 (2) (2003) 149–163. J. Pestourie, G. Malarange, E. Breton, S. Muffat, M. Bouissou, ´ Etude de la sˆ uret´ e de fonctionnement d’un poste source EDF (90/20 kV) avec le logiciel OPALE (in French), in: Proceedings of the 14th Congress on Reliability and Maintainability of the lMdR (λµ14), Bourges, France, 2004. P. Carer, J. Bellvis, M. Bouissou, J. Domergue, J. Pestourie, A new method for reliability assessment of electrical power supplies with standby redundancies, in: Proceedings of the 7th International Conference on Probabilistic Methods Applied to Power Systems (PMAPS’02), Napoly, Italy, 2002. L. Pi` etre-Cambac´ ed` es, M. Bouissou, Attack and defense dynamic modeling with BDMP, in: Proceedings of the 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security (MMM-ACNS2010), LNCS 6258, St Petersburg, Russia, 2010, pp. 86–101. L. Pi` etre-Cambac´ ed` es, M. Bouissou, Beyond attack trees: dynamic security modeling with Boolean logic Driven Markov Processes (BDMP), in: Proceedings of the 8th European Dependable Computing Conference (EDCC-8), Valencia, Spain, 2010, pp. 199–208. The Chemical Industry Safety and Health Council of the Chemical Industries Association (CISHEC), A guide to hazard and operability studies (1977). T. Kletz, HAZOP and HAZAN: Identifying and Assessing Process Industry Hazards, 4th Edition, Institution of Chemical Engineers, 2002. R. Winther, O.-A. Johnsen, B. A. Gran, Security assessments of safety critical systems using HAZOPs, in: Proceedings of the 20th International Conference on Computer Safety, Reliability and Security (SAFECOMP 2001), LNCS 2187, Budapest, Hungary, 2001, pp. 14–24. N. Foster, J. 
Jacob, Hazard analysis for security protocol requirements, in: Proceedings of the 1st International IFIP Working Conference on Network Security, Leuven, Belgium, 2001, pp. 75–92. T. Srivatanakul, Security analysis with deviational techniques, Ph.D. thesis, University of York (2005). J. P. Rankin, C. F. White, Sneak circuit analysis handbook, Tech. Rep. D2-118341-1 / NASA-CR-108721, U.S. National

Aeronautics and Space Administration (NASA) (1970). [124] P. Baybutt, Sneak path analysis (SPSA) for industrial cyber security, Tech. rep., Primatech (2003). [125] P. Baybutt, Sneak path analysis: Security application finds cyber threats, then works to protect a system, ISA InTech 51 (4). [126] Society of Automotive Engineers (SAE), Guidelines and methods for conducting the safety assessment process on civil airborne systems and equipment, ARP4761 (Dec. 1996). [127] International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), Analysis techniques for system reliability – procedure for failure mode and effects analysis (FMEA), IEC/ISO 60812 (Jan. 2006). [128] J. Aagedal, F. Den Braber, T. Dimitrakos, B. Gran, D. Raptis, K. Stolen, Model-based risk assessment to improve enterprise security, in: Proceedings of the 6th International Enterprise Distributed Object Computing Conference, 2002.(EDOC’02), Lausanne, Switzerland, 2002, pp. 51–62. [129] A. Gorbenko, V. Kharchenko, O. Tarasyuk, A. Furmanov, Rigorous Development of Complex Fault-Tolerant Systems (LNCS 4157), Springer, 2006, Ch. F(I)MEA-Technique of Web-services Analysis and Dependability Ensuring, pp. 153– 167. [130] E. Babeshko, V. Kharchenko, A. Gorbenko, Applying F(I)MEA-technique for SCADA-based industrial control systems dependability assessment and ensuring, in: Proceeding of the 3rd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX), Szklarska Poreba, Poland, 2008, pp. 309–315. [131] J. R. Inge, The safety case, its development and use in the United Kingdom, in: Proceedings of the 25th International System Safety Conference (ISSC), Baltimore, MD, USA, 2007, pp. 725–730. [132] U.K. Ministry of Defence (MoD), Directorate of Standardization, Safety management requirements for defence systems – part 1 – requirements, MoD-Def-Stan-00-56/1 (Jun. 2007). [133] P. G. Bishop, R. E. Bloomfield, A methodology for safety case development, in: Proceedgins of the 6th Safety-critical Systems Symposium (SSS’98), Birmingham, UK, 1998. [134] R. Bloomfield, P. Bishop, Safety and assurance cases: Past, present and possible future - an Adelard perspective, in: Proceedings of the 18th Safety-Critical Systems Symposium (SSS 2010), Bristol, UK, 2010, pp. 51–67. [135] R. Weaver, The safety of software: Constructing and assuring arguments, Ph.D. thesis, University of York (2003). [136] R. E. Bloomfield, S. Guerra, M. Masera, A. Miller, C. B. Weinstock, International working group on assurance cases (for security), IEEE Security & Privacy 4 (3) (2006) 66–68. [137] J. Goodenough, H. Lipson, C. Weinstock, Arguing security creating security assurance cases, On-line report (U.S. CERT Build Security-in), Carnegie Mellon University (Jan. 2007). [138] H. Lipson, C. Weinstock, Evidence of assurance: Laying the foundation for a credible security case, On-line report (U.S. CERT Build Security-in), Carnegie Mellon University (May 2008). [139] T. P. Kelly, Arguing safety - a systematic approach to managing safety cases, Ph.D. thesis, University of York (1998). [140] K. Moleyar, A. Miller, Formalizing attack trees for a SCADA system, in: 1st Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection (CIP 2007), Hanover, New Hampshire, USA, 2007, short paper track (no proceedings, available on-line). [141] R. Alexander, R. Hawkins, T. Kelly, Security assurance cases: Motivation and the state of the art, ref. CESG/TR/2011/1, Tech. 
rep., High Integrity Systems Engineering Group, University of York (2011). [142] S. Brostoff, M. A. Sasse, Safe and sound: a safety-critical approach to security, in: Proceedings of the 2001 Workshop on New Security Paradigms (NSPW’01), Cloudcrofl, New Mexico, USA, 2001, pp. 41–51. [143] U.K. Ministry of Defence (MoD), Directorate of Standard-

[144]

[145]

[146]

[147]

[148]

[149] [150]

[151]

[152] [153]

[154]

[155]

[156]

[157] [158]

[159]

[160] [161]

[162] [163]

[164]


ization, Requirements for safety-related software in defense equirment - part 1 - requirements, Interim MoD-Def-Stan-0055(Part 1)/Issue 1 (Apr. 1991). International Society of Automation (ISA), Application of safety instrumented systems for the process industries, ISA– 84.01-1996 (Apr. 1996). N. Kube, B. Singer, Security assurance levels: a SIL approach to security, in: Procdeedings of the 2nd SCADA Security Scientific Symposium (S4), Miamia, USA, 2008. J. D. Gilsinn, R. Schierholz, Security assurance levels: A vector approach to describing security requirements, in: Proceedings of the US DHS Industrial Control Systems Joint Working Group (ICSJWG) 2010 Fall Conference, Seattle, USA, 2010. International Society of Automation (ISA), Security for industrial automation and control systems: System security requirements and security assurance levels, ISA99.03.03, to be published (2012). I. ECSI, Embedded Device Security Assurance (EDSA) brochure, http://www.isasecure.org/PDFs/ ISASecure-EDSA-Certification-March-2010.aspx (Mar. 2010). P. Gruhn, Safety, security groups form joint working group, ISA InTech (Jun. 2009). J. Arlat, Validation de la sˆ uret´ e de fonctionnement par injection de fautes, m´ ethode - mise en oeuvre - application, Th` ese d’´ etat, LAAS (1990). J. R. Voas, Testing software for characteristics other than correctness: Safety, failure tolerance, and security, in: Proceedings of the 10th International Conference on Testing Computer Software, Washington, D.C., USA, 1996. M. Sutton, A. Greene, P. Amini, Fuzzing: Brute Force Vulnerability Discovery, Addison-Wesley Professional, 2007. A. Ozment, Software security growth modeling: Examining vulnerabilities with reliability growth models, in: Proceedings of the 1st Workshop on Quality of Protection (QoP’05), Milan, Italy, 2005, pp. 25–36. E. Rescorla, Is finding security holes a good idea?, in: Proceedings of the 3rd Workshop on Economics and Information Security (WEIS’04), Minneapolis, Minnesota, USA, 2004. M. D. Schroeder, Engineering a security kernel for Multics, in: Proceedings of the 5th ACM Symposium on Operating Systems Principles, Austin, Texas, United States, 1975, pp. 25–32. S. R. J. Ames, M. Gasser, R. R. Schell, Security kernel design and implementation: An introduction, Computer 16 (7) (1983) 14–22. M. Bishop, Computer Security: Art and Science, Addison Wesley Professional, 2003. J. Rushby, Kernels for safety?, in: Proceedgins of the Safetycritical Systems Symposium (SSS’86), Glasgow, U.K., 1986, pp. 210–220. N. G. Leveson, T. J. Shimeall, J. L. Stolzy, J. C. Thomas, Design for safe software, in: Proceedings of the 21st Aerospace Sciences Meeting of the American Institute of Aeronautics and Astronautics, Reno, USA, 1983, pp. 10–13. K. G. Wika, J. C. Knight, A safety kernel architecture, Tech. Rep. CS-94-04, University of Virginia (1994). J.-L. Boulanger, V. Delebarre, S. Natkin, J. Ozello, Deriving safety properties of critical software from the system risk analysis, application to ground transportation systems, in: Proceedings of the 2nd IEEE High-Assurance Systems Engineering Workshop (HASE’97), Washington, DC, USA, 1997, pp. 162–167. K. Biba, Integrity considerations for secure computer systems, Tech. Rep. ESD-TR 76-372, The MITRE Corporation (1977). A. Simpson, J. Woodcock, J. Davies, Safety through security, in: Proceedings of the 9th International Workshop on Software Specification and Design (IWSSD ’98), Japan, 1998, pp. 18–24. J. Goguen, J. 
Meseguer, Security policies and security models, in: Proceedings of the IEEE Symposium on Security and Privacy (S&P’82), Oakland, USA, 1982, pp. 11–20.

[165] A. W. Roscoe, The theory and practice of concurrency, Prentice Hall, 1998.
[166] V. Stavridou, B. Dutertre, From security to safety and back, in: Proceedings of the Computer Security, Dependability, and Assurance: From Needs to Solutions (CSDA'98), York, UK, 1998, pp. 182–195.
[167] É. Totel, J.-P. Blanquart, Y. Deswarte, D. Powell, Supporting multiple levels of criticality, in: Proceedings of the 28th IEEE Symposium on Fault Tolerant Computing Systems (FTCS-28), Munich, Germany, 1998, pp. 70–79.
[168] D. Powell, J. Arlat, L. Beus-Dukic, A. Bondavalli, P. Coppola, A. Fantechi, E. Jenn, C. Rabéjac, A. Wellings, GUARDS: a generic upgradable architecture for real-time dependable systems, IEEE Transactions on Parallel and Distributed Systems 10 (6) (1999) 580–599.
[169] Y. Laarouchi, Sécurités (immunité et innocuité) des architectures ouvertes à niveaux de criticité multiples : application en avionique, Ph.D. thesis, Institut National des Sciences Appliquées (INSA) de Toulouse et Laboratoire d'Analyse et d'Architecture des Systèmes du CNRS (LAAS) (2009).
[170] Y. Laarouchi, Y. Deswarte, D. Powell, J. Arlat, E. de Nadai, Connecting commercial computers to avionics systems, in: Proceedings of the 28th Digital Avionics Systems Conference (DASC'09), Florida, USA, 2009, pp. 6.D.1.1–9.
[171] J. McDermott, C. Fox, Using abuse case models for security requirements analysis, in: Proceedings of the 15th Annual Computer Security Applications Conference (ACSAC'99), Phoenix, USA, 1999, pp. 55–64.
[172] D. J. Firesmith, Security use cases, Journal of Object Technology 2 (3) (2003) 53–64.
[173] G. Sindre, A. L. Opdahl, Eliciting security requirements by misuse cases, in: Proceedings of the 37th International Conference on Technology of Object-Oriented Languages and Systems (TOOLS-PACIFIC 2000), Sydney, Australia, 2000, pp. 120–131.
[174] G. Sindre, Situational Method Engineering: Fundamentals and Experiences, Vol. 244/2007 of IFIP International Federation for Information Processing, Springer Boston, 2007, Ch. A Look at Misuse Cases for Safety Concerns, pp. 252–266.
[175] I. Alexander, Initial industrial experience of misuse cases in trade-off analysis, in: Proceedings of the 10th Anniversary IEEE Joint International Requirements Engineering Conference (RE'02), Essen, Germany, 2002, pp. 61–70.
[176] I. Alexander, Misuse cases: Use cases with hostile intent, IEEE Software 20 (1) (2003) 58–66.
[177] T. Stålhane, G. Sindre, A comparison of two approaches to safety analysis based on use cases, in: Proceedings of the 26th International Conference on Conceptual Modeling (ER 2007), LNCS 4801, Auckland, New Zealand, 2007, pp. 423–437.
[178] V. Katta, P. Karpati, A. Opdahl, C. Raspotnig, G. Sindre, Comparing two techniques for intrusion visualization, in: Proceedings of the 3rd IFIP WG8.1 Working Conference on the Practice of Enterprise Modelling (PoEM 2010), Delft, Netherlands, 2010, pp. 1–15.
[179] C. Raspotnig, A. Opdahl, Improving security and safety modelling with failure sequence diagrams, International Journal of Secure Software Engineering (IJSSE), to be published.
[180] International Organization for Standardization (ISO), Information technology – security techniques – evaluation criteria for IT security, ISO/IEC 15408, parts 1 to 3, edition 3.0 (2008 to 2009).
[181] J. Ridgway, Achieving safety through security management, in: Proceedings of the 15th Safety-Critical Systems Symposium (SSS 2007), Bristol, UK, 2007, pp. 3–20.
[182] T. Novak, A. Treytl, Common approach to functional safety and system security in building automation and control systems, in: Proceedings of the 12th IEEE Conference on Emerging Technologies and Factory Automation (ETFA'07), Patras, Greece, 2007, pp. 1141–1148.
[183] T. Novak, A. Treytl, A. Gerstinger, Embedded security in safety critical automation systems, in: Proceedings of the 26th International System Safety Conference (ISSC 2008), Vancouver, Canada, 2008, pp. S.1–11.
[184] T. Novak, A. Treytl, Functional safety and system security in automation systems – a life cycle model, in: Proceedings of the 13th IEEE Conference on Emerging Technologies and Factory Automation (ETFA'08), Hamburg, Germany, 2008, pp. 311–318.
[185] Y. Deswarte, M. Kaâniche, P. Corneillie, J. Goodson, SQUALE dependability assessment criteria, in: Proceedings of the 18th International Conference on Computer Safety, Reliability and Security (SAFECOMP'99), LNCS 1698, Toulouse, France, 1999, pp. 27–38.
[186] P. Corneillie, S. Moreau, C. Valentin, J. Goodson, A. Hawes, T. Manning, H. Kurth, G. Liebisch, A. Steinacker, Y. Deswarte, M. Kaâniche, P. Benoit, Dependability assessment criteria, SQUALE project (ACTS95/AC097), Tech. Rep. 98456, Laboratoire d'Analyse et d'Architecture des Systèmes du CNRS (LAAS) (Jan. 1999).
[187] P. H. Jesty, D. D. Ward, Towards a unified approach to safety and security in automotive systems, in: Proceedings of the 15th Safety-Critical Systems Symposium (SSS 2007), Bristol, UK, 2007, pp. 21–34.
[188] J. Alves-Foss, B. Rinker, C. Taylor, Towards Common Criteria certification for DO-178B compliant airborne software systems, University of Idaho (2002).
[189] M. Rausand, A. Høyland, System Reliability Theory: Models and Statistical Methods, 2nd Edition, Wiley, 2004.
[190] B. Carlson, A. Burgess, C. Miller, Timeline of Computing History, IEEE Computer Society, http://www.computer.org/cms/Computer.org/Publications/timeline.pdf (1996).
[191] European Network and Information Security Agency (ENISA), Survey on risk management methodologies and tools, http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory.
[192] S. T. Smith, J. J. Lim, Risk analysis in computer systems – an automated procedure, Information Age 7 (1) (1985) 15–18.
[193] B. J. Garrick, J. Hall, M. Kilger, J. McDonald, T. O'Toole, P. Probst, E. R. Parker, R. Rosenthal, A. Trivelpiece, L. Van Arsdale, E. L. Zebroski, Confronting the risks of terrorism: making the right decisions, Reliability Engineering & System Safety 86 (2) (2004) 129–176.
[194] R. Wilson, Combating terrorism: an event tree approach, in: Proceedings of the 27th International Seminar on Nuclear War and Planetary Emergencies, Erice, Italy, 2002, pp. 122–145.
[195] G. Woo, Quantitative terrorism risk assessment, The Journal of Risk Finance 4 (1) (2002) 7–14.
[196] Y. Y. Haimes, Accident precursors, terrorist attacks, and systems engineering, prepared for presentation at the NAE Workshop: NAE Project on Accident Precursors (Jul. 2003).
[197] G. Cojazzi, S. Contini, G. Renda, FT analysis in security related applications: Challenges and needs, in: Proceedings of the 29th ESReDA Seminar on Systems Analysis for a More Secure World: Application of System Analysis and RAMS to Security of Complex Systems, Ispra, Italy, 2005, pp. 345–366.
[198] D. S. Nielsen, The cause-consequence diagram method as a basis for quantitative accident analysis, Tech. Rep. Risø-M-1374, Danish Atomic Energy Commission, Denmark (1971).
[199] J. Bell, J. Holroyd, Review of human reliability assessment methods, Research Report RR679, Health and Safety Laboratory, Health and Safety Executive (2009).
[200] O. Nordland, Making safe software secure, in: Proceedings of the 16th Safety-Critical Systems Symposium (SSS 2008) (Improvements in System Safety), Bristol, UK, 2008, pp. 15–23.
[201] I. N. Fovino, M. Masera, A. De Cian, Integrating cyber attacks within fault trees, Reliability Engineering & System Safety 94 (9) (2009) 1394–1402.
