Quantitative and Qualitative Risk in IT Portfolio Management

Alexander Holland and Madjid Fathi

Abstract - The key point of this paper is the proactive management of the overall risk of an IT project portfolio. A portfolio is a collection of projects, and every project carries its own risk. The section on project risk describes how uncertainty and risk are identified, analyzed and accumulated for a single project; most attention is paid to the quantification of risk, which is based on fundamental decision theory. The section on IT project portfolio management clarifies the purpose and benefit of a portfolio for IT projects, with the implementation of a portfolio and its measurement criteria as the central discussion points. Portfolio management with risk as a measure examines how the overall risk of a portfolio is composed of the single project risks; in particular, inter-dependencies and correlations across the projects are considered. Also addressed is how to minimize the overall risk of an IT project portfolio by diversification. Proactive risk management and mitigation discusses the options for risk mitigation, which include reducing the probability that a risk will materialize and reducing the impact on the business if a risk event does occur.

I. INTRODUCTION

The importance of Information Technology (IT) to the success of companies is obvious. Many advantages can be achieved through IT, but it is also known that companies tie up a lot of money in IT and that bad decisions in this business can lead to bankruptcy. IT is linked with hidden costs and with value that is difficult to measure. Retrieving key capital market figures such as return on investment (ROI), payback period or total cost of ownership (TCO) is a challenging task, and CIOs and other IT management leaders have to justify the business value of IT. Large-scale enterprises in particular have a wide IT landscape with a great number of projects running at the same time and exhibiting inter-dependencies. IT management leaders of such big companies require a useful tool or framework that gives them an overview of all strategic projects. Recently the IT portfolio management process and framework has become a must-have for most companies: a practical revenue-generation and cost-reduction approach that works, giving companies a basis for decision making while providing visibility and control of their projects across their organizations.

Alexander Holland is with the Institute of Knowledge Based Systems and Knowledge Management, University of Siegen, 57068 Siegen, Germany (phone: +49-271-740-2276; fax: +49-271-740-2322; e-mail: alex@informatik.uni-siegen.de). Madjid Fathi is with the Institute of Knowledge Based Systems and Knowledge Management, University of Siegen, 57068 Siegen, Germany (phone: +49-271-740-2311; fax: +49-271-740-2322; e-mail: fathi@informatik.uni-siegen.de).


There are many possible decision criteria that allow executives to select and prioritize projects. Not only financial measures such as ROI and net present value (NPV) should be considered; measures that go beyond the financial objectives are also very important, e.g. business and strategic fit, customer needs, or risk versus return.

II. IT PROJECT RISK

The future is associated with surprises. Sometimes the surprises are desired, but often they are unpleasant, and many people look for ways to protect themselves from the unpleasant ones. They are willing to pay for protection against risk. Risk in the context of an IT project may be defined as the chance of an unintended future event with potential negative impact on the business. It includes the possibility of loss or gain, or a variation from a desired or expected outcome. Identifying and anticipating events that could result only in a positive or desired outcome is a matter of opportunity, not risk; opportunity management is the counterpart of risk management. Different people have a different understanding of risk [13]. In all branches of science and engineering, risk is seen as variation in the distribution of possible outcomes, based on classical decision theory. This comprehension allows the risks identified to be quantified, calculated, accumulated and compared. But most executives have another view of risk: they do not treat uncertainty about positive outcomes as an important aspect of risk, only negative outcomes are relevant, and risk is associated more with danger or hazard. Managers characterize risks verbally and qualitatively; numerical quantification of risk is very difficult for them, as it is too complex to figure out possible outcomes and estimate their values and likelihoods. Since the assessment of likelihood is seen as the more undetermined part, they focus on outcome value estimates instead. Executives should adopt the engineers' viewpoint on risk. By applying classical decision theory and its analytical methods, project managers are capable of managing risks, because the analytical approach requires the risks to be quantified and enables the purposeful evaluation of the best methods to control them [14].

A. Sources of Project Risks

There are three sources of project risk. The standard project risk management (PRM) methods deal with foreseeable uncertainty and residual risks; this first source of project risk is the focus of this paper.


The nature of projects is that they tend to be one-time executions, and each project is individual. Yet through similarity between projects, combined with the experience and know-how gained in former projects, it is possible to design reliable plans for new projects. It is like operating on known terrain, where risks can be identified and managed proactively [15]. Of course there are still the residual risks that are left over after planning for foreseeable events. Residual risks arise unexpectedly during project execution, and only improvisation helps in such situations; by dedicating special resources you can prepare for improvisation. Another source of project risk is project complexity. Complexity is the result of a high number of components or parts and a high number of interactions between these parts [16]. The complex relationship structure makes it difficult to figure out the consequences of deviations or changes in single parts. To handle project complexity, two methods exist. The control-and-fast-response method relies on mutual adjustment of deviating system elements, with the goal of keeping the system in a controlled state. Another method for complexity arising from the relationship structure is the use of contracts as a risk-sharing tool. The last source of project risk is unforeseeable uncertainty, called the "unknown unknowns" in the engineering community. Especially novel projects, which apply innovative technologies and pursue new markets, are accompanied by fundamentally unforeseeable events and unknown interactions among different parts of the project. Novel projects cannot be planned, and applying PRM to them makes no sense [16]. There are two approaches to deal with unforeseeable uncertainty: the iterate-and-learn approach and the selection process approach.

B. Risk Assessment

The procedure of risk assessment consists of determining the probability of a possible event and the impact it would have on the business objectives (scope, cost, schedule and others). There are two methods to analyze risk: qualitative and quantitative assessment techniques. They differ in accuracy and in the effort required for implementation. The qualitative approach divides the choices for likelihood and possible impact or outcome into ranges; these categories are then named and specified, and finally each risk is assigned to one of the defined categories. Qualitative risk assessment is easier to apply than the quantitative procedure, especially when there are only a few categories to choose from. Since the purpose of risk assessment is to mitigate and manage project risks, qualitative techniques would be sufficient for this challenge [15]. But the quantitative methods offer greater precision because they use numerical values or estimates. They allow risks to be calculated and accumulated for the assessment of the overall project risk, which in turn is required for the evaluation of an IT project portfolio measured by risk [15].

Therefore the quantitative approach is discussed in more detail.

C. Risk Probability

A probability is a value between zero and one. The lower limit, zero, stands for no chance of an event occurring, and the upper limit, one, stands for the inevitable occurrence of an event [17]. In practice there are three options for determining a probability. With the calculus of probabilities and a simple model it is possible to calculate the probability: the likelihood of throwing a six with a die is the ratio of the number of cases favorable to the event to the total number of possible cases. Another method is to estimate a probability from empirical data collected from former, similar projects; this method offers the best precision. Finally it is possible to assess a probability by expert estimation, supported by tools such as the Delphi technique, computer modeling and the judgment of knowledgeable experts such as consultants [15].

D. Qualitative Risk Assessment

The main purpose of qualitative risk assessment is to produce a prioritized list of risks. Such a list could consist of columns such as rank, risk, description, root cause, triggers, potential responses, risk owner, probability, impact and overall risk [18]. Triggers are indicators or symptoms of actual risk events, a kind of early warning system. The assessment of probability and impact for each risk is based on categorization: for both likelihood and impact you define, for example, three categories such as "low", "medium" and "high". The overall risk is the combination of the probability and impact choices, e.g. "medium likelihood" / "high impact". To make the list sortable you can assign numbers to the categories: nine for high, three for medium and one for low. The overall risk is then the product of the probability number and the impact number. Traffic-light coloring also provides a better overview. Risks at the top of the list need the most attention, and risks at the bottom deserve less consideration [15].
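The scoring scheme just described is straightforward to automate. The following minimal sketch (the risk entries are hypothetical; the 9/3/1 scores come from the text above) computes the overall risk score as the product of the probability and impact scores and sorts the register:

# Minimal sketch of a qualitative risk register using the 9/3/1
# category scores described above; the risk entries are hypothetical.

SCORES = {"low": 1, "medium": 3, "high": 9}

risks = [
    {"risk": "Key developer leaves project", "probability": "medium", "impact": "high"},
    {"risk": "Vendor delivers API late",     "probability": "high",   "impact": "medium"},
    {"risk": "Server hardware failure",      "probability": "low",    "impact": "high"},
]

for r in risks:
    # Overall risk = probability score x impact score.
    r["overall"] = SCORES[r["probability"]] * SCORES[r["impact"]]

# Risks at the top of the sorted list need the most attention.
for r in sorted(risks, key=lambda r: r["overall"], reverse=True):
    print(f'{r["overall"]:>3}  {r["probability"]:<7} {r["impact"]:<7} {r["risk"]}')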


A single individual could assess the risks using such a risk assessment table, but it is also possible to apply the Delphi method, which collects the judgments of several experts and tries to reach a consensus on the issue. The Delphi technique is used in situations where uncertainty exists about the future or about future data, e.g. the duration of an activity in a PERT chart. "It has been indicated in such studies that the Delphi technique is suitable to use when dealing with uncertainties in an area of imperfect knowledge" [19]. Later on an exemplary variant of the Delphi questioning is introduced. The Delphi technique derives a consensus among a panel of experts who make predictions about future developments. It provides independent and anonymous input regarding future events, uses repeated rounds of questioning and written responses, and avoids the biasing effects possible in oral methods such as brainstorming [18]. Normally the ranges for the categories are equal; sometimes they are scaled geometrically, with small intervals at the low end and progressively larger intervals in the upper categories. The risk impact assessment in particular uses this category definition method [15]. Besides the risk assessment table there is another tool that displays the identified risks in a matrix: on one axis you have the ascending categories for probability, and against the other axis you plot the ascending categories for impact. It could be a two-by-two matrix or, with finer gradations of impact and likelihood, a five-by-five matrix [15]. Risks located in the upper right corner of the matrix require constant monitoring and the greatest management attention throughout the project; risks in the lower left corner can be neglected. The risk assessment table and the matrix are useful tools to create clarity and overview, and they are also applicable to the quantitative risk assessment discussed in the following section. On the whole, qualitative risk assessment is a simple, low-effort and most frequently used approach. It is sufficient for prioritizing risks, but not for numerical aggregation, especially when there are correlations between the risk events.

E. Quantitative Risk Assessment

The assessed risk impact can be a single, predictable value (an activity will last ten days) or a continuum of possibilities (an activity could last eight, nine, ten or eleven days) [15]. The quantitative risk assessment approach uses probability distributions to represent such an uncertain variable, e.g. the uncertain cost of a project activity. Continuous distribution shapes commonly used in project risk analysis are the normal, lognormal and triangular distributions. The exact shape of a distribution is not essential for quantitative risk assessment, because a change of shape has little effect on the two parameters that matter for the risk analysis: the mean and the standard deviation of the distribution [15]. The mean is the expected value of the variable; for a symmetric distribution it is also the value the variable has a 50 percent chance of exceeding. The standard deviation characterizes the range or dispersion and is a measure of the breadth of possible values; it also reveals the relative risk degree [20]. In PERT, a distribution is specified by three points: the mode (most likely value) together with an optimistic and a pessimistic estimate. Another display format of a probability distribution is the cumulative form: the x-axis is marked with the possible outcomes of the uncertain variable and the y-axis shows the associated cumulative probability. Such an S-shaped curve indicates the likelihood with which a specific value of the uncertain variable will be reached or exceeded. A steeper progression of the cumulative curve corresponds to a distribution with low dispersion, small standard deviation and consequently a minor risk degree [21].
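As an illustration, the following sketch samples a triangular distribution for a hypothetical activity duration (optimistic 8, most likely 10, pessimistic 14 days) and reports the mean, the standard deviation and one point of the cumulative form:

import numpy as np

# Hypothetical three-point estimate for an activity duration (days):
# optimistic 8, most likely (mode) 10, pessimistic 14.
rng = np.random.default_rng(42)
samples = rng.triangular(left=8, mode=10, right=14, size=100_000)

print(f"mean = {samples.mean():.2f} days")   # central tendency
print(f"std  = {samples.std():.2f} days")    # dispersion = risk degree

# Empirical cumulative form: probability the duration stays below 11 days.
print(f"P(duration <= 11) = {(samples <= 11).mean():.2f}")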

Expected monetary value (EMV) analysis is used in decision making under risk and uncertainty. The concept of EMV calculates the average outcome of each investment or project when the future includes scenarios that may or may not occur. In the majority of cases there is no project that dominates in all possible scenarios; in reality, higher profits are usually accompanied by higher risks and therefore higher possible losses. In decision making under risk, the probability of each scenario is known [22]. The EMV is calculated by multiplying the value of each possible outcome by its probability of occurrence and summing the products. The EMV of opportunities will generally be positive, while that of risks will be negative. In mathematical formulation,

\bar{P}_i = \sum_t P_{i,t} \cdot p_t    (1)

where \bar{P}_i is the EMV for project i, P_{i,t} is the payoff obtained in scenario t, and p_t is the probability of scenario t occurring [17]. In decision making under risk, a risk-neutral decider chooses the project with the highest EMV. When two or more EMVs are equal, a risk-seeking executive selects the project with the higher standard deviation, and a risk-averse executive the one with the lower. The standard deviation is calculated as

\sigma_i = \sqrt{\sum_{t=0}^{n} (P_{i,t} - \bar{P}_i)^2 \cdot p_t}.    (2)
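Equations (1) and (2) translate directly into code; the scenario payoffs and probabilities below are hypothetical:

import math

# Hypothetical scenarios for one project: (payoff P_it, probability p_t).
scenarios = [(120_000, 0.3), (40_000, 0.5), (-60_000, 0.2)]

# Equation (1): EMV = sum of payoff x probability.
emv = sum(payoff * prob for payoff, prob in scenarios)

# Equation (2): standard deviation around the EMV.
sigma = math.sqrt(sum((payoff - emv) ** 2 * prob for payoff, prob in scenarios))

print(f"EMV   = {emv:,.0f}")    # risk-neutral ranking criterion
print(f"sigma = {sigma:,.0f}")  # tie-breaker for risk-seeking/averse deciders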

In decision making under uncertainty, no probabilities are assigned to the scenarios. Under uncertainty there are several decision criteria for selecting a project from a set of available projects: the Hurwicz criterion (pessimism-optimism rule), the Savage-Niehans rule (minimal regret) and the Laplace criterion. Which criterion to choose depends on the type of the project and the preference of the decider; in [22] the criteria are discussed in detail. The purpose of sensitivity analysis is to determine the extent of the effect on the outcome of changing one input parameter of a model; the model could be a mathematical one where the NPV is the outcome and the quantity of interest. When you change one input parameter it is important to keep the other input parameters constant, so that you isolate the effect of the changed parameter [23]. The ratio of the change in the outcome to the change in the input is the derivative sensitivity coefficient [15]. A sensitivity analysis can be displayed using a tornado diagram, which shows the most sensitive variables along with their impact on the overall result [24].
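A minimal sketch of such a one-at-a-time sensitivity analysis on a hypothetical two-parameter NPV model; each input is swung between assumed low and high values while the others stay at their base values, yielding the bar widths of a tornado diagram:

# One-at-a-time sensitivity sketch; the model and ranges are hypothetical.
def npv(revenue, cost, rate=0.05):
    # Toy model: a single future period discounted to the present.
    return (revenue - cost) / (1 + rate)

base = {"revenue": 6000, "cost": 5000}
ranges = {"revenue": (5000, 7000), "cost": (4600, 5400)}  # assumed low/high

swings = {}
for name, (low, high) in ranges.items():
    lo = npv(**{**base, name: low})    # swing input down, others at base
    hi = npv(**{**base, name: high})   # swing input up, others at base
    swings[name] = abs(hi - lo)        # width of the tornado bar

# Widest bar first: the most sensitive input.
for name, swing in sorted(swings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<8} swing = {swing:8.2f}")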


To quantify the project's overall risk (i.e. its standard deviation) you need to aggregate all the single distributions into a total distribution. The procedure for this aggregation depends on the specific project constraint (time, cost or resources) to be considered. In the case of time you have to determine the critical path through a network of activities. The critical path is the longest path through the network, and the durations of its activities added up amount to the duration of the whole project [25]. For each activity it is possible to calculate the "float" or "slack" time. Activities not on the critical path generally have a positive slack time, which means the output of an activity can be generated before the time it is actually needed; the required date to meet the critical path minus the scheduled completion date equals the slack time of an activity [22]. A PERT chart can have two or more critical paths of the same duration, especially when you try to consider uncertainty in the analysis, and under such conditions the analysis becomes very complex. The problem of multiple critical paths does not appear when determining the total cost of the whole project: in the case of cost you just aggregate the costs of all activities in the network plan [15]. PERT time and cost analysis, Monte Carlo simulation and an interesting technique from the University of Minnesota address this aggregation challenge.

III. NET PRESENT VALUE SCENARIO UNDER UNCERTAINTY

A mathematical model is designed and evaluated exemplarily. The model should be easily understood by all stakeholders, and its calculated result serves as decision guidance for the decision makers. Disaggregation should be applied to obtain the model: it breaks the problem down into its smallest manageable components and makes it more transparent. The model can have deterministic (i.e. not uncertain) input parameters and uncertain parameters that are expressed as probability distributions. The distributions can be developed from historical data or derived from expert opinion [26]; here the second approach is followed. A Delphi method is applied to elicit the expert opinions or estimates. These estimates are used as inputs for the three-point method that generates a probability distribution for each uncertain variable [27]. Through random sampling of these distributions it is possible to determine the distribution of all potential outcomes that could occur under these uncertainties. The model used in this section calculates the NPV for an IT project. The total net present value is a method to measure the project's return on investment (ROI). The NPV model considers all payouts and all incoming payments throughout the whole project life cycle; all estimated project costs and returns arising in the future must be discounted to the present and added. The sum represents the total present value of the project [15]. Equation (3) shows how the NPV is calculated:

NPV = \sum_{t=0}^{N} \frac{I_t - O_t}{(1 + i)^t}    (3)

where N is the number of periods, I_t is the incoming payment and O_t the outgoing payment for period t, and i is the interest rate [28]. Table 1 shows the cash flow for a project that lasts six years.

TABLE 1
CASH FLOW AND NPV FOR A SIX-PERIOD SCENARIO

t          0        1        2        3        4        5
I_t        I_0      I_1      I_2      I_3      I_4      I_5
O_t        O_0      O_1      O_2      O_3      O_4      O_5
Cash flow  I_0-O_0  I_1-O_1  I_2-O_2  I_3-O_3  I_4-O_4  I_5-O_5

NPV = \sum_{t=0}^{N} \frac{I_t - O_t}{(1 + i)^t}
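Equation (3) translates directly into code; the six-period cash flows below are hypothetical:

def npv(incoming, outgoing, rate):
    """Equation (3): discount each period's net cash flow to the present."""
    return sum((i_t - o_t) / (1 + rate) ** t
               for t, (i_t, o_t) in enumerate(zip(incoming, outgoing)))

# Hypothetical six-period cash flows (I_t, O_t) and a 5% interest rate.
I = [0, 1000, 3000, 4000, 4000, 3000]
O = [9000, 500, 500, 500, 500, 500]
print(f"NPV = {npv(I, O, rate=0.05):,.2f}")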

The parameters I_t and O_t for each period t are not deterministic; they are represented by probability distributions derived from the opinions of ten experts. Every expert delivers three estimates for each uncertain parameter: an optimistic (O), a most likely (ML) and a pessimistic (P) value. The mean and the standard deviation are the most important factors in specifying a distribution, and both depend on the three estimates just mentioned. In some situations, such as the forecasting of costs, the pessimistic and optimistic values are difficult to determine and misinterpretation can occur. To mitigate false estimates, the most likely value receives a stronger weighting than the optimistic and pessimistic values; thereby the distribution is far more sensitive to the most likely value and correspondingly less sensitive to the optimistic and pessimistic values [26]. Some experts have more know-how and experience than others, so their opinions could be weighted more strongly. In this example every expert has a stake of 10% in the aggregated distribution, and the three estimates get the following percentage weighting: O = 2.5%, ML = 5.0%, P = 2.5%. Table 2 shows the expert opinions on the incoming payment for period two; each of the ten experts places a P, ML and O estimate on the value scale for I_2, and the last row of the table represents the subjective, aggregated probabilities of occurrence (a discrete distribution). Such an inquiry is done for each uncertain parameter.

TABLE 2
EXPERT OPINION ON THE INCOMING PAYMENT I_2

I_2        5800  5900   6000   6100  6200   6300  6400
Sigma in %  5.00  12.50  22.50  25.0  22.50  10.0  2.50
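The aggregation of the experts' three-point estimates can be sketched as follows; the per-expert placements shown are hypothetical, only the weights (O = 2.5%, ML = 5.0%, P = 2.5%) are taken from the text:

from collections import defaultdict

# Weight of each estimate type; every expert contributes 10% in total.
WEIGHT = {"P": 0.025, "ML": 0.05, "O": 0.025}

# Hypothetical placements: each expert names a value for P, ML and O.
experts = [
    {"P": 5800, "ML": 5900, "O": 6000},
    {"P": 5900, "ML": 6000, "O": 6100},
    {"P": 5900, "ML": 6100, "O": 6200},
    # ... remaining experts analogous
]

distribution = defaultdict(float)
for expert in experts:
    for kind, value in expert.items():
        distribution[value] += WEIGHT[kind]

# Normalize so the aggregated weights sum to one over the experts polled.
total = sum(distribution.values())
for value in sorted(distribution):
    print(f"{value}: {distribution[value] / total:.2%}")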

Further on, a sample of n = 1000 NPV values is simulated; by using the Monte Carlo method the sampling is assured to be representative.


Since the estimated distribution consists of seven discrete values, an ordered sequence of triple-digit random numbers (from 000 to 999) is subdivided into seven subsequences, one assigned to each discrete value of the distribution. The size of each subsequence is proportional to the probability of the value it is assigned to. Table 3 demonstrates this approach. Table 3 can also be displayed as a step-function chart in which every random number corresponds to exactly one value of the discrete distribution [22].

TABLE 3
NPV SIMULATION USING MONTE CARLO

I_2          5800     5900     6000     6100     6200     6300     6400
Sigma in %   5.00     12.50    22.50    25.0     22.50    10.0     2.50
Assigned RN  001-050  051-175  176-400  401-650  651-875  876-975  976-000

A random number generator (RNG) generates n triple-digit numbers for each simulation run, where n is the number of uncertain parameters. All generated three-digit numbers have the same likelihood, namely 1/1000 [21]. The calculation of the NPV for each run is done by computer. The example below demonstrates the calculation of the NPV for the first sample, consisting of four numbers for the uncertain parameters I_0, I_1, O_0 and O_1:

• sequence generated by the RNG: 805, 431, 230, 902
• 805 stands for I_0 = 6200
• 431 stands for I_1 = 3100
• 230 stands for O_0 = 5100
• 902 stands for O_1 = 2100

The NPV is obtained by putting these values into equation (3), written for two periods and an interest rate of i = 5% (compare (4)):

NPV = \sum_{t=0}^{1} \frac{I_t - O_t}{(1 + i)^t} = \frac{6200 - 5100}{1.05^0} + \frac{3100 - 2100}{1.05^1} = 2052.38    (4)

The other 999 calculations are done by a computer.
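One way to automate these runs is sketched below; the discrete distributions for the four uncertain parameters are hypothetical stand-ins for elicited ones, and numpy's choice() takes the place of the manual random-number table:

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical discrete distributions (values, probabilities) for the
# uncertain parameters of a two-period project, elicited as in Table 2.
params = {
    "I0": ([6000, 6200, 6400], [0.25, 0.50, 0.25]),
    "I1": ([2900, 3100, 3300], [0.25, 0.50, 0.25]),
    "O0": ([4900, 5100, 5300], [0.25, 0.50, 0.25]),
    "O1": ([1900, 2100, 2300], [0.25, 0.50, 0.25]),
}

def sample(name):
    values, probs = params[name]
    return rng.choice(values, p=probs)

rate = 0.05
npvs = []
for _ in range(1000):                 # n = 1000 simulation runs
    i0, i1 = sample("I0"), sample("I1")
    o0, o1 = sample("O0"), sample("O1")
    # Equation (3) for two periods (t = 0, 1).
    npvs.append((i0 - o0) + (i1 - o1) / (1 + rate))

npvs = np.array(npvs)
print(f"mean NPV = {npvs.mean():8.2f}")
print(f"std  NPV = {npvs.std():8.2f}")   # overall risk degree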

After the Monte Carlo simulation you obtain a results table: the first column contains all possible outcomes of the calculated NPV, subdivided into ranges; the second column holds the corresponding absolute frequencies; the third and last columns display the relative and the cumulated probabilities.

A. Bayesian Approach

Besides the popular Monte Carlo simulation there is a further interesting technique, introduced in [29]. This technique makes it possible to quantify uncertainty in project networks without complex and time-consuming iterations.

Because the time expenditure of Monte Carlo simulations is enormous, they can only be used in the planning phase and not during project execution; this alternative technique is simple and fast, so it can be applied in every phase of the project. Of course there are also practical limitations, especially when estimating the overall duration of a very complex network containing dependencies between activities. Very interesting in [29] is the comparison between the Monte Carlo simulation and this new approach. It is known that the bigger the sample, or the number of iterations used in a Monte Carlo simulation, the more accurate the result. Each approach applied to the same network generates a distribution; the resulting probability distributions from both techniques are very similar, and the distribution from the Monte Carlo simulation approaches the other distribution as the number of iterations increases. The principle of the new technique is to simplify a network until all activities are combined in one single activity. The resulting discrete distribution, representing the uncertainty in the duration of this single activity, then also stands for the expected overall project duration. The simplification process uses four operators: the series, parallel, neglecting and replication operators. The last two are only needed when the network is very complex due to dependencies between activities, so in this summary only the series and the parallel operators are introduced. Given a network of activities, you apply one of the operators to two activities to be merged; the position of these two activities determines which operator to apply. Two activities in series are merged with the series operator: all possible outcomes of the merged distribution are determined by adding up all possible combinations. For the graphical representation, influence diagrams are a common technique. Such influence diagrams, or causal Bayesian networks, are graphical models for representing knowledge under conditions of uncertainty [10]. They have been used in many fields, such as logistic applications [2], expert systems [3] and classification systems, as powerful tools for knowledge representation and inference under uncertainty. Bayesian networks and such probabilistic models are based on directed acyclic graphs (DAGs) with a probability table for each node. The nodes of a Bayesian network represent propositional variables in a domain, and the edges between the nodes represent the dependency relationships among the variables. Each node X has an attached conditional probability table P(X | X_1, ..., X_n) that quantifies the effects the parents X_1, ..., X_n have on the node; the conditional probabilities encode the strength of the dependencies among the variables. For each variable X a conditional probability distribution is defined that specifies the probabilities of X given the values of the parents of X. A decision maker makes decisions by combining his own knowledge, experience and intuition with information available from other sources. Given a learned network structure [8] such as a Bayesian network, the decision maker can incorporate additional information by applying an inference algorithm.


We apply the learned Bayesian network to calculate new probabilities when particular information arrives [11]. For instance, let A have n states with P(A) = (x_1, ..., x_n) and assume we receive the information e that A can only be in state i or j. This statement expresses that all states except i and j are impossible, so we can write the resulting distribution as P(A, e) = (0, ..., 0, x_i, 0, ..., 0, x_j, 0, ..., 0). Assume a joint probability table P(U), where the finding e is an n-dimensional table of zeros and ones. Using the chain rule for Bayesian networks [5] we can express

P(U, e) = \prod_{A \in U} P(A \mid pa(A)) \cdot \prod_i e_i    (5)
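The evidence update in this example can be sketched in a few lines (the prior is hypothetical; states are indexed from zero):

import numpy as np

# Hypothetical prior over the n = 4 states of variable A.
p_a = np.array([0.1, 0.4, 0.3, 0.2])

# Evidence e: A can only be in state i = 1 or j = 3.
e = np.zeros_like(p_a)
e[[1, 3]] = 1.0

# P(A, e): impossible states are zeroed out ...
p_a_e = p_a * e
# ... and conditioning renormalizes: P(A | e) = P(A, e) / P(e).
p_a_given_e = p_a_e / p_a_e.sum()

print(p_a_given_e)   # [0.  0.667 0.  0.333]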


B. Decision Networks for IT Project Portfolios

Influence diagrams, also known as decision networks, can be considered extensions of Bayesian networks. Apart from chance nodes, a decision network contains two additional types of nodes, namely utility nodes and action nodes. Subsequently we restrict ourselves to one utility node and one action node (several action nodes are needed for sequential decision problems). The utility node U is associated with a utility function U : pa(U) → R, where pa(U) denotes the parents of U (a utility node does not have any children). The action node is associated with the set A of possible actions. The parents of this node are the so-called evidence variables or information variables: these variables are known to (can be observed by) the decision maker before acting [12]. The relevance of such networks for IT project portfolios is discussed now. IT-PPM represents a holistic strategic management approach: the various functions carried out by IT-PPM can have a strategic influence on the company, and they involve several management disciplines at the same time, for example resource management, IT controlling, strategic planning, communication management and project management. Since the project portfolio comprises all projects of the company, one of the essential functions of IT-PPM is to provide all project information, both ex ante and ex post, and to define standard processes for a fair selection and prioritization of the projects. The purpose here is to eliminate or reduce investment in poorly defined, poorly managed or low-value projects. According to Figure 1, the project portfolio sits at the top: compared to project management or program management, IT-PPM is conducted at the highest strategic level.

Fig. 1. IT project portfolio scenario flow from acquisition to project completion [22]

Thus another function of IT-PPM is to ensure that the suite of projects furthers the goals of the corporate strategy, i.e. to make sure that the IT projects carried out in the company correspond to what the business requires in order to deliver a higher benefit. A further function of IT-PPM is to create transparency in the project landscape of the company. On the one hand this can prevent project redundancies or overlaps, especially across business units and regions; on the other hand it supports the better exploitation of synergy effects. IT-PPM is also meant to make the different project investments comparable by establishing clear and formal decision-making rules and processes.

This simply means that uniform criteria are defined on the basis of which two different projects can be compared. Additionally, IT-PPM must allow planners to schedule resources more efficiently by assigning them to the highest-priority projects. The IT organization must therefore have an overview of all potential and current projects and ensure that resources are distributed in such a way that each project can be completed successfully; resource bottlenecks are to be foreseen. From all these functions it follows, in short, that the goal of IT-PPM is on the one hand to make sure that each project carried out in the company brings the customer the expected quality and benefits, and on the other hand to ensure that resources are directed only to those projects that meet the company's strategic goals. If these functions are fulfilled well, IT-PPM can be an effective instrument for gaining competitive advantage and increasing the total value of the company while minimizing the risks. Best-practice business cases for IT-PPM show that before the benefits materialize financially, IT-PPM also improves the social and technical parts of an organization. Figure 2 shows a decision network scenario representing an integrated chain of business processes tightly linking the employees within a company. The basic principles are, first, simplification to reduce risks; second, standardization to increase flexibility; third, modularity, so that companies use only what they need and only when they need it; and finally integration, to make systems coherent and easy to manage, modify and change.

Fig. 2. Decision network supports company strategy and architecture services


In order to compute the expected utility of an action a ∈ A, the action node is instantiated with that action. Moreover, the evidence variables are instantiated with the corresponding observations. Then algorithms for Bayesian networks are used to compute a probability distribution over the random variables that are parents of the utility node. The expected utility EU(a) can then be derived on the basis of this distribution. After this procedure has been performed for all a ∈ A, an optimal action is chosen according to the maximum expected utility criterion.
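The procedure can be sketched as follows for a hypothetical two-action funding decision; the distribution P(outcome | action, evidence) that a Bayesian network inference algorithm would deliver is abstracted here into a precomputed lookup table:

# Hypothetical sketch of the maximum-expected-utility procedure.
# P(outcome | action, evidence) would normally come from Bayesian
# network inference; here it is a precomputed lookup table.

p_outcome = {   # action -> distribution over the utility node's parent
    "fund":   {"success": 0.6, "failure": 0.4},
    "reject": {"no_project": 1.0},
}
utility = {"success": 500_000, "failure": -200_000, "no_project": 0}

def expected_utility(action):
    # EU(a) = sum over outcomes of P(outcome | a, e) * U(outcome).
    return sum(p * utility[o] for o, p in p_outcome[action].items())

for a in p_outcome:
    print(f"EU({a}) = {expected_utility(a):,.0f}")
print("optimal action:", max(p_outcome, key=expected_utility))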


IV. FUZZY RULE BASES

Consider a set of variables X_i with domains D_{X_i} (1 ≤ i ≤ n) and a variable Y with domain D_Y. Moreover, let F_i be a fuzzy partition of D_{X_i}, that is, a finite set of fuzzy subsets F ∈ F(D_{X_i}) such that max_{F ∈ F_i} F(x) > 0 for all x ∈ D_{X_i}. Likewise, let F be a fuzzy partition of D_Y. A fuzzy rule base R is a finite set of fuzzy rules of the form "If X_1 is in F_1 and X_2 is in F_2 and ... and X_n is in F_n then Y is in F". There are different types of fuzzy inference schemes. Formally, an inference scheme identifies a fuzzy rule base R with a function



\varphi_R : D_{X_1} \times \cdots \times D_{X_n} \to F(D_Y)    (6)


where F(D_Y) is the class of fuzzy subsets of D_Y. If a defuzzification operator F(D_Y) → D_Y is applied to the output of this function, the fuzzy rule base R induces a function

\varphi_R : D_{X_1} \times \cdots \times D_{X_n} \to D_Y.    (7)

Here we stick neither to a particular inference scheme nor to a special defuzzification operator. The important point is simply the following: once an inference scheme and a defuzzification operator have been determined, each fuzzy rule base R can be associated with a function of the form (7).
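As one concrete instance, the following sketch fixes a particular choice: triangular membership functions, max-min (Mamdani) inference over two hypothetical rules, and centroid defuzzification; all partitions and rules are assumptions for illustration:

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical partitions: X = project risk score, Y = contingency budget (%).
y = np.linspace(0, 30, 301)
low_budget, high_budget = tri(y, 0, 5, 15), tri(y, 10, 20, 30)

def infer(risk_score):
    # Rule 1: if risk is LOW  then budget is LOW.
    # Rule 2: if risk is HIGH then budget is HIGH.
    w_low = tri(risk_score, 0, 2, 6)     # membership of input in "low risk"
    w_high = tri(risk_score, 4, 8, 10)   # membership of input in "high risk"
    # Mamdani: clip each consequent at its rule's firing strength, take max.
    out = np.maximum(np.minimum(w_low, low_budget),
                     np.minimum(w_high, high_budget))
    # Centroid defuzzification turns the fuzzy output into a crisp value.
    return (y * out).sum() / out.sum()

print(f"risk 3 -> contingency {infer(3):.1f}%")
print(f"risk 7 -> contingency {infer(7):.1f}%")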


V. CONCLUSION


We have outlined a qualitative and quantitative IT project portfolio approach towards a value-based alignment of risk management. The purpose is the management of risks throughout the IT-PPM lifecycle. On a statistical foundation, single projects and their risks were classified, and it was described how uncertainty and risk are measured, analyzed and accumulated; most attention was paid to the quantification of risk based on decision theory. IT-PPM includes risk as a measurement from the business strategy down to the operational stages. "Unknown unknowns" in projects become measurable through uncertainty values and are visualized through Bayesian structures. Currently we are about to put the different points of the quantification and qualification framework into practice. Our goal is to develop a general method for measuring project portfolio risks. Apart from the points mentioned in the paper, we are also investigating further extensions; for example, in order to further reduce the complexity of the underlying architecture structure, a kind of feature selection suggests itself: only the most important evidence variables are selected for decisions, whereas the unimportant ones are ignored.





REFERENCES

[1] G. F. Cooper, "The computational complexity of probabilistic inference using Bayesian belief networks," Artificial Intelligence, vol. 42, 1990, pp. 393-405.
[2] A. Holland, "A Bayesian approach to model uncertainty in network balanced scorecards," in Proc. 10th Intern. Conf. on Soft Computing, Mendel 2004, Brno, Czech Republic, 2004, pp. 134-138.
[3] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. New Jersey: Prentice Hall, 2003, ch. 14.
[4] C. Borgelt and R. Kruse, Graphical Models: Methods for Data Analysis and Mining. New York: John Wiley & Sons, 2002, ch. 4.
[5] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Francisco: Morgan Kaufmann, 1988.
[6] F. V. Jensen, Bayesian Networks and Decision Graphs. Statistics for Engineering and Information Science, Berlin Heidelberg New York: Springer-Verlag, 2001.
[7] R. J. Brachman and H. J. Levesque, Knowledge Representation and Reasoning. San Francisco: Morgan Kaufmann, 2004, ch. 12.
[8] R. E. Neapolitan, Learning Bayesian Networks. New Jersey: Prentice Hall, 2004.
[9] A. Holland and M. Fathi, "Concurrent fusion via sampling and lin-op aggregation," in Proc. Modeling Decisions for Artificial Intelligence, MDAI 2006, Tarragona, Spain, April 2006.
[10] J. Pearl, Causality: Models, Reasoning and Inference. Cambridge: Cambridge University Press, 2000, ch. 1-3.
[11] K. P. Murphy, "Active learning of causal Bayes net structure," Technical Report, UC Berkeley, 2001.
[12] N. Friedman, K. P. Murphy, and S. Russell, "Learning the structure of dynamic probabilistic networks," in Proc. 14th Intern. Conf. on Uncertainty in Artificial Intelligence, UAI '98, Madison, Wisconsin, USA, 1998, pp. 139-147.
[13] J. G. March, Decisions and Organizations. Blackwell, 1988.
[14] Committee for Oversight and Assessment of U.S., The Owner's Role in Project Risk Management. National Academic Press, 2005.
[15] T. Kendrick, Identifying and Managing Project Risk. Amacom Management Association, 2004.
[16] C. H. Loch, A. deMeyer, and M. T. Pich, Managing the Unknown: A New Approach to Managing High Uncertainty and Risk in Projects. John Wiley & Sons, 2006.
[17] R. J. Chapman, Simple Tools and Techniques for Enterprise Risk Management. John Wiley & Sons, 2006.
[18] K. Schwalbe, Information Technology Project Management, 4th ed. Course Technology, 2005.
[19] M. Haeder, Delphi Befragungen. Westdeutscher Verlag, 2002.
[20] J. T. Marchewka, Information Technology Project Management. John Wiley & Sons, 2006.
[21] K. Wolf and B. Runzheimer, Risikomanagement und KonTraG. Gabler Verlag, 2001.
[22] H. Kerzner, Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 8th ed. John Wiley & Sons, 2003.
[23] M. W. Newell, Preparing for the Project Management Professional. American Management Association, 2005.
[24] D. F. Cooper, S. Grey, G. Raymond, and P. Walker, Project Risk Management: Managing Risk in Large Projects and Complex Procurements. John Wiley & Sons, 2005.

[25] Project Management Institute, A Guide to the Project Management Body of Knowledge, 3rd ed. PMI, 2004.
[26] D. Vose, Fundamentals of Risk Analysis and Risk Management. Lewis, 1997.
[27] R. T. Clemen, Making Hard Decisions: An Introduction to Decision Analysis. Duxbury Press, 1997.
[28] S. L. Baker, Perils of the Internal Rate of Return. Economics Interactive Lecture, 2000.
[29] R. G. Rosandich and S. Erquicia, Quantification of Uncertainty in Transportation Infrastructure Projects. Project Report, Center for Transportation Studies, University of Minnesota, USA, 2005.
[30] A. Holland and M. Fathi, "Analysis and transformation of graphical models," Int. Journal of Computational Intelligence, vol. 1(1), 2006, pp. 1-8.

