Dynamic Decision Support for Emergency Responders

Jill L. Drury*, Gary Klein**, Mark S. Pfaff#, and Loretta D. More##

The MITRE Corporation, *Bedford, MA, and **McLean, VA, USA, {jldrury, gklein}@mitre.org
# Indiana University Indianapolis, Indianapolis, IN, USA, [email protected]
## Pennsylvania State University, University Park, PA, USA, [email protected]

Abstract—To enhance support of emergency response decision making, we are investigating decision aids that use simulation models to predict the range of plausible consequences of each potential course of action. Due to the rapid pace of emergency operations, the user interface displaying the models' results needs to facilitate quick understanding of a changing landscape of possible futures. This paper describes a user test aimed at (1) determining whether a model-driven visualization was useful and (2) confirming the validity of a principled design approach for developing decision-making test situations. Two groups of participants received identical textual descriptions of situations and were asked to decide the number of emergency response vehicles to dispatch. One group also saw a visual depiction of model results. The decision-aided group's decisions were closer to the normatively correct decisions, and this group's confidence was higher. Further, the decision-aided group rated the degree of decision support higher. The decision times for each of the four types of test situations differed significantly from each other, revalidating our method of developing test situations.

INTRODUCTION
Because emergency response is so time-critical, decisions need to be made correctly and quickly if they are to have the best chance of saving lives and property. As recent wildfire and hurricane challenges have made abundantly clear, these decisions can be difficult to make against a backdrop of the many complex and dynamic factors that are beyond decision-makers' control.

Modeling to support decision-making has long been used for strategic planning by disaster responders, but using models effectively in a tactical, volatile environment can be challenging because of models' sensitivity to input assumptions. In the quest to develop the optimal solution, most models require detailed inputs that involve a number of assumptions and predictions, which may or may not come true.

This uncertainty about inputs, assumptions, and predictions can yield "brittle" optimal courses of action (COAs): results that point toward one COA may instead indicate another if the inputs turn out to be invalid in a changing environment. Even if the environment does not change, the approximations that are inherent in any human description of a situation can lead to mistaken decisions, because even precise simulations are still incomplete models of reality. There inevitably exists a gap between the description of a situation and the precise, accurate information needed to make optimal decisions. Hall et al. (2007) call this the Situation Space-Decision Space gap. Decision-makers must make sense of the descriptive information in the "situation space," such as surveillance, sensors, and alerts; this is situation awareness. They must then map this information into a "decision space" of the set of alternative COAs and their plausible consequences; this is option awareness.

An optimal COA is always extremely situation-dependent and is often sensitive to conditions beyond decision-makers' control. Under deep uncertainty (Lempert et al., 2003), decision-makers would be better served by aids for making robust decisions that are less sensitive to inaccuracies in situation descriptions, and therefore more likely to succeed across a wider range of plausible futures. Lempert et al. (2003) and Chandrasekaran (2007) describe general methods for identifying robust COAs by using simulation models that determine the consequences of each COA under a wide range of plausible futures. This robust decision-making (RDM) approach will require novel visualizations to provide users with option awareness of the range of simulated consequences of possible decisions.

Consider a fire chief who must decide how many assets to deploy to a single fire. If a warm wind might whip the flames in the direction of the local chemical plant, the chief might make a very different decision than if a drenching downpour were to occur. The chief might want to visualize the potential for weather extremes and their effects on allocating two versus three ladder trucks to the scene. A graphical display based on weather simulation could make it easy to comprehend, using a relative cost metric, the range of outcomes that could occur when allocating two versus three ladder trucks under a range of plausible conditions.

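To make the RDM idea above concrete, the following minimal sketch (in Python) evaluates each candidate truck allocation against many sampled futures and compares whole cost distributions rather than a single expected value. The cost model, the weather sampling, and all of the numbers here are hypothetical illustrations, not the simulation model used in this study.

```python
import random
import statistics

def simulate_cost(num_trucks, future):
    """Hypothetical cost model: response cost plus damage that grows
    when the wind is high and the allocation is too small."""
    response_cost = 50_000 * num_trucks
    shortfall = max(0, future["severity"] - num_trucks)
    damage = 200_000 * shortfall * (1.5 if future["high_wind"] else 1.0)
    return response_cost + damage

def cost_distribution(num_trucks, n_futures=1000):
    # Sample many plausible futures and compute the cost of this COA in each.
    futures = [{"severity": random.randint(1, 4),
                "high_wind": random.random() < 0.3}
               for _ in range(n_futures)]
    return sorted(simulate_cost(num_trucks, f) for f in futures)

for trucks in (2, 3):
    costs = cost_distribution(trucks)
    median = statistics.median(costs)
    q75 = costs[int(0.75 * len(costs))]   # 75th percentile cost
    worst = costs[-1]                     # worst sampled future
    print(f"{trucks} trucks: median={median:,.0f}  "
          f"75th pct={q75:,.0f}  worst case={worst:,.0f}")
```

A robust choice is then one whose upper percentiles stay acceptable across the sampled futures, even if its median cost is not the lowest.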
The purpose of this study was two-fold. Primarily, we wished to test the effect on decision making of providing a visualization of the decision space. Secondarily, we aimed to replicate previous results (Drury et al., 2009a) regarding the design of complicated decision-making test situations based on the principles of RDM.

VISUALIZING COSTS
Although our work will be applicable to many domains in which decisions are made, our focus area was emergency response. To compare the relative desirability of alternative COAs, we developed a cost metric that takes into account both immediate and future costs. Immediate costs include the cost to send resources, the current magnitude of the emergency, property damage that might occur in the course of the response, and a dollar value assigned to both the injuries and loss of life that might occur (known as potential casualties). However, one must also consider future costs in the current decision: if too many resources are allocated to the current emergency, there may not be enough to handle a future emergency, resulting in additional losses. The costing of current COAs should thus include some estimate of future damage, injuries, or loss of life that might be made more extensive because of the resources sent to the immediate event.

Our cost metric is, in fact, a multi-attribute utility (MAU) function (Chatfield et al., 1978; Keeney and Raiffa, 1993). Holloway (1979, p. 11) notes that "a major difficulty in making decisions under uncertainty is that 'good' decisions can have 'poor' outcomes and vice versa." The problem with previous MAU representations is that the distributions of these good and poor outcomes and their values are collapsed into a single probability-weighted average for each option. Decision-makers find it difficult to accept and meaningfully interpret this kind of MAU representation, which is one reason that MAU has been so underutilized in the field by decision-makers under uncertainty (Klein, 1982). In contrast, the RDM approach retains the distribution information and presents it visually to the user. RDM not only shows the median MAU value of each option, but also the distribution of MAU values that occurs across multiple plausible futures.

Based on Tukey's (1977) box-plots, we developed a statistical box-plot visualization of the costs of the COAs facing an emergency response decision-maker. We are aware of the universe of possible data visualizations, such as the hundreds of examples that can be seen at www.visualcomplexity.com or the "periodic table of visualizations" at visual-literacy.org. Box-plots were selected for this experiment because they are a common visualization of distributions that typical research subjects can be readily trained to read.

We used a simplified box-plot visualization (with no outlier data points) in this experiment because our intended audience of emergency responders will likely not have previous knowledge of sophisticated statistical concepts. In the visualizations (see Figure 1), each box-plot shows the lowest cost (0th percentile) for the best possible future as the lower whisker, the 25th percentile cost as the bottom of the box, the median cost as a red line within the box, the 75th percentile cost as the top of the box, and the highest cost (100th percentile) as the top whisker.

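As an illustration of how such a simplified display could be generated, the sketch below draws six box-plots with whiskers pinned to the 0th and 100th percentiles and outlier points suppressed. The cost samples are synthetic placeholders; in the study they came from the computational simulation model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical cost samples (dollars) for sending 0-5 response vehicles;
# in the study these came from the simulation model, not from a random draw.
rng = np.random.default_rng(0)
cost_samples = [rng.gamma(shape=2.0, scale=100_000 / (k + 1), size=500)
                + 50_000 * k
                for k in range(6)]

fig, ax = plt.subplots()
# whis=(0, 100) puts the whiskers at the 0th and 100th percentiles, and
# showfliers=False suppresses outlier points, matching the simplified
# box-plots described above; the median line is drawn in red.
ax.boxplot(cost_samples, whis=(0, 100), showfliers=False,
           medianprops={"color": "red"})
ax.set_xticklabels([str(k) for k in range(6)])
ax.set_xlabel("Number of emergency response vehicles sent")
ax.set_ylabel("Cost")
plt.show()
```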
[Figure 1 image: box-plots of cost (y-axis, annotated "Likelihood Decision Will be Regretted") versus number of fire trucks sent (x-axis: 1, 2).]

Figure 1—Box-plot visualization showing the relative costs of sending one fire truck versus two, assuming a range of possible futures.

Option awareness comes from comparing the relative costs among all of the potential COAs. In each vignette, participants could choose between sending 0, 1, 2, 3, 4, or 5 emergency response vehicles (police squad cars, ambulances, or a fictional type of all-purpose fire truck). Therefore, participants saw six box-plots, one for each of the six quantities they could allocate, with the y-axis representing cost and the x-axis representing the quantity of emergency response vehicles to allocate. The combination of the six box-plots is akin to Tufte's "small multiples" (Tufte, 1990), in which a series of simple graphics differ in only a few key ways.

We are mindful of the possibility that some situations will have dozens or hundreds of possible COAs instead of only the handful that we assume here. Thus any visualization approach should be scalable to more than a few COAs. While it is difficult to imagine viewing hundreds of box-plots simultaneously, there have been small-scale variants of box-plots, such as one that arranges data along radial lines originating from the center of a circle (Jayaraman and North, 2002). Hundreds of slender box-plots could be displayed using a combination of Jayaraman and North's visualization approach and a fisheye lens (Furnas, 1986) to allow users to zero in on potential choices.

Thus findings from investigating a simple box-plot visualization approach should scale well and generalize to other, similar displays.

We used the Principled Ambiguity Method of test case construction (Drury et al., 2009a) to ensure that each ambiguous vignette contained carefully selected conflicts of known types. Each participant was asked to make decisions in ten vignettes of each of the four types described in the Methodology section below (presentation order was randomized).

METHODOLOGY
Experiment Design
The experiment employed a mixed design. Between subjects: one group of participants received only situation space information (the SS group), a textual description of an emergency event; the other (the DS group) received both the situation space information and decision space information in the form of a box-plot visualization. Within subjects: each participant, regardless of group, was asked to make decisions in both unambiguous and ambiguous vignettes of four types:

1. Unambiguous. There was clearly one COA that was better than the others in all dimensions.

2. Ambiguous, robustness conflict of best-case vs. median-case vs. worst-case costs. One COA had a higher median cost and/or a higher 0th or 25th percentile cost than the other COAs, but it also had a lower 75th or 100th percentile cost. (Figure 1 illustrates this case; a sketch of how such a conflict can be constructed follows this list.) This means that this COA would almost certainly result in a higher cost than the other COAs under the best circumstances, thus making it less attractive on a best-case basis. However, it would have a much lower probability of catastrophic results than the other COAs, thus making it the preferred choice on a worst-case basis.

3. Ambiguous, cost function conflict of incident magnitude versus cost. All things being equal, smaller incidents normally require fewer resources, and larger incidents should require more resources. But things are not always equal. For example, a small trash can fire in a major art gallery could result in millions of dollars in art restoration; accordingly, more resources should be expended than would ordinarily be the case. Similarly, picture a large forest fire in the middle of a wilderness area in March; in this case, few (if any) resources should be expended.

4. Ambiguous, cost function conflict of current cost versus future cost. These vignettes indicate that another emergency is imminent. There is a tension between using many resources to rectify the current emergency quickly, versus using fewer resources to combat the emergency more slowly while saving resources to respond to the next emergency.

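As noted under vignette type 2 above, a robustness conflict can be constructed by giving one COA a lower median cost but a heavier catastrophic tail than its alternative. The sketch below is a hypothetical illustration of that construction; the distributions and parameters are not those of the study's computational model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "robustness conflict" (vignette type 2): COA A is cheaper in
# the median but carries a heavy right tail (possible catastrophe), while
# COA B costs more up front but is far less likely to be regretted in the
# worst case.
coa_a = rng.lognormal(mean=np.log(150_000), sigma=0.9, size=1000)
coa_b = 120_000 + rng.normal(loc=150_000, scale=30_000, size=1000)

for name, costs in (("COA A", coa_a), ("COA B", coa_b)):
    # 0th, 25th, 50th, 75th, and 100th percentile costs, as in the box-plots.
    pct = np.percentile(costs, [0, 25, 50, 75, 100])
    print(name, np.round(pct, -3))
```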
Experiment Conduct
Participants were assigned to the SS group or the DS group randomly. All participants were asked to read a paper copy of a one-page introduction to the experiment, which included Institutional Review Board (IRB) information. They were then given a paper copy of a training manual to read and keep as a reference during the experiment, as well as a paper copy of Frequently Asked Questions (FAQ). We ensured that the training materials for both groups provided information about judging the relative costs of the alternatives. Next, participants were given ten training vignettes in the computerized testbed so that they could become familiar with its interface and the types of decisions they were being asked to make. After the training, they completed 40 vignettes on the computer testbed, during which they were asked to play the roles of police or fire/rescue commanders.

Each vignette contained a short textual situation-space description of the emergency that included information suggesting the likely cost of the incident (e.g., a fire at a jewelry shop) and the likelihood of another incident occurring soon. For the DS group there was also a box-plot diagram of the decision space comparing the COAs. While the box-plots were created using a computational model, some were adjusted to accentuate ambiguity. Each vignette was completely independent; what happened in one vignette did not affect another vignette, and the number of resources available was reset to the maximum at each new vignette.

After reading the textual description, each participant was asked to estimate three parameters: the current magnitude of the emergency incident (via a semantic differential scale implemented as a slider from a low value of 0 to a high value of 7), the likely property damage that could result (radio buttons indicating low, medium, or high), and the potential casualties (also low, medium, or high). After setting each parameter, participants were asked to rate their confidence in these estimations using a semantic differential scale from a low of 0 to a high of 7. On the next screen, participants were again shown the textual description (and the box-plots if they were a member of the decision-aided group) and were asked to make a decision regarding the number of resources to send (0 to 5). Immediately after each decision, participants were asked to rate their confidence in that decision on a semantic differential scale (from a low of 0 to a high of 7). All participants were asked two final questions during each vignette: How much does this decision impact your ability to deal with future situations? (Radio buttons indicated the possible answers of low, medium, or high.) What is the likelihood of future situations occurring? (Possible answers were "less than usual," "same as usual," and "more than usual.") Figure 2 illustrates this screen for the DS group.

During the training vignettes, participants were given feedback. After they entered their estimates for the three parameters (current magnitude, property damage, and potential casualties), they were provided with the actual values used in the computational model. After they chose their resource option, they were given the correct number of resources to send based on the model. Because the testbed was instrumented to capture the amount of time spent making the decision of how many resources to allocate during the test vignettes, the training was important to ensure that participants were comfortable with the interface and the questions being asked of them.

After completing all of the vignettes, participants answered survey questions, including questions probing their subjective assessment of the decision support provided to them.

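For concreteness, the sketch below shows the kind of per-vignette record such a testbed might log; the field names and types are illustrative assumptions rather than the actual testbed's schema.

```python
from dataclasses import dataclass

@dataclass
class VignetteResponse:
    """Hypothetical record of what the testbed might log per vignette."""
    vignette_id: int
    condition: str            # "SS" or "DS"
    magnitude_estimate: int   # slider, 0-7
    property_damage: str      # "low" / "medium" / "high"
    casualties: str           # "low" / "medium" / "high"
    estimate_confidence: int  # 0-7
    vehicles_sent: int        # 0-5
    decision_confidence: int  # 0-7
    future_impact: str        # "low" / "medium" / "high"
    future_likelihood: str    # "less" / "same" / "more than usual"
    decision_time_sec: float  # captured by the instrumented testbed
```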
Participants
We recruited a total of 41 participants from a not-for-profit corporation and a university. Thirty-five participants worked at the not-for-profit corporation, of whom 13 were female and 22 were male. Of these 35 participants, 14 had previous experience in emergency response. Two were 21 – 25 years old, three were 26 – 30, five were 31 – 35, two were 36 – 40, three were 41 – 45, three were 46 – 50, five were 51 – 55, six were 56 – 60, five were 60 – 65, and one was 66 or older. The six participants at the university were all male, and two had previous emergency response experience. One of the university participants was between 18 and 20 years of age and the other five were 21 – 25.

Mitigating Confounders
As part of the post-test survey, we used three different survey instruments to determine where participants fell on the spectra of risk taking versus risk aversion (Blais and Weber, 2006), visual versus verbal information processing (Childers et al., 1985), and vivid versus non-vivid imaging (Sheehan, 1967). We also surveyed participants' prior experience with emergency response and with box-plots. We ensured that participants in both groups were trained on the same concepts despite the different information presentation. Finally, as stated previously, we randomized the assignment of participants to conditions and also randomized the order in which we presented vignettes to participants.

Figure 2—Screen capture of DS group’s final screen for a typical vignette.



HYPOTHESES

We defined six a priori hypotheses.

H1: The DS group will make decisions that will result in more positive outcomes than the SS group. H1 rationale: Since the box-plots take into account a number of factors that affect decision outcome, to the degree that they make the outcome quality clear, participants will be able to choose a response that will lead to a better outcome (that is, they will choose a decision that is closer to the normatively correct COA).

H2: The DS group will be more confident in their decisions than the SS group. H2 rationale: The box-plots will make it clearer which COAs will lead to more positive outcomes, so to the extent that this clarity is communicated to participants, they will be more confident in their decisions.

H3: Non-ambiguous decisions will be made faster than ambiguous decisions by participants in both groups. H3 rationale: Non-ambiguous decisions should be easier to make than ambiguous decisions, and therefore all participants should make them faster.

H4: In the case of ambiguous decisions, the DS group will take longer to make the decisions. H4 rationale: When the decision is ambiguous, the decision aid will highlight the ambiguity and cause a decision-maker to stop and deliberate more than they would otherwise.

H5: The DS group will give significantly higher scores for the degree of decision support provided to them. H5 rationale: The box-plots should help in determining which COA is likely to be best. The DS group should be more likely to feel that they have received some support in their decision making than the SS group, which only receives situation information.

H6: Participants in both groups who believe there is a large chance of future events occurring will under-allocate resources. H6 rationale: Some of the vignettes include hints that another emergency is highly likely to happen in the near future. If participants use those hints to develop a belief that another emergency event is likely to happen soon, we hypothesize that they will want to conserve resources to address that possible future event.


RESULTS

H1 (the DS group will make decisions that will result in more positive outcomes) was supported. The DS group made the correct resource allocation 68% of the time, compared to 40% in the SS group (χ²(1) = 122.99, p < .001). Based on the odds ratio, individuals with the decision support were 3.15 times more likely to get the correct answer than those without.
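The reported odds ratio can be approximately reconstructed from the two group percentages; the quick check below is a sketch using the rounded percentages rather than the underlying trial counts.

```python
# Approximate odds ratio from the reported success rates: 68% correct with
# the decision aid versus 40% without. The exact trial counts are not shown
# here, so this only approximates the reported value.
p_ds, p_ss = 0.68, 0.40
odds_ds = p_ds / (1 - p_ds)   # 2.125
odds_ss = p_ss / (1 - p_ss)   # 0.667
print(round(odds_ds / odds_ss, 2))  # ~3.19, close to the reported 3.15
```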

H2 (the DS group will be more confident in their decisions) was also supported. A one-way Kruskal-Wallis test showed that participants with the decision aid reported much higher confidence (M = 5.41) than those without (M = 4.95), H(1) = 24.11, p < .001. A subsequent Kruskal-Wallis test showed that participants with the DS also reported much higher confidence for non-ambiguous vignettes (M = 5.55) over the other three types (M = 5.07, 5.11, and 4.99), H(3) = 34.12, p < .001.

H3 (non-ambiguous decisions will be made faster) was supported as well. A mixed factorial ANOVA for decision time (R² = .48) showed that decisions about non-ambiguous vignettes were made in the fastest time (M = 41.21 sec.) of all four types (all means differ significantly at α = .05). Figure 3 illustrates the four decision times.

Figure 3—Time to make decisions in four different types of vignettes.

H4 (the DS group will take longer to make ambiguous decisions) was not supported. A mixed ANOVA showed no significant difference in decision time with respect to condition for any type of vignette, whether unambiguous or ambiguous (p = .30).

H5 (the DS group will give significantly higher scores for the degree of decision support provided to them) was supported. The DS group rated the system as more highly supportive (M = 5.3) than the SS group (M = 4.5), t(38) = 2.14, p < .05.

H6 (participants who believe there is a large chance of future events occurring will under-allocate resources) was not supported. A one-way within-subjects ANOVA (R² = .09) found that those reporting a "more than usual" chance of future events occurring over-allocated by .10 resources, while those answering "same as usual" or "less than usual" under-allocated by .11 and .09 resources, respectively.

No significant effects were found for any of the following covariates:

 Prior experience with box-plots
 Emergency response experience
 Vivid versus non-vivid imaging ability
 Risk taking versus risk aversion
 Visual versus verbal information processing preference

DISCUSSION
This experiment replicates and extends an earlier experiment by the authors, reported in Drury et al. (2009a) and Drury et al. (2009b). The differences between the current and earlier experiments are as follows.

 The earlier experiment did not require participants to set the current magnitude, property damage, and potential casualty parameters for each vignette. We forced this interaction as part of the second experiment to ensure participants' attention to these parameters (and we plan to analyze whether they set the parameters accurately as part of future experimentation).

 Unlike the second experiment, the first experiment did not ask participants to estimate possible impacts on future events and the likelihood of those events occurring. We wished to see if participants allocated resources differently based on anticipating the need to respond to another emergency in the near future.

 The second experiment involved substantial refinements to training materials to better prepare participants.

 More participants took part in the second experiment (35) than in the first (21).

The results in this latest experiment were consistent with the previous experiment, with some exceptions as discussed below. The experiments demonstrated the utility of visualizing robustness. In both experiments, the decision space information significantly and positively affected the decisions made by the DS group. Participants did not appear to have difficulty in understanding or making use of the box-plots.

In the second experiment the DS group members had significantly more confidence in their decisions than the SS group members, whereas we did not measure a statistically significant difference in confidence during the first experiment. We believe the improved training materials are the likely cause of the increase in confidence. However, it is also possible that entering their own estimates of the three input parameters gave the participants a sense of interaction with the model, which increased their confidence (Williamson and Shneiderman, 1992).

This experiment also revalidated the Principled Ambiguity Method of constructing test situations. We created the ambiguous cases by exploiting the tension among the terms in the cost equation. As in the first experiment, participants in the second experiment had less confidence and took longer to make decisions in the ambiguous cases than in the non-ambiguous cases. Moreover, the times taken for decisions of each type of ambiguity were statistically different from each other, confirming that our Principled Ambiguity Method successfully introduced decision-making conflict based on decision space trade-offs. This result adds a new perspective on why people have difficulty making decisions in some situations. In particular, the trade-off between robustness and median cost only becomes apparent in the decision space. It was therefore unsurprising that the box-plot visualization, which makes this particular trade-off very apparent, had a positive effect on confidence in the second experiment.

At first we were surprised by the results of H6, because we fully expected that participants who believed there was a high chance of emergencies occurring in the near future would tend to under-allocate resources for the current emergency. Upon further thought, we realized that the vignettes that hinted at future emergencies usually involved a serious current emergency, due to our efforts to ensure a pronounced conflict between the desire to send adequate resources to the situation at hand and the desire to save resources for the future. We now believe that the relatively large magnitude of the current vignette tended to focus participants on the need to send a high number of resources immediately.

FUTURE WORK
Our next task consists of running two additional experimental conditions that are identical to those reported in this paper, except that participants will not be asked to estimate the three parameters of current magnitude, property damage, and likely casualties. We will compare the results of these conditions to the results in this paper to see whether estimating the parameters makes a difference in how participants decide to allocate resources and in their confidence in their choices. Note that estimating input parameters in this series of experiments does not, in fact, affect what the model uses in its calculations of costs for COAs.

We are considering future experiments that will allow participants to actually interact with the input parameters via sliders, such as those used in interfaces designed by Shneiderman (e.g., Williamson and Shneiderman, 1992), to affect how the model assigns costs to the alternative COAs. We have already begun developing prototypes that would allow users to drill down into the output of the model to see the details beneath each box-plot visualization. In doing so, users can determine the characteristics of the cases that lead to especially high (or low) costs for each COA, and make judgments regarding whether they feel these cases are particularly likely to occur.

Further, we would like to investigate different visualization approaches and combine visualizations of the decision space with key aspects of the situation space, such as providing ways to visualize COAs in the context of a map. A combined situation space/decision space approach may enable decision-makers to pick out patterns that they would not otherwise see.

We have also planned experiments to determine whether there is a psychological impact of differences in COA visualizations that occur due to changes in the granularity of the underlying models. There is a high pay-off in terms of response time for providing decision-makers with information based on the most coarse-grained model that will still support accurate decision-making.

Finally, we plan to investigate RDM approaches for the very rich area of collaborative decision-making. There are many questions to answer, such as whether a team of decision-makers would be better served by seeing each other's decision space information or whether they should view a single, joint decision space.

ACKNOWLEDGMENTS
This work was supported by The MITRE Corporation Innovation Program projects 19MSR062-AA and 45MSR019-AA. We wish to thank Elaine Bochniewicz of The MITRE Corporation for help in running the experiment. Thanks are also due to Nathan Rackliffe of The MITRE Corporation for his work on this project building prototypes that make use of these concepts, as well as for his presentation of this work at the HST '09 conference. Further, we wish to thank Robert Hooper of PSU for help with vignette and testbed development. Finally, thanks to David Hall and Jacob Graham of PSU for acting as sounding boards for ideas discussed in this project and for information systems support to conduct the experiment.

REFERENCES
1. Blais, A. and Weber, E. A domain-specific risk-taking (DOSPERT) scale for adult populations. Judgment and Decision Making, 1(1), 33-47, 2006.
2. Chandrasekaran, B. From Optimal to Robust COAs: Challenges in Providing Integrated Decision Support for Simulation-Based COA Planning. Laboratory for AI Research, The Ohio State University, 2005.
3. Chatfield, D. C., Klein, G. L., Copeland, M. G., Gidcumb, C. F., and Schafer, C. Multiattribute utility theory and conjoint measurement techniques for air systems evaluations. Technical Report N6126-78-1998, Pacific Missile Test Center, 1978.
4. Childers, T. L., Houston, M. J., and Heckler, S. E. Measurement of individual differences in visual versus verbal information processing. The Journal of Consumer Research, 12(2), 125-134, 1985.
5. Drury, J. L., Pfaff, M., More, L., and Klein, G. L. A principled method of scenario design for testing emergency response decision-making. In Proc. of the 2009 International Conference on Information Systems for Crisis Response and Management (ISCRAM 2009), Goteborg, Sweden, May 2009a.
6. Drury, J. L., Klein, G. L., Pfaff, M., and More, L. Dynamic data visualizations for decision support. In Proc. of the Human Interaction with Intelligent and Networked Systems Workshop at the Intelligent User Interfaces 2009 Conference, Sanibel Island, Florida, February 2009b, available at http://www.cs.bath.ac.uk/hiins/acceptedpapers.html.
7. Furnas, G. W. Generalized fisheye views. In Proc. of the Conference on Human Factors in Computing Systems (CHI '86), ACM Press, 1986.
8. Keeney, R. L. and Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York, NY: Cambridge University Press, 1993.
9. Jayaraman, S. and North, C. A radial focus+context visualization for multi-dimensional functions. In Proc. of the Conference on Information Visualization, Boston, IEEE Computer Society, 2002.

10. Klein, G. L. Heuristic influence on decision processes. Ph.D. thesis, Department of Psychology, Texas Tech University, 1982.
11. Hall, D. L., Hellar, B., and McNeese, M. Rethinking the data overload problem: Closing the gap between situation assessment and decision making. In Proc. of the 2007 National Symposium on Sensor and Data Fusion (NSSDF), Military Sensing Symposia (MSS), McLean, VA, 2007.
12. Holloway, C. A. Decision Making Under Uncertainty: Models and Choices. Englewood Cliffs, NJ: Prentice Hall, 1979.
13. Lempert, R. J., Popper, S. W., and Bankes, S. C. Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Santa Monica, CA: RAND MR-1626, 2003.
14. Sheehan, P. A shortened form of Betts' questionnaire upon mental imagery. Journal of Clinical Psychology, 23(3), 386-389, 1967.
15. Tufte, E. R. Envisioning Information. Cheshire, CT: Graphics Press, 1990.
16. Tukey, J. W. Exploratory Data Analysis. Reading, MA: Addison-Wesley, 1977.

17. Williamson, C. and Shneiderman, B. The dynamic HomeFinder: Evaluating dynamic queries in a real-estate information exploration system. In Proc. of the 15th Annual International ACM SIGIR Conference on R&D in Information Retrieval, Copenhagen, 1992.
