Simulation-Based Training of Ill-Defined Social Domains: The Complex Environment Assessment and Tutoring System (CEATS)

Benjamin D. Nye, Gnana K. Bharathy, Barry G. Silverman, & Ceyhun Eksin
University of Pennsylvania, Ackoff Center for Advancement of Systems Approaches
120B Hayden Hall, 3320 Smith Walk, Philadelphia, PA 19104
Keywords: Hybrid Tutoring, Simulation-Based Learning, Assessment, Military

Socio-cultural problems pose special challenges for training design. Problems in these domains have been called "wicked problems" due to their intractability [4]. Such problems are ill-defined: characterized by conflicting stakeholder values, disagreement over solutions, and interconnectedness between problems. Simulation-based learning can be used to explore these problems, but assessment is a bottleneck for training in ill-defined domains. Problems in ill-defined domains are heterogeneous: some have clear right and wrong answers, while others are subjective, context-dependent, or emergent.

A possible solution is hybrid tutoring, which combines multiple tutoring approaches [2]. A hybrid tutor could match different pedagogical interventions to different types of problems. However, hybrid tutoring lacks established design principles for matching domain problems with suitable interventions.

The Complex Environment Assessment and Tutoring System (CEATS) follows two principles to support hybrid tutoring. First, semantic interfaces decouple components by transforming the simulation environment into meaningful metrics; assessments then use these metrics as evidence to calculate measures of domain concept qualities. Second, the system supports families of assessments. Together, this design decouples assessments from the simulation and embeds meta-data that makes them meaningful to reporting and tutoring modules.

CEATS uses metrics as a semantic API for the learning environment. This allows different environments (e.g., simulation vs. database) to share the same metric specifications while providing their own function and query implementations. A metrics engine currently exists for use with a real-time simulation (described below), and a second metrics engine is being added to support metrics over a database of simulation runs.
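A minimal sketch of this decoupling is shown below. The class and method names, and the example metric, are hypothetical illustrations, not the actual CEATS interfaces: each environment implements the same metric specification against its own data source.

```python
from abc import ABC, abstractmethod


class MetricEngine(ABC):
    """Hypothetical semantic API: every learning environment answers
    the same metric specification from its own backing data."""

    @abstractmethod
    def evaluate(self, metric_name: str) -> float:
        """Compute the named metric for this environment."""


class SimulationMetricEngine(MetricEngine):
    """Sketch: reads a metric from live, real-time simulation state."""

    def __init__(self, sim_state: dict):
        self.sim_state = sim_state

    def evaluate(self, metric_name: str) -> float:
        # e.g., a faction support level read from the current state
        return float(self.sim_state.get(metric_name, 0.0))


class DatabaseMetricEngine(MetricEngine):
    """Sketch: computes the same metric over archived simulation runs."""

    def __init__(self, runs: list):
        self.runs = runs

    def evaluate(self, metric_name: str) -> float:
        # e.g., the metric averaged across stored runs
        values = [run.get(metric_name, 0.0) for run in self.runs]
        return sum(values) / len(values) if values else 0.0
```

An assessment layer can then call `engine.evaluate("faction_support")` without knowing which backend answers the query, which is what lets the two metrics engines share one metric specification.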
In CEATS, assessments are implemented as relationships between metrics and domain knowledge. Assessments include meta-data on the objectivity, usage, frame-of-reference, assessment type (qualifier), and domain knowledge associated with the measurement. Assessment qualifiers determine the basic meaning of the assessment, such as different types of attitudes (e.g. like/dislike) or learning about concepts (e.g. mastery level). They also support assessments that designate
when an opportunity to demonstrate learning or preferences has occurred. Objective vs. subjective specifies whether the assessment measures an objective truth (e.g., a math problem answer) or a subjective quality (e.g., a favorite math operator). Frame of reference refers to what the measurement is compared against: a fixed criterion (e.g., standards-based), a norm (e.g., compared to peers), or an ipsative baseline (e.g., compared against oneself at other times or on other tasks). Usage refers to the intended use of the assessment: formative assessment is valid during a task and tends to focus on process, while summative assessment occurs after task completion and focuses on outcomes.

The tutoring engine is under active development, targeting a hybrid design driven by assessment meta-data. Development currently focuses on three complementary types of interventions: error feedback, comparative feedback, and reflective prompts. Error feedback will be driven by objective, criterion-based assessments. Comparative feedback will be employed where ipsative or normed assessments are available, such as comparing a user's performance against prior performance or across skills. When only subjective criteria are available, the system will fall back on questions that help the user reflect on their actions. This design is novel because hybrid tutoring is driven by assessment meta-data rather than ad-hoc pairing of pedagogy to problems.

CEATS has been integrated with the StateSim simulation environment to support "Attack the Network" (AtN) counter-insurgency strategy training. The Department of Defense currently supports the AtN paradigm, which outlines strategies for kinetic and non-kinetic engagement of the insurgent networks that finance, develop, and deploy improvised explosive devices [3]. Users implement courses of action in StateSim, an agent-based simulation focused on interacting factions [5].
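The mapping from assessment meta-data to intervention type can be sketched as follows. The enum values and selection rules paraphrase the description above, but all identifiers and field choices are hypothetical, not the actual CEATS schema.

```python
from dataclasses import dataclass
from enum import Enum


class Objectivity(Enum):
    OBJECTIVE = "objective"    # e.g., a math problem answer
    SUBJECTIVE = "subjective"  # e.g., a favorite math operator


class FrameOfReference(Enum):
    CRITERION = "criterion"  # compared against a fixed standard
    NORMED = "normed"        # compared against peers
    IPSATIVE = "ipsative"    # compared against oneself


class Usage(Enum):
    FORMATIVE = "formative"  # during a task, process-focused
    SUMMATIVE = "summative"  # after a task, outcome-focused


@dataclass
class Assessment:
    qualifier: str              # e.g., "mastery", "like/dislike"
    objectivity: Objectivity
    frame: FrameOfReference
    usage: Usage
    domain_concept: str         # associated domain knowledge
    measure: float


def select_intervention(a: Assessment) -> str:
    """Pick a tutoring intervention from assessment meta-data (sketch)."""
    if (a.objectivity is Objectivity.OBJECTIVE
            and a.frame is FrameOfReference.CRITERION):
        return "error_feedback"
    if a.frame in (FrameOfReference.IPSATIVE, FrameOfReference.NORMED):
        return "comparative_feedback"
    # fall back to reflection when only subjective criteria are available
    return "reflective_prompt"
```

Because the selection rule reads only meta-data fields, new assessments inherit an intervention policy automatically instead of being paired with pedagogy ad hoc.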
StateSim competed in the DARPA Integrated Crisis Early Warning System (ICEWS) project, forecasting measures of state and regional instability with over 80% accuracy [1]. Currently, the CEATS engine provides metrics and assessment capabilities for a StateSim Afghanistan AtN training scenario. Future work on CEATS will complete the tutoring engine, supporting training for the ill-defined domain of counter-insurgency, and add authoring tools for assessments.
References

1. Bharathy, G.K., Silverman, B.G.: Validating agent based social systems models. In: Winter Simulation Conference (WSC) 2010, pp. 441–453. IEEE (2010)
2. Fournier-Viger, P., Nkambou, R., Nguifo, E.: Building intelligent tutoring systems for ill-defined domains. In: Nkambou, R., Bourdeau, J., Mizoguchi, R. (eds.) Advances in Intelligent Tutoring Systems, pp. 81–101. Springer (2010)
3. NTC Operations Group: Attack the Network Handbook (May 2010)
4. Rittel, H., Webber, M.: Dilemmas in a general theory of planning. Policy Sciences 4(2), 155–169 (1973)
5. Silverman, B.G., Bharathy, G.K., Nye, B.D., Kim, G.J., Roddy, M., Poe, M.: M&S methodologies: A systems approach to the social sciences. In: Sokolowski, J.A., Banks, C.M. (eds.) Modeling and Simulation Fundamentals: Theoretical Underpinnings and Practical Domains, pp. 227–270. Wiley & Sons, Hoboken, NJ (2010)