Modeling Emotion-Influenced Social Behavior for Intelligent Virtual Agents

Jackeline Spinola de Freitas 1,2, Ricardo Imbert 2, and João Queiroz 1,3

1 School of Electrical and Computer Engineering, State University of Campinas, P.O. Box 6101 – 13083-852, Campinas-SP, Brazil
2 Computer Science School, Universidad Politécnica de Madrid, Campus de Montegancedo, s/n, 28660 Boadilla del Monte, Madrid, Spain
3 Research Group on History, Philosophy, and Teaching of Biological Sciences, Institute of Biology, Federal University of Bahia, Salvador-BA, Brazil

{jspinola,queirozj}@dca.fee.unicamp.br, [email protected]

Abstract. In the last decades, findings about emotion in cognitive science and neuroscience have motivated the design of emotion-based architectures to model individuals' behavior. Currently, we are working with a cognitive, multilayered architecture for agents which provides them with emotion-influenced behavior and has been extended to model social interactions. This paper presents this architecture, focusing on its social features and on how it can be used to model agents' emotion-based social behavior. A prey-predator simulation model is presented as a test-bed for the architecture's social layer.

Keywords: Emotion-Based Architecture, Intelligent Virtual Agents, Social Interaction, Computational Simulation.

1 Introduction

Recent findings in neuroscience concerning the mechanisms, functions, and nature of emotions have promoted a review of the association between emotion, reason and logical behavior in human beings [1], [2], [3], [4]. Based on the evidence that emotions play a significant role in diverse cognitive processes, and that they are essential for problem solving and decision making [1], [5], [6], [7], [8], Computer Science and Artificial Intelligence have started to model perception, learning, decision processes, memory, and other functions through emotion-based agents. One particular application of these studies is the design of emotion-based architectures to model individuals' behavior.

We have been working with a cognitive, multilayered architecture for agents, called COGNITIVA, which provides them with emotion-influenced behavior. It includes a flexible model that allows setting up dependencies and influences of elements such as personality traits, attitudes, physical states, concerns and emotions on agents' behavior. The architecture is generic, but provides mechanisms that allow adapting it to specific contexts through progressive specification.

A. Gelbukh and A.F. Kuri Morales (Eds.): MICAI 2007, LNAI 4827, pp. 370–380, 2007. © Springer-Verlag Berlin Heidelberg 2007

COGNITIVA has been used to model agents, and more particularly, Intelligent Virtual Agents (IVAs), in contexts of


very different nature, such as virtual bidder agents in Vickrey auctions [9], virtual zebras and lions in a 3D virtual African savannah [9], [10], and virtual storytelling characters [11]. The resulting behavior of the emotion-influenced IVAs in these experiments was proper and coherent, showing that the inclusion of emotion in such systems does not entail loss of control.

COGNITIVA is now being tested in social contexts, in which emotions are considered of paramount importance. For instance, [12] argue that recent advances in the understanding of the intrapersonal characteristics of emotions have facilitated the complementary study of their interpersonal functions, and research has begun to address the consequences of emotion beyond the individual, focusing on the ways emotions are embedded within ongoing interactions [13], [14], [15], [16], [17]. [14] and [16] state that emotions are dynamic, relational processes that coordinate the actions of individuals in ways that guide their interactions toward more preferred conditions, and thus organize behavioral and cognitive responses within the individual as well as interactions between individuals [14], [18], [19]. With such background knowledge, we have been working to include the appropriate psychological concepts at the social layer of COGNITIVA as well, to model IVAs' emotion-based behavior in social contexts, improving their believability and even their performance.

The number of emotion-based architectures that contemplate social interaction has increased in the last two decades. However, unlike COGNITIVA, most of them focus mainly on user-agent interaction, like chatterbots, virtual storytelling characters and life-like characters for games and user interfaces. Additionally, most of them are context-dependent and do not allow their validation as a generic architecture ([20], [21]), or try to imitate unknown brain processes ([22]).
In order to discuss our research, COGNITIVA and its main components are succinctly described in the following section (details in [10], [11]). The social layer of this architecture is fully connected with all of these components but, since it is new, we have opted to present most of its characteristics separately, in Section 3. Section 4 describes the social simulation context that serves as a test-bed for the new features of the architecture. Finally, in Section 5 we present some final comments.

2 A Review of COGNITIVA

As we have mentioned, COGNITIVA is a multilayered agent-oriented architecture that covers several kinds of behavior: reactive, deliberative and social. Each behavior comes from a related architecture layer:

- The reactive layer is responsible for providing the agent with immediate responses to changes in its environment;
- The deliberative layer provides the agent with goal-directed behavior, from the individual perspective and the abilities the agent has per se;
- The social layer also provides the agent with goal-directed behavior, but additionally considers the existence of other agents, the interaction with them, and the potential use of their abilities to achieve personal and/or global goals.

The cognitive architecture receives inputs through an Interpreter, which filters and transforms the perceptions coming from the sensors into understandable units (percepts) that can be sent to the rest of the cognitive processes.
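The input path just described might be sketched as follows; `Percept`, `Interpreter` and the filtering criterion are illustrative assumptions of ours, not COGNITIVA's actual identifiers:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    source: str   # e.g. "vision", "hearing"
    content: str  # interpreted, understandable unit

class Interpreter:
    """Filters and transforms raw sensor perceptions into percepts."""
    def __init__(self, relevant_sources):
        self.relevant_sources = set(relevant_sources)

    def interpret(self, raw_perceptions):
        # Drop readings from sources the agent does not attend to.
        return [Percept(src, data)
                for src, data in raw_perceptions
                if src in self.relevant_sources]

interp = Interpreter(relevant_sources=["vision", "hearing"])
percepts = interp.interpret([("vision", "predator at 50m"),
                             ("smell", "grass"),
                             ("hearing", "alarm call")])
print([p.content for p in percepts])  # the "smell" reading is filtered out
```

The same percept list would then feed all three layers, which is consistent with the single-entry-point design described above.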


An IVA's internal representation of every information source is called Beliefs, including knowledge about the environment (places and objects), about other IVAs (individuals), and even about the IVA itself. For their management, COGNITIVA structures them into a taxonomy, as follows:

- Beliefs related to Defining Characteristics (DC), which describe the general traits of places, objects and individuals, and are important to identify them;
- Beliefs related to Transitory States (TS), characteristics whose values represent the current state of places, objects and individuals;
- Beliefs related to Attitudes, parameters that determine the behavior of an IVA towards other environmental components (places, objects and individuals).

A subset of the IVA's Beliefs is related to the IVA itself and is fundamental to the architecture's workings. This subset constitutes what is called the IVA's Personal Model, and contains representations of personality traits, moods, physical states, attitudes and concerns. These components are organized as: (i) DC such as personality traits, whose values determine coherent and stable behavior of the IVA; (ii) TS such as moods and physical states, identifying respectively the current state of the mind and the body of the IVA; and (iii) its attitudes towards others.

COGNITIVA also proposes the use of concerns to represent the range of desirable values of the IVA's Transitory States at a specific moment, particularly for its emotions and physical states. Each concern has an associated priority and upper and lower acceptable thresholds. Deliberative and social processes also manage the concerns' threshold values to control reactive actuation whenever it is needed [10].

Many of these elements are intertwined. The relationships among them reveal the influence that each one has on the others, as follows: (i) Personality traits exert an important influence on moods.
For instance, faced with the same dangerous situation, a courageous IVA might feel less fear than a pusillanimous one; (ii) The set of the IVA's Attitudes has some influence on the moods it experiences. For example, the presence of a predator in an environment will increase a prey's fear, because of its attitude of apprehension towards predators; (iii) Personality traits influence Attitudes. E.g., an altruistic character may decide to interrupt its activities to serve another agent; (iv) Physical states influence moods. E.g., a hungry virtual animal might be more susceptible to irritation; (v) Personality traits exert some influence on concerns, and specify them according to individual characteristics. E.g., a courageous animal might have different upper and lower thresholds of risk avoidance than a cowardly one.

The set of these special Beliefs represents the state of an IVA, and has a strong influence on its behavior. That is why we say that emotions are not considered just as a component that provides the system with some 'emotional' attribute; beliefs have been designed to decisively influence the IVA's behavior.

During its life cycle, an IVA must be capable of maintaining information about what has happened in previous states. It maintains its Past History which, along with perceptions and beliefs, is used to improve the IVA's behavior selection. To update beliefs and past history, COGNITIVA incorporates the concept of expectations, which captures the IVA's predisposition towards an event that has happened or can happen. The expectations about an event are valued in terms of its probability of occurrence (Expectancy) and the desirability of its occurrence (Desire). Confirmation or disconfirmation of the occurrence of a given event generates a rich set of emotions.
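As an illustration of how Expectancy and Desire could combine with an event's confirmation or disconfirmation to yield different emotions, consider the following sketch; the particular emotion labels and cutoffs are our own assumptions, not values prescribed by the architecture:

```python
def emotion_from_expectation(expectancy, desire, occurred):
    """expectancy in [0, 1], desire in [-1, 1]; occurred: did the event happen?"""
    if occurred:
        # A confirmed event is pleasant or unpleasant according to its Desire.
        return "joy" if desire > 0 else "distress"
    # Disconfirmation: the reaction depends on what was hoped for or feared.
    if desire > 0:
        return "disappointment" if expectancy > 0.5 else "resignation"
    return "relief"

# A strongly expected, strongly desired event that fails to occur:
print(emotion_from_expectation(expectancy=0.9, desire=0.8, occurred=False))
```

A richer mapping (e.g. with graded intensities rather than labels) would fit the same interface.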

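Pulling together the Personal Model elements reviewed in this section — Defining Characteristics, Transitory States, attitudes, and concerns with priorities and thresholds — a minimal data sketch might look like the following; all field names are our illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Concern:
    name: str      # e.g. "fear", "hunger"
    priority: int
    lower: float   # lower acceptable threshold
    upper: float   # upper acceptable threshold

    def violated(self, value):
        """True when a Transitory State leaves its desirable range."""
        return not (self.lower <= value <= self.upper)

@dataclass
class PersonalModel:
    traits: dict = field(default_factory=dict)     # DC: personality traits
    moods: dict = field(default_factory=dict)      # TS: current mind state
    physical: dict = field(default_factory=dict)   # TS: current body state
    attitudes: dict = field(default_factory=dict)  # towards other entities
    concerns: list = field(default_factory=list)

zebra = PersonalModel(
    traits={"courage": 0.7},
    moods={"fear": 0.2},
    physical={"hunger": 0.9},
    concerns=[Concern("hunger", priority=2, lower=0.0, upper=0.8)],
)
# A violated concern is what would trigger reactive actuation:
print([c.name for c in zebra.concerns
       if c.violated(zebra.physical[c.name])])
```

In the architecture as described, the deliberative and social layers would adjust the `lower`/`upper` values to keep reactive actuation coherent with ongoing plans.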

The reactive layer responds immediately to events produced by changes in the environment, through Reflex Processing and Conscious Reaction Processing. The deliberative layer explores the (autonomous) abilities an IVA has in order to achieve its objectives through goal-oriented behavior.

Both the deliberative and the social layers base their operation on two central concepts: goals and plans. Goals represent the objectives the IVA intends to achieve and, thus, direct its behavior. Goals are characterized by (i) an objective situation pursued; (ii) the current state of the goal (planned, achieved, cancelled…); (iii) their importance; (iv) a time stamp (to check goal validity); and (v) an expiry time. Goals can be produced from two different perspectives: strictly according to the individual abilities of the IVA (Deliberative Goals Generator), or considering interactions with other IVAs, to make use of their abilities (Social Goals Generator). The Deliberative and Social Goals Generators save their goals in a Goal set, and the agent cyclically checks the validity of every active goal.

To achieve the generated goals, agents outline plans. These consist of an ordered set of actions to be executed and some particular parameters that help the Scheduler organize and combine the proposed plans. Once the Deliberative and/or the Social Planner have drafted a plan, it is added to the IVA's Plan set. From this moment on, a plan waits to be incorporated into the scheduler agenda, unless the IVA decides to eliminate it. Planners are also responsible for maintaining coherence between the deliberative/social processes and the reactive ones, by updating the concerns. Finally, the COGNITIVA Scheduler takes the actions proposed by the reactive, the deliberative and the social layers, scheduling them to be executed by the agent's effectors.
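A goal record carrying the five attributes listed above, together with the cyclic validity check over the Goal set, could be sketched as follows (the identifiers are our assumptions, not the architecture's):

```python
from dataclasses import dataclass

@dataclass
class Goal:
    objective: str                 # (i) the situation pursued
    state: str = "planned"         # (ii) planned, achieved, cancelled, ...
    importance: float = 0.5        # (iii)
    created_at: float = 0.0        # (iv) time stamp, for validity checks
    expiry: float = float("inf")   # (v) expiry time

    def valid(self, now):
        return self.state == "planned" and now - self.created_at < self.expiry

goals = [Goal("find_food", created_at=0.0, expiry=10.0),
         Goal("reach_shade", created_at=0.0, expiry=3.0)]
# Cyclic validity check over the Goal set, as described above:
active = [g.objective for g in goals if g.valid(now=5.0)]
print(active)  # the expired "reach_shade" goal is dropped
```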

3 Social Layer

The social layer has been designed to deal with the IVA's social capacity. It uses and updates the same information structures that are accessed by the other two layers. It receives the interpretation of the perceptions provided by the Interpreter, and provides the Scheduler with the selected actions that must be executed. Here, we will describe the new features that provide COGNITIVA with effective social capabilities.

3.1 Social Personal Model

Concerning the social layer, one of the agent's DC is the Roles Set, containing the services and abilities it possesses, can execute and can provide to other agents. This set contains the roles of the agent, i.e., the capacities, functions or tasks it can perform within the system it belongs to. Thus, we have added this new subset to the Beliefs set previously described.

In addition, agents may also know (though this is not always necessary) the set of roles present in the system. This knowledge about distributed roles, also part of the DC, can be included by the IVA's designer or be acquired throughout the agent's interactions with others. This information enables the IVA to make plans involving other agents' abilities. It is managed as a new DC structure called the Distributed Roles Set.
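The two role structures just described — the agent's own Roles Set and its (optional) Distributed Roles Set, learned through interaction — might be represented as follows; class and method names are our illustrative assumptions:

```python
class SocialBeliefs:
    def __init__(self, own_roles):
        self.roles_set = set(own_roles)  # what this agent can provide (DC)
        self.distributed_roles = {}      # role -> known provider agents (DC)

    def learn_provider(self, role, agent_id):
        """Acquired through interaction, as an alternative to designer setup."""
        self.distributed_roles.setdefault(role, set()).add(agent_id)

    def known_providers(self, role):
        return self.distributed_roles.get(role, set())

b = SocialBeliefs(own_roles={"sentinel"})
b.learn_provider("sentinel", "zebra_2")
print(sorted(b.known_providers("sentinel")))
```

With this structure, planning that involves other agents' abilities reduces to lookups in `distributed_roles`, as the next subsection describes.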


3.2 Goals and Plans at the Social Layer

In the deliberative layer, the knowledge needed by the IVA to generate plans to reach its goals is 'inside' it, i.e., it should have all the necessary resources to make its plans. However, in the context of the social layer it is possible that a goal is planned counting on the roles provided by other agents in the system, including the case in which it can only be reached through the participation of others.

We have extended COGNITIVA with two ways of obtaining a plan involving other agents' roles. The first one applies when the agent already knows the roles distributed in the system and, thus, can include them in its plans; the agent's Distributed Roles Set should contain roles that allow it to create at least one plan to reach the goal. We have called this direct planning. The second possibility is for the agent to ask another agent to generate a plan for it: indirect planning. This last approach enables agent learning and the aggregation of knowledge about other agents' roles for future interactions, allowing the IVA itself to generate similar plans in the future.

At both COGNITIVA's deliberative and social layers, it is foreseen that some unexpected situations may cause the re-planning of some of the agent's goals. In the social layer, this re-planning is more critical, given that the IVA may be too dependent on others to achieve its goals. When an agent re-plans a goal, it generates a new sequence of actions until the plan succeeds or it decides that the plan is not appropriate/possible anymore. The main reasons why an action that is part of one of an agent's plans needs to be re-planned are: (i) the provider agent of the failed action is not active/available at the moment; (ii) the requested action has less priority than the actions the provider agent is executing; and (iii) the action's expiry time has been reached.
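The two planning paths of this subsection — direct planning from the agent's own Distributed Roles Set, and indirect planning that also lets the requester learn newly seen roles — might be sketched as follows; all names and the plan format are illustrative assumptions:

```python
def plan(goal, needed_roles, distributed_roles, ask_other=None):
    """Return (mode, steps): steps is a list of (role, provider) pairs."""
    missing = [r for r in needed_roles if r not in distributed_roles]
    if not missing:
        # Direct planning: all roles are already known.
        return "direct", [(r, distributed_roles[r]) for r in needed_roles]
    if ask_other is None:
        return "failed", []
    # Indirect planning: another agent supplies the plan; the requester
    # aggregates the newly seen roles so it can plan directly next time.
    supplied = ask_other(goal)
    for role, provider in supplied:
        distributed_roles[role] = provider
    return "indirect", supplied

known = {"graze": "self"}
mode, steps = plan("stay_safe", ["graze", "sentinel"], known,
                   ask_other=lambda g: [("graze", "self"),
                                        ("sentinel", "zebra_2")])
print(mode, "sentinel" in known)  # the sentinel role is now known for reuse
```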
If the problem is a provider operation failure and the IVA cannot look for another provider, it cannot do much but wait for the provider to be active again. However, if the problem is priority or expiry time, we have included some possible approaches: (i) to increase the action priority every time the agent needs to re-request the action from the same provider; (ii) to force the priority of the action to be higher if there are few agents that can provide the role associated with it; (iii) to increase the action expiry time, if the agent perceives that it will need to wait for the time the –maybe unique– provider establishes. The strategy to be employed is context-dependent. In addition, it should be expected that the IVA may not reach some of its goals.

At the time of generating plans for its goals, the agent may not know which agents are able to provide it with the roles needed. However, the agent is able to look for an IVA that can be a provider of such roles. We have given the Social Reasoner (an existing structure in the previous version of COGNITIVA) the charge of locating the role provider. Although the choice of approach is context-dependent, the most common ones for this task are: (i) a Blackboard, where the agent that needs a role publishes its requests; (ii) Yellow Pages, where an agent announces the roles it can provide. Finding a role provider, however, does not guarantee its acceptance of collaboration.

As we have said, there is a strong influence of the IVA's emotional state on its behavior. Concerning the social layer, this influence can be even more noticeable, since agents may have more opportunity to choose among multiple actions and interactions, i.e., at the time the agent selects an interactive strategy, the personal model exerts a decisive pressure. As an example, if an IVA needs the execution of a specific role, it may be conservative and look for the same IVA that has previously provided it with


the role, or it can be more intrepid and decide to find new role providers. If the previous interaction with a provider was not satisfactory, an agent can decide to change its plans in order to avoid a new interaction with that provider. The evaluation of a provider agent will be described later.

Actions generated by the social layer always have lower priority than the reactive ones. However, there is no a priori priority difference between the deliberative and the social layer actions. Once again, personality traits and moods will influence priority definition, deciding whether the IVA executes a social or a deliberative action first.

3.3 Evaluating Social Interaction

The social layer allows an IVA to evaluate the other IVAs it interacts with. The more interaction a system promotes, the more an agent can evaluate others and, thus, delineate an attitude towards them. Although this seems to be a rich system mechanism, it must be optional, since we may face systems where evaluation is not important, or may even be useless and time consuming. At the current stage of our development, we are only considering direct evaluation between agents, i.e., evaluation from direct interaction between two agents.

One evaluation an IVA makes of others may be related to role confidence. This internal evaluation, although it may be interesting in some cases, may also be difficult to achieve, given that it presupposes that the IVA has a way of comparing the role provided with a pre-defined 'product quality value'. Another kind of evaluation that can be done is related to service quality: how a provider agent answers the agent's requests. The ultimate intention is to use this information to decide which provider the IVA should choose the next time it needs the same role execution.
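As an illustration, the kind of bookkeeping such a service-quality evaluation suggests might be sketched as follows, under the assumption that each interaction episode records the number of requests sent and the response delay:

```python
def service_quality(interactions):
    """interactions: list of (requests_sent, response_delay) per episode."""
    n = len(interactions)
    mean_requests = sum(r for r, _ in interactions) / n
    mean_delay = sum(d for _, d in interactions) / n
    return {"mean_requests": mean_requests, "mean_delay": mean_delay}

history = [(1, 2.0), (3, 5.0), (2, 2.0)]  # three past episodes with one provider
q = service_quality(history)
print(q["mean_requests"], q["mean_delay"])
```

The next time the role is needed, the IVA could, for instance, prefer the known provider with the lowest mean delay.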
We have defined some parameters that can be used to evaluate service quality: how many times a request has been sent before the role was provided, how long a provider has been offering a role to an agent, how much time is spent between a request and the provider's answer, and whether that time exceeds the mean time to service provision, to mention just some of them. Whatever evaluation is done, it will influence the agent's behavior and its interaction strategies within the system.

3.4 Social Contexts

The inclusion of the social layer makes it possible to apply the architecture to a great number of applications and, more importantly, to diversified contexts. The architecture can be used in applications requiring cooperation, information exchange and coordination and, also, in environments where cooperation gives rise to negotiation or competition. This diversity affects the contextualization of the architecture regarding the possible interactions among IVAs: (i) agents may always cooperate; (ii) agents may refuse to cooperate; (iii) agents may want to negotiate; (iv) there may be uncertainty about the behavior of the agent. Each of these aspects entails an increasing complexity, since the fewer the possibilities of executing the agent's plans successfully, the more intricate its behavior will be:

- If a provider agent always cooperates, it is more likely that a plan is concluded successfully, without re-planning demands;


- If there is a possibility that a provider agent does not cooperate, whether because it is busy, unavailable or for any other reason, the probability that a plan fails is high. This situation demands that the agent have enough flexibility or forecasting ability to elaborate alternative plans in case of failure;
- If the environment demands negotiation, the required flexibility could be even greater, since the agent will have to evaluate the benefits and losses it will have to manage during interaction;
- If there is complete uncertainty, the agent will have to be able to handle all the previous situations.

Up to now, we have concentrated on cooperative aspects, including the possibility that agents do not always cooperate and the IVA has to re-plan its goals. Once we have obtained proper results, we will extend the model to include negotiation and competition. As a summary, Figure 1 illustrates the internal representation of COGNITIVA, in which its layers and their intra-relations can be seen.

Fig. 1. Internal representation of COGNITIVA

4 Social Simulation

The application context in which we are testing the extension of COGNITIVA with the described social layer is a predation simulation environment. Predation is considered one of the most important selective pressures on wild animals, which must detect threats before they cause damage to themselves or to their offspring [23], [24], [25]. Before we offer a system description and the experiment parameters, we will present some theoretical background that motivated us to use this simulation environment.


To avoid being preyed upon, an animal (prey) must be capable of early predation detection. Usually, an animal may detect threats on its own (individual vigilance) or may depend on signals from its associates (collective detection). According to [25], individual vigilance is more reliable and more rapid in threat detection, but it conflicts with other important activities, such as feeding, resting, grooming, fighting, mating, etc. [23], [24]. Finding a way to manage this conflict is one of the reasons why many animal species aggregate. [25], [26] and [27] argue that the probability of being killed by a predator decreases with increasing group size, since associates can reveal that they have detected a predator through escape behavior or by emitting recognizable alarms.

According to [25], a great part of aggregation studies relates to anti-predator vigilance which, in turn, emphasizes predator avoidance over other scanning functions, such as food search, movement planning (e.g., to survey escape routes) and social learning, because predation risk is said to be one of the most important factors shaping animal vigilance. Much research concerning vigilance centers on the conflict between feeding and vigilance, because many animals need to lower their heads to eat, thereby drastically narrowing their visual field. Even those that can feed in an upright position or eat using their hands may still need to concentrate their attention on finding, harvesting, or processing their food [25]. All these arguments emphasize the importance that vigilance has among animal activities.

For our simulation purposes, we are going to concentrate on collective vigilance. Collective vigilance lies in the alternation of the individual animals' vigilance role to benefit the group, especially to resolve conflicts between vigilance and other essential activities.
Based on many previous works, [25] affirms that "in cooperative groups, individuals may take specialized roles as sentinels […] their associates then monitor the sentinels, rather than surveying the surroundings themselves". This kind of cooperation is usually called "reciprocal altruism", in which one organism provides a benefit to another in the expectation of future reciprocation, and whose ultimate objective is species preservation. Within a group, collective vigilance can diminish individual monitoring, but it demands that animals evaluate and estimate the number and kind of activities of their associates before abandoning monitoring to dedicate themselves to another activity. This kind of interaction between agents is part of an associate's social learning.

Ethological references indicate that there are many factors that could influence vigilance behavior. The best known are: age (closely related to previous experience), sex, reproductive status, and social rank or special female status, such as that of mothers with infants. Another factor is related to associates' monitoring (within-group vigilance) to avoid food theft or conspecific threat, although these mechanisms are still quite poorly understood. At this stage of our work, we have not included such factors yet, mostly because they could add unknown complexity and cause variations that are hard to interpret. Food theft, for example, is clearly a competition aspect that we do not want to merge, at this point of our research, with cooperation analysis. We expect that these factors can be included in subsequent experiments, once some conclusions have been drawn and some parameters have been controlled.

Having eliminated all the variables that could make it difficult to analyze the outcomes, we are currently working on our simulation. The experiment consists of the interaction between a group of virtual animals, in this case zebras, which can eat,


rest, wander and, more importantly from the social perspective, act as sentinels. The zebras' main objective is early predator detection through environment scanning. In addition, each zebra's individual goal is to maintain its energy reserves at a desired level, as well as its emotional parameters. As we could learn from theory, the most important emotion in the context of our experiment is fear, which is crucial to drive zebras to look for a secure state.

Thus, the purpose of the experiment is to find out whether collective vigilance can emerge from the (indirect) variation of emotional parameters (fear) and cooperative traits in virtual animals, i.e., whether it is possible to simulate that at least one of them will assume a sentinel role at any given moment, without an explicit definition of a minimal number of sentinels within the system. To achieve this, we have considered the arguments of [28] that an animal will decide whether to assume a sentinel role based on two variables: (i) its energy reserves: although vigilance is crucial for survival, a high level of hunger may switch the animal's priorities, so, if the hunger level is above an upper bound value, an animal will not act as a sentinel, preferring to look for, find and process food; (ii) its associates' actions, i.e., what the others are doing: knowing whether any of them is already monitoring predators is decisive in determining an animal's own action.

These two variables are intertwined: an animal assumes a sentinel role if its energy reserves are greater than a lower threshold that, in turn, varies depending on its associates' current roles. In our simulation, this relation is defined as a variation of the physical parameter's (hunger) threshold related to the perceived insecurity; each animal might have different threshold values. We have also defined that lack of vigilance increases insecurity, and thus the IVA's fear, which influences the animal's decision to assume or not a sentinel role.
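The intertwined decision just described could be sketched as follows: energy reserves are compared against a threshold that drops as perceived insecurity (fear) rises, so a hungrier animal still stands guard when nobody else is watching. All constants, names and the exact functional form are our illustrative assumptions, not the simulation's actual parameters:

```python
def perceived_insecurity(associates_roles, base_fear):
    """Lack of vigilance in the group increases insecurity, and thus fear."""
    surcharge = 0.0 if "sentinel" in associates_roles else 0.5
    return min(1.0, base_fear + surcharge)

def assumes_sentinel(energy, associates_roles, base_fear,
                     base_threshold=0.6, hunger_cap=0.9):
    if 1.0 - energy > hunger_cap:   # too hungry: feeding wins over vigilance
        return False
    fear = perceived_insecurity(associates_roles, base_fear)
    threshold = base_threshold * (1.0 - fear)   # fear lowers the energy bar
    return energy > threshold

# Nobody on guard: a moderately fed zebra takes the role...
print(assumes_sentinel(0.4, ["graze", "rest"], base_fear=0.2))      # True
# ...but not when an associate is already keeping watch.
print(assumes_sentinel(0.4, ["sentinel", "graze"], base_fear=0.2))  # False
```

Giving each animal its own `base_threshold` would reproduce the per-animal threshold differences mentioned above.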
In addition, we will vary the animals' time interval for scanning associates (to find out whether any of them is keeping watch). To obtain this result, we have made some assumptions necessary to block undesired behavior. We assume that: (i) there will always be enough food for the animals and they will not fight for it, the purpose being to prevent conspecific competition or within-group vigilance; (ii) animals can be constantly eating or resting (their preferred activities), our objective being to prevent changes in an animal's role (becoming a sentinel) from being based on the IVA's free time. Thus, an animal monitors predator threats because it decides to, not because there is nothing better to do.

One possible evaluation approach in our virtual scenario is to allow an IVA to evaluate the amount of time each virtual animal has spent as a sentinel, compared to other agents' time. A low value may mean, for example, that an animal is not as cooperative as the others, that it is not so good at playing the sentinel role, or that it is thoughtless about its importance. In this first stage of the evaluation of our proposal, we have not included evaluation among animals. Although we have included differences in the virtual animals' personality traits that may cause more or less cooperation, the animals are not aware of them and assume that all cooperate equally.

Some data we are collecting and analyzing are: (i) the number of times each animal has played the role of sentinel; (ii) the time each one has dedicated to this role; (iii) the reasons leading animals to decide to abandon the sentinel role; (iv) the number of sentinels along the experiment; (v) how aggregated the agents were; (vi) the number of simultaneous sentinels along the interactions. Besides evaluating our objective, we expect that the simulation will allow us to evaluate some other results, such as whether

Modeling Emotion-Influenced Social Behavior for Intelligent Virtual Agents

379

IVAs configured as more cooperative have dedicated more time to acting as sentinels than the others, or whether the agents have a tendency to aggregate.
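Two of the measures listed above — how many times each animal played sentinel, and the total time dedicated to the role — can be tallied from a simple interaction log; the `(animal, seconds_as_sentinel)` record format is our assumption:

```python
from collections import Counter

log = [("z1", 30), ("z2", 10), ("z1", 20), ("z3", 5)]

times_played = Counter(animal for animal, _ in log)  # measure (i)
total_time = Counter()                               # measure (ii)
for animal, secs in log:
    total_time[animal] += secs

print(times_played["z1"], total_time["z1"])
```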

5 Final Comments and Ongoing Work

Nowadays, there are few emotion-based architectures for controlling IVAs' behavior that consider their social dimension. The results COGNITIVA has attained in managing individual behavior, and its suitability for incorporating social abilities, are a good incentive to extend it with a social layer, contributing to fill that gap. The new social dimension of COGNITIVA will be tested through the evaluation that we are currently developing, described above. We expect that the evaluation will allow us to draw conclusions about the influence that COGNITIVA's emotional parameters have on individual behavior, as well as on the general system behavior.

As future work, we intend to include a number of parameters mentioned above but not yet included in the system, mainly those related to competition and negotiation between agents. We also plan to use COGNITIVA to simulate other social environments that were previously simulated in a non-emotional agent-oriented architecture. We believe that this will be useful not only to compare architecture performance, but also as extra validation for our research.

Acknowledgments. João Queiroz is sponsored by FAPESB/CNPq. Jackeline Spinola and João Queiroz would like to thank the Brazilian National Research Council (CNPq) and The State of Bahia Research Foundation (FAPESB).

References

1. Damásio, A.R.: Emotion and the Human Brain. Annals of the New York Academy of Sciences 935, 101–106 (2001)
2. Freitas, J.S., Gudwin, R.R., Queiroz, J.: Emotion in Artificial Intelligence and Artificial Life Research: Facing Problems. In: Panayiotopoulos, T., Gratch, J., Aylett, R., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661, Springer, Heidelberg (2005)
3. McCauley, T.L., Franklin, S.: An architecture for emotion. In: Proceedings of the 1998 AAAI Fall Symposium Emotional and Intelligent: The Tangled Knot of Cognition, pp. 122–127. AAAI Press, CA (1998)
4. Ray, P., Toleman, M., Lukose, D.: Could emotions be the key to real artificial intelligence? ISA 2000 Intelligent Systems and Applications, University of Wollongong, Australia (2001)
5. Cañamero, D.: Modeling Motivations and Emotions as a Basis for Intelligent Behavior. In: First Conference on Autonomous Agents, pp. 148–155. ACM, California (1997)
6. Damásio, A.R., Grabowski, T., Bechara, A., Damásio, H., Ponto, L.L., Parvizi, J., Hichwa, R.D.: Subcortical and cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience 3(10), 1049–1056 (2000)
7. Ledoux, J.: The emotional brain: the mysterious underpinnings of emotional life. Touchstone, New York (1996)
8. Nesse, R.M.: Computer emotions and mental software. Social Neuroscience Bulletin 7(2), 36–37 (1994)


9. Imbert, R.: Una Arquitectura Cognitiva Multinivel para Agentes con Comportamiento Influido por Características Individuales y Emociones, Propias y de Otros Agentes. Ph.D. Thesis, Computer Science School, Universidad Politécnica de Madrid (2005)
10. Imbert, R., de Antonio, A.: When Emotion Does Not Mean Loss of Control. In: Panayiotopoulos, T., Gratch, J., Aylett, R., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661, pp. 152–165. Springer, Heidelberg (2005)
11. Imbert, R., de Antonio, A.: An Emotional Architecture for Virtual Characters. In: Subsol, G. (ed.) ICVS 2005. LNCS, vol. 3805, pp. 63–72. Springer, Heidelberg (2005)
12. Keltner, D., Kring, A.M.: Emotion, Social Function, and Psychopathology. Review of General Psychology 2(3), 320–342 (1998)
13. Averill, J.R.: A constructivist view of emotion. In: Plutchik, R., Kellerman, H. (eds.), pp. 305–339. Academic Press, New York (1980)
14. Campos, J., Campos, R.G., Barrett, K.: Emergent themes in the study of emotional development and emotion regulation. Developmental Psychology 25, 394–402 (1989)
15. Ekman, P.: An argument for basic emotions. Cognition and Emotion 6, 169–200 (1992)
16. Lazarus, R.S.: Emotion and Adaptation. Oxford University Press, New York (1991)
17. Frijda, N.H., Mesquita, B.: The social roles and functions of emotions. In: Kitayama, S., Marcus, H. (eds.) Emotion and Culture: Empirical Studies of Mutual Influence, pp. 51–87. American Psychological Association, Washington (1994)
18. Öhman, A.: Face the beast and fear the face: Animal and social fears as prototypes for evolutionary analysis of emotion. Psychophysiology 23, 123–145 (1986)
19. Clark, C.: Emotions and the micropolitics in everyday life: Some patterns and paradoxes of Place. In: Kemper, T.D. (ed.), pp. 305–334. New York Press, Albany (1990)
20. Galvão, A.M., Barros, F.A., Neves, A.M.M., Ramalho, G.: Persona-AIML: An Architecture for Developing Chatterbots with Personality. In: Jennings, N.R., Sierra, C., Sonenberg, L., Tambe, M. (eds.) Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems, pp. 1264–1265. ACM Press, New York (2004)
21. Charles, F., Cavazza, M.: Exploring Scalability of Character-Based Storytelling. In: Jennings, N.R., Sierra, C., Sonenberg, L., Tambe, M. (eds.) Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems, pp. 870–877. ACM Press, New York (2004)
22. Delgado-Mata, C., Aylett, R.: Emotion and Action Selection: Regulating the Collective Behaviour of Agents in Virtual Environments. In: Jennings, N.R., Sierra, C., Sonenberg, L., Tambe, M. (eds.) Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems, pp. 1302–1303. ACM Press, New York (2004)
23. Dimond, S., Lazarus, J.: The problem of vigilance in animal life. Brain, Behavior and Evolution 9, 60–79 (1974)
24. Mooring, M.S., Hart, B.L.: Costs of allogrooming in impala: distraction from vigilance. Animal Behaviour 49, 1414–1416 (1995)
25. Treves, A.: Theory and method in studies of vigilance and aggregation. Animal Behaviour 60(6), 711–722 (2000)
26. Bednekoff, P.A., Lima, S.L.: Randomness, chaos and confusion in the study of antipredator vigilance. Trends in Ecology and Evolution 13, 284–287 (1998)
27. Dehn, M.M.: Vigilance for predators: detection and dilution effects. Behavioral Ecology and Sociobiology 26, 337–342 (1990)
28. Bednekoff, P.A.: Coordination of safe, selfish sentinels based on mutual benefits. Annales Zoologici Fennici 38, 5–14 (2001)
