Artificial Emotions: Are We Ready for Them?

Jackeline Spinola de Freitas and João Queiroz
Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, State University of Campinas, P.O. Box 6101 – 13083-852, Campinas, SP, Brazil
{jspinola, queirozj}@dca.fee.unicamp.br
Abstract. Recent research in psychology and cognitive neuroscience is increasingly showing how emotion plays a crucial role in cognitive processes. Gradually, this knowledge is being used in the Artificial Intelligence and Artificial Life areas for simulation and for the modeling of cognitive processes. However, these areas still lack a theoretical framework for dealing with emotion. In addition, emotion-based computational projects mostly neglect controversial questions concerning the nature, function, and mechanisms of emotions that must be considered. The objective of this article is to discuss some of these problems and to present references that can be useful in their solution.

Key words: Emotion, Artificial Intelligence, Computational Simulation.
1 Introduction

Recent findings in neuroscience regarding the mechanisms, functions, and nature of emotions ([1], [2], [3], [4], [5], [6], [7], [8], [9]) have been attracting the attention of researchers in Computer Science and Artificial Intelligence (AI). It is believed that emotions play a significant role in diverse cognitive processes and are essential for problem solving and decision making. Despite Darwin's indication in [10] that emotions are phenomena important for survival, only recently has the association of emotion with reason and logical behavior in human beings been revisited [9], [11], [12]. Currently, AI and Artificial Life draw on the aspects of emotion that are crucial for modeling perception, learning, decision processes, memory, and other functions.

In Computer Science, two branches of research exhibit interest in emotion. The first, Human-Computer Interaction (HCI), concentrates on the interactions between the (human) user and machines, and considers possible optimizations of this relationship. The main objective pursued by its researchers is the development of engineering tools to measure, model, and provide responses to human emotions through sensors, algorithms, and hardware devices. Affective Computing, a term coined in [13], is used to classify projects in this category. The most notable authors in HCI are [13], [14], [15], [16] and the MIT Media Lab research group [48]. Some of their projects include the Socially Intelligent Character, Embodied Conversational Agents, and Story Telling Systems. An example of a successful commercial project is Sony AIBO (http://www.sony.net/Products/aibo).

F. Almeida e Costa et al. (Eds.): ECAL 2007, LNAI 4648, pp. 223 – 232, 2007. © Springer-Verlag Berlin Heidelberg 2007
The second branch of research involves Intelligent Agent systems whose internal architectures are based on emotions (emotion-based systems). These systems model biologically inspired processes described and studied by neuroscientists [11], [17], [18], [19], [20], [21], [22], [23]. Their main objective is to emulate emotional processes in agents' behavior. In general, emotion-based projects aim to improve system performance in decision making, action selection, behavior control, trustworthiness, and autonomy.

Since both of the above-mentioned branches of research are new, their projects still confront basic problems. We are particularly interested in the second branch, and we intend to discuss some of the problems it faces. Looking at the more developed emotion-based computational system projects, it can be said that the construction of such systems is far from trivial: in AI, approaching the concept of emotion is as problematic and complex as approaching the concept of life in Artificial Life. We have noticed that the problems can be clustered into two groups. First, the field lacks a theoretical framework for dealing with Artificial Emotion (AE). This is a known limitation, and many authors have suggested ways of dealing with it (e.g., [6], [12], [17], [24], [25], [26], [27], [28], [29], [30]). However, few of these proposals have proved reliable enough to be widely followed. As a direct consequence of the lack of an appropriate framework to model emotion, comparisons are also lacking, not only between distinct projects but also within the same project, to contrast results obtained from emotion-based and non-emotion-based experiments. Second, a close look at project reports yields a non-exhaustive list of important questions that must be faced to obtain more trustworthy results.
To present these problems, and also to indicate possible solutions, this work presents what we consider a significant list of the most important theoretical references on the subjects discussed. Section 2 emphasizes the lack of an adopted framework, suggests a possible approach, and criticizes the lack of comparisons between projects. Section 3 raises basic questions that current projects do not tackle and presents possible directions to address them. Section 4 is dedicated to final comments.
2 Framework for Emotion-Based Projects

The difficulty of elaborating a framework to deal with AE means that, so far, no approach has proved notably distinct from, or superior to, the others. Perhaps this difficulty is associated with the current lack of information about emotion and its relation to other subsystems. As indicated in [22], one can ask whether emotions in artificial systems have a potential role in action selection (what the agent should do, based on its emotional state), adaptation (behavior alterations), social regulation (communication, exchange of information), sensory integration, alarm mechanisms (fast, similar reactions to situations critical for the agent), motivation, goal management (creation of new goals or changes of priorities), learning, focus of attention (selection of the data to be processed, based on emotional evaluation), memory control, strategic processing, and the agent's internal model (what the emotional state represents to the agent).
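To make three of these roles concrete (alarm, focus of attention, and action selection), consider the following minimal sketch. All names, thresholds and update rules are our own illustrative assumptions, not taken from [22] or any cited architecture.

```python
# Rough sketch of an emotional state biasing an agent's processing.
# Everything here (variables, thresholds, decay rates) is hypothetical.

class EmotionState:
    def __init__(self):
        self.fear = 0.0       # alarm variable, raised by threats
        self.curiosity = 0.5  # drives exploration when nothing is urgent

    def appraise(self, percept):
        # Alarm mechanism: a threat raises fear sharply; otherwise it decays.
        if percept == "threat":
            self.fear = min(1.0, self.fear + 0.8)
        else:
            self.fear *= 0.9

def select_action(state, percepts):
    # Focus of attention: under high fear, only threat-related percepts
    # are processed at all.
    attended = [p for p in percepts if state.fear < 0.7 or p == "threat"]
    for p in attended:
        state.appraise(p)
    # Action selection biased by the emotional state.
    if state.fear > 0.5:
        return "flee"
    if state.curiosity > 0.3:
        return "explore"
    return "idle"

agent = EmotionState()
print(select_action(agent, ["food"]))            # explore: no threat seen
print(select_action(agent, ["threat", "food"]))  # flee: fear now dominates
```

The point of the sketch is only that one emotional variable can simultaneously serve several of the roles listed above, which is what makes a unified framework hard to design.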
The elaboration of a model capable of controlling all these functions simultaneously can be an extremely complex task for projects currently under development. Suggestions have been proposed by [5], [6], [24], [25] and [30], based on the hypothesis that emotions have adaptive purposes. All of them agree that it is necessary to follow a functional view of the emotions present in natural systems, and it is on this view that we will concentrate.

[24] proposes a functional view of the role of emotions and its implications for the design of an emotional intelligent agent. The author believes that (i) emotion has an adaptive purpose and can contribute to improving the agent's survival capacity, and (ii) AE models must be based on abstractions of emotion functions and must be used to generate agent behavior. Among the functions that can be attributed to "emotions" and that allow computational implementation, the author mentions: (i) assigning relevance to events according to system objectives, (ii) detecting difficulties that impede reaching these objectives, (iii) defining priorities and allocating resources according to goals, and (iv) allowing parallel processing that safeguards the system's objectives. For [24], models that establish a linkage between emotion, motivation and behavior, provided by a synthetic physiological structure, can support conclusions such as those found in the experiment in [6]. There it can be observed that an agent presents either goal-oriented or opportunistic behavior when variables associated with (functional) physiological structures are used as regulatory mechanisms of its behavior.

One notable previous study [30] suggests that the understanding of brain structures and of the neural mechanisms involved in neuromodulation (http://www.neuromodulation.com/), present in the emotions of humans and other animals, has led researchers ([1], [2], [3], [31]) to believe that it is possible to "abstract from biology a functional characterization of emotion" ([30]).
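The regulatory mechanism reported for the experiment in [6] can be sketched minimally as follows. The variable names, threshold and behaviors are our own illustrative assumptions, not the actual model of [6] or [24].

```python
# Sketch: a synthetic "physiological" variable switches the agent between
# goal-oriented and opportunistic behavior. All details are hypothetical.

def choose_behavior(energy, nearby_resources, goal="return_to_nest"):
    """Pursue the current goal unless a physiological deficit makes an
    opportunistic action more urgent."""
    hunger = 1.0 - energy  # deficit of the homeostatic variable
    if hunger > 0.6 and "food" in nearby_resources:
        return "eat"  # opportunistic: exploit what is available right now
    return goal       # goal-oriented: keep pursuing the objective

print(choose_behavior(energy=0.9, nearby_resources=["food"]))  # return_to_nest
print(choose_behavior(energy=0.2, nearby_resources=["food"]))  # eat
```

Even this toy version exhibits the qualitative switch described above: the same agent, with the same goal, behaves opportunistically only when the regulatory variable demands it.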
The authors of [30] affirm that the interactions between the amygdala and the prefrontal cortex, and their influence on emotion generation, are already well known, but they admit that how computational systems can "take advantage of it remains an open question" ([30]). In the functional view of [25], emotions must be understood as dynamic patterns of neuromodulation rather than as patterns of neural activity, as currently considered. Compared with the more classic viewpoint, centered on neural circuits [1], [2], [3], this is a distinct approach to characterizing the origin of emotions and their impact on behavior and cognitive processes. Based on the analysis of motivation and emotion in [32], [30] propose that the dynamics of neuromodulator systems serve as inspiration for the functional structure of emotion-based architectures. Even though [30] emphasize that "emotions are, of course, far more complex than a few brain structures" and the interactions of neuromodulator systems, the authors believe that if systems whose architectures contained such functional characteristics were used in computational simulations, it would be possible to modify their parameters to generate behavior analogous to that generated in an emotional state.

A previous study [25] provides a list of functional properties of emotion, such as changes in the autonomic and endocrine systems, the drive of motivated behavior, communication, social interaction, improvement of survival capacity, and better memory storage and retrieval. Although the author proposes a functional vision, his study also lists a few functions of emotions that "have natural robotics counterparts" [25]. The researcher affirms that it is possible to endow a robot
or a computational system with characteristics that can be functionally related to emotions, which makes the elaboration of a project or experiment much easier. That is the first hurdle in the challenge of including emotion-associated functions, useful to the functioning and performance of the system, in computational models. In any given project, establishing a restriction on the communication between two interfering agents in order to obtain a desired emotional behavior from the system could be an undesirable limitation. In such a case, it would be better not to apply the restriction, even though the system's behavior may then seem unnatural.

Although complex, the functional framework seems computationally implementable, is similar to natural systems, and can produce convincing emotion-based projects. As examples, we can mention projects that abstract physiological components of animals, such as hormone levels: [21], [26] and [33]. These authors state, however, that they are not concerned with the (bio-inspired) plausibility of these systems. Likewise, in [22] the author claims no "biological plausibility of the employed states or mechanisms" in the emotional control of system processes. This can be seen as an initial approach to the concept of emotion, and it can help to test theories about psychological processes [12]. Certainly, some projects (e.g. [22]) observe the phenomenon, comparing it with what could be classified as emotion. Even though the use of concepts and knowledge related to emotion does not amount to endowing a computational system with emotional characteristics, we believe it is necessary to maintain some plausibility, so that it is not just a programmer working metaphorically, attributing concepts related to human emotion to internal computational variables [34].
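The hormone-level abstraction mentioned above can be sketched as a single scalar, loosely analogous to a hormone or neuromodulator concentration, that acts as a global parameter shifting the same architecture between cautious and exploratory behavior. Its dynamics and names are illustrative assumptions, not taken from [21], [26], [30] or [33].

```python
# Sketch: one abstract "hormone" level globally modulates behavior.
# The update rule and the stress/calm stimuli are hypothetical.

class HormoneModulatedAgent:
    def __init__(self, level=0.5):
        self.level = level  # abstract "hormone" concentration in [0, 1]

    def update(self, stimulus):
        # Stressful stimuli raise the level; otherwise it decays back.
        if stimulus == "stress":
            self.level = min(1.0, self.level + 0.4)
        else:
            self.level *= 0.9

    def exploration_rate(self):
        # High level -> cautious (little exploration); low level -> curious.
        return 1.0 - self.level

agent = HormoneModulatedAgent()
agent.update("stress")
print(round(agent.exploration_rate(), 2))  # 0.1: cautious right after stress
agent.update("calm")
agent.update("calm")
print(round(agent.exploration_rate(), 2))  # caution fades as the level decays
```

Whether such a variable deserves the label "hormone" at all is exactly the plausibility question raised above: nothing in the code grounds the metaphor beyond the programmer's choice of names.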
In fact, some projects deserve severe criticism for the terms they use, often assigning emotion names to system parameters without any abstraction from natural systems. As [24] and [35] point out, it is difficult to draw the line between 'genuinely emergent' behavior and behavior conferred by an observer's tendency toward anthropomorphism.

It is possible that the lack of an appropriate framework to model emotion is the reason why we scarcely see comparisons between distinct projects or within the same project. A systematic analysis of projects is necessary for progress in artificial emotion research [24]. Interactions between experiments could be useful to compare and discuss different architectures and, eventually, to benefit projects in progress, generating more expressive results in less time. They can also provide insights into the way research is being conducted and into the use of different approaches. Comparisons between emotion-based and non-emotion-based architectures, under replicated experimental protocols, can be an efficient way to validate conclusions.
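Such a replicated comparison might be organized as in the following sketch: an emotion-based policy and a non-emotional baseline face identical trials (same random seeds), and a common metric is compared. The task, policies and scoring are placeholders we invented for illustration.

```python
import random

# Sketch of a replicated experimental protocol for comparing an
# emotion-based agent against a baseline. Everything is hypothetical.

def run_trial(policy, seed, steps=100):
    rng = random.Random(seed)  # same seed -> same world for both agents
    score = 0
    for _ in range(steps):
        threat = rng.random() < 0.2
        action = policy(threat)
        if threat and action == "flee":
            score += 1  # survived a dangerous step
        elif not threat and action == "forage":
            score += 1  # gathered food on a safe step
    return score

def emotion_based(threat):
    return "flee" if threat else "forage"  # fear-triggered avoidance

def baseline(threat):
    return "forage"  # same task, minus the emotional mechanism

seeds = range(10)
emo = sum(run_trial(emotion_based, s) for s in seeds) / len(seeds)
base = sum(run_trial(baseline, s) for s in seeds) / len(seeds)
print(emo, base)
```

Because both conditions see exactly the same sequence of events, any difference in the metric is attributable to the emotional mechanism rather than to environmental noise, which is the point of replicating the protocol.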
3 Some Questions for Emotion-Based Computational Systems and Possible Ways to Address Them

As this is a new research area, there are many more questions than solved problems, a fact that can be seen as an opportunity for new proposals. To answer them, as [34] suggests, we must follow the efficient tradition of basic scientific research and return to the premises and initial principles in order to better understand the subject. Finally, the development of computational implementations can help to solve these questions or to supply new ideas for them. The questions proposed for emotion-based computational systems can be grouped into two types, related to (i) theoretical-conceptual problems and (ii) computational problems.

To begin with, the scientific community has no consensual definition of emotion. Questions related to the origins and functions of emotions and to their relation with other affective processes (motivation, mood, attitudes, values, temperament, etc.) also seem far from consensual. These facts suggest that our limited understanding of the mechanisms involved in the emotion phenomenon may limit the development of emotion-based systems. One way to surpass this limitation is strictly functional: instead of asking "what emotions are", we must concentrate on "what emotions are for" [25].

For emotion-based architectures and projects, a list of important questions must include: how many, and which, emotions should be selected? Is a feasible model possible that considers the co-occurrence of artificial emotions? On 'how many' and 'which' emotions to select there is no consensus. Four 'basic emotions' are used in many systems: happiness, sadness, anger and fear (see [11], [21]). According to [36], this number must be larger (15): happiness, sadness, anger, boredom, negation, hope, fear, interest, contempt, disgust, frustration, surprise, pride, shame and guilt. [7] believes that eight emotions must be used, classified by the author as "primary emotions": happiness, sadness, acceptance, anger, fear, disgust, anticipation and surprise. By contrast, [37] argue that, from a theoretical point of view, it is a fallacy to equate affective states with a predetermined number of basic emotions. Strengthening this argument, Paul Ekman, who coined the term "basic emotions", later admitted that there are no non-basic emotions [38]. Due to the need for a complex control system, some projects (e.g. [22], [39]) seem inclined to select only one or two emotions.
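One common way to accommodate the co-occurrence question above is to represent the chosen emotions not as a single label but as a vector of intensities, so that several can be active at once. The four "basic emotions" below follow the set used in [11] and [21]; the appraisal table itself is an illustrative assumption.

```python
# Sketch: co-occurring artificial emotions as a vector of intensities.
# The event -> emotion mapping is hypothetical.

def appraise(event, intensities):
    # One event may raise several emotions simultaneously (co-occurrence).
    effects = {
        "goal_achieved": {"happiness": 0.8},
        "goal_blocked": {"anger": 0.6, "sadness": 0.3},
        "threat": {"fear": 0.9, "anger": 0.2},
    }
    for name, delta in effects.get(event, {}).items():
        intensities[name] = min(1.0, intensities[name] + delta)
    return intensities

state = {"happiness": 0.0, "sadness": 0.0, "anger": 0.0, "fear": 0.0}
state = appraise("goal_blocked", state)
print(state)  # anger and sadness are non-zero at the same time
```

A vector representation sidesteps the "which single emotion is the agent in?" question, though it still presupposes a fixed list of basic emotions, which [37] and [38] dispute.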
The most parsimonious suggestion is found in [24]: "Do not put more emotion in your system than what is required by the complexity of the system-environment interaction".

What is the relation between emotion and other subsystems? How can it be integrated with other mechanisms (sensory, learning, selection, reaction and communication)? [20] question the lack of integration between emotions and other systems (cognitive, language, memory, etc.) and note how it harms the attainment of better global results. [24] affirms that, given the possibility of interaction between emotional mechanisms and diverse cognitive subsystems, such as the physiological, the cognitive-evaluative and the communicative-expressive, an interesting solution to improve agent performance is possible. However, the author questions the need for such complexity and the current plausibility of its implementation. According to recent research in psychology [40] and neuroscience [2], [3] and [9], emotions are processes that control cognition and action and manage our mental models. On this basis, one could ask whether their absence compromises cognitive abilities in autonomous agents. For [25] it is clear that "emotions have co-evolved with perceptual, cognitive and motor abilities" and affect "all levels of functions, from low-level motor control to planning and high-level cognition". In this sense, comparisons between emotion-based projects and non-emotional ones can supply us with information about the connections established between emotion and cognition. They also serve to verify whether or not the inclusion of emotion improves cognitive abilities in computational systems.

Some questions that might be especially interesting in Artificial Life projects are associated with emergent phenomena [22], [24], [41]: can Artificial Emotion be an
emergent property? If so, how can architecture design influence the emergence of complex actions in emotion-based agents? [24] affirms that it is possible for an emotion to emerge from an agent, and that this is a feasible way to investigate the role emotions play in agent-environment interaction at distinct levels of complexity. The author suggests that, to prevent problems, some functional equivalence between the agent's characteristics and its environment must be preserved. In this context, the problems are related to the tendency toward anthropomorphism; indeed, since an artificial system can represent existing natural models, it can be difficult to state "why and when emotional behavior arises" [24]. Besides, one can question whether the system will provide enough mechanisms to explain and justify a supposed emergence of emotional behavior [42].

[23] and [25] affirm that typical explanations for the function of emotion are based on the flexibility of the agent's behavioral response to its environment. [5] defines the core of an emotion as the disposition to act in a pre-defined way. These theories motivate the use of behavior as the phenomenon through which emotion is measured. In fact, due to the lack of formal theories describing the non-observable subjective process of emotion [21], or of intuitive parameters [42], many experiments [23], [26], [43] identify emotion through whatever observable behavior results.

Probably one of the most frequently investigated questions is: do emotional processes need to be related to an embodied entity? The motivation to answer it may have appeared with a recent change in the traditional view that intelligence is an abstract process that could be studied without taking into consideration the physical aspects of natural systems [44]. A great part of current research holds that, to obtain intelligent behavior, an agent must be situated in the environment and interact with it.
Besides being important for intelligence, in light of [2], [3] and [9], the body is essential for emotion. [41] defends the contrary argument when the question involves a computational apparatus. As an intermediate viewpoint, [18] believes that "emotion systems involved in feedback control of situated agents may serve to provide the grounding for embodied agents in the body/environment coupling".

Other questions can be identified in relation to computational problems. Some may derive from the wrong way we program agent systems. Concerning system architecture, one can ask: what types of data structures and computational mechanisms might be used to capture and represent the complexity of emotion processes? Which architecture models are best for comparing agent performance? The available programming languages and program syntax, with algorithms conceived by a designer, in most cases limit or preclude any kind of code evolution. They also prevent the agent from autonomously developing parts of its architecture during interactions with the environment. [45] affirms that it is fundamental to surpass the challenge of identifying appropriate methods of information codification to produce a process of incremental growth. And, in order to obtain incontestable results, what type of experimental test allows the best exploration of emotion-based models? In particular, we feel that something is missing from the computational tools available to represent the emotion phenomenon. Until new advances in algorithms or computational tools arrive, we must concentrate on modeling emotion so as not to lose its most relevant characteristics, while not producing a scenario so complex that it does not allow a computational representation of the phenomenon.
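As one minimal answer to the data-structure question above, an emotion episode could be made explicit as a record with an eliciting event, an intensity that decays, and an action tendency in the sense of [5]. The field names and dynamics are our own assumptions, offered only as a starting point.

```python
from dataclasses import dataclass

# Sketch: a minimal, explicit data structure for an emotion episode.

@dataclass
class EmotionEpisode:
    eliciting_event: str   # what triggered the episode
    intensity: float       # current strength in [0, 1]
    decay_rate: float      # how fast the episode fades per time step
    action_tendency: str   # disposition to act in a pre-defined way ([5])

    def step(self):
        # Unless re-elicited, the episode loses intensity over time.
        self.intensity = max(0.0, self.intensity - self.decay_rate)

fear = EmotionEpisode("loud_noise", intensity=0.9,
                      decay_rate=0.2, action_tendency="flee")
fear.step()
print(round(fear.intensity, 2))  # 0.7 after one step of decay
```

Such a record is clearly far too simple to capture the complexity discussed above, but it makes every modeling commitment (what elicits the emotion, how it fades, what it disposes the agent to do) visible and therefore comparable across projects.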
As we said at the beginning, this section shows a non-exhaustive list of problems faced by emotion-based system projects. Answering them is a complicated task, since it requires concepts that are not yet entirely understood and theories that are not well established [24], [46]. The notable complexity of a system designed to handle such interdisciplinary problems, including the computational parameters necessary to control their multiple factors, is probably one of the reasons why AE research has seen little development compared with other equally novel areas. Many of these questions are probably raised at the beginning of project development but, curiously, are not the focus of publications in the AI and Artificial Life areas. This difficulty may be one of the reasons why we rarely see references to previous approaches in experimental projects. Certainly, tough questions require a broad and deep multidisciplinary background, or a research group that might include psychologists, ethologists, neuroscientists, computer scientists, software engineers and philosophers. Even though this does not guarantee a single model that answers the majority of the questions, attempts to answer them can also reveal other limitations that emotion-based research might face, helping to surpass them.
4 Final Comments

Even though currently available knowledge about emotions has led AI and ALife researchers to propose models of emotion-based systems, an essential question is still left to be answered: to what extent can researchers abstract and model the supposed structural complexity involved in emotion phenomena? Indeed, the lack of appropriate frameworks for common reflection, and of standards for a sound validation practice, is a restriction that needs to be surpassed [47]. Constructive criticism and project comparisons, commonly made in any research field, can be a beneficial counterpart to experimental progress and development.

Our hope is that the development of computational models of emotion becomes a core research focus of Artificial Intelligence, and that we soon see many advances in such systems. As more neuroscience findings are published, it will become easier to construct emotion-based agent systems. Computational projects with a specific focus will be able to extend their scope to include subsystems hardwired to specific emotions. On the other hand, conducting more emotion-based computational experiments will improve our knowledge of still unknown mind functions and provide a test-bed for theories of biological emotion [30]. The extent to which research in AI and ALife will improve our understanding of mind phenomena and allow us to develop new robust and trustworthy artifacts will depend on the extent to which we are able to answer the remaining open questions. Overcoming these challenges can be an important step in the field's progress beyond engineering applications towards a more scientific discipline [47], allowing the answer to our title to be 'yes'.

Acknowledgments. João Queiroz is sponsored by FAPESB/CNPq. Jackeline Spinola and João Queiroz would like to thank the Brazilian National Research Council (CNPq) and The State of Bahia Research Foundation (FAPESB).
References

1. Ledoux, J.: The emotional brain: the mysterious underpinnings of emotional life. Touchstone, New York (1996)
2. Damásio, A.R.: Descartes' error: emotion, reason and the human brain. Avon Books, New York (1994)
3. Damásio, A.R., Grabowski, T., Bechara, A., Damásio, H., Ponto, L.L., Parvizi, J., Hichwa, R.D.: Subcortical and cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience 3(10), 1049–1056 (2000)
4. Nesse, R.M.: Computer emotions and mental software. Social Neuroscience Bulletin 7(2), 36–37 (1994)
5. Frijda, N.H.: The place of appraisal in emotion. Cognition and Emotion 7, 357–387 (1993)
6. Cañamero, D.: Modeling motivations and emotions as a basis for intelligent behavior. In: Proceedings of the 1st International Conference on Autonomous Agents, Marina Del Rey, California, pp. 148–155. ACM, New York (1997)
7. Plutchik, R.: Emotion: a psychoevolutionary synthesis. Harper & Row, New York (1980)
8. Ghiselin, M.T.: Darwin and evolutionary psychology. Science 179, 964–968 (1973)
9. Damásio, A.R.: Emotion and the human brain. Annals of the New York Academy of Sciences 935, 101–106 (2001)
10. Darwin, C.: The Expression of the Emotions in Man and Animals. University of Chicago Press, Chicago (1872/1965)
11. McCauley, T.L., Franklin, S.: An architecture for emotion. In: Cañamero, D. (ed.) Proceedings of the 1998 AAAI Fall Symposium on Emotional and Intelligent: The Tangled Knot of Cognition. Technical Report FS-98-03, pp. 122–127. AAAI Press, Menlo Park (1998)
12. Gratch, J., Marsella, S.: A domain-independent framework for modeling emotion. Journal of Cognitive Systems Research 5, 269–306 (2004)
13. Picard, R.: Affective Computing. MIT Press, Cambridge (1997)
14. Reilly, W.S., Bates, J.: Building Emotional Agents. Technical Report CMU-CS-92-143, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1992)
15. Elliott, C.: Hunting for the holy grail with "emotionally intelligent" virtual actors. SIGART Bull. 9(1), 20–28 (1998)
16. Cassell, J.: Oral tradition, aboral coordination: building rapport with embodied conversational agents. In: Proceedings of the 10th International Conference on Intelligent User Interfaces, San Diego, California. ACM Press, New York (2005)
17. Velásquez, J.: Modeling emotion-based decision-making. In: Cañamero, D. (ed.) Proceedings of the 1998 AAAI Fall Symposium. Emotional and Intelligent: The Tangled Knot of Cognition, pp. 164–169. AAAI Press, Menlo Park (1998)
18. Nehaniv, C.: The first, second, and third person emotions: grounding adaptation in a biological and social world. In: Numaoka, C., Cañamero, D., Petta, P. (eds.) Grounding Emotions in Adaptive Systems. 5th International Conference of the Society for Adaptive Behavior (SAB'98), pp. 43–47. University of Zurich, Switzerland (1998)
19. Custódio, L., Ventura, R., Pinto-Ferreira, C.: Artificial emotions and emotion-based control systems. In: Proceedings of the 7th IEEE International Conference on Emerging Technologies and Factory Automation, vol. 2, pp. 1415–1420. IEEE, Los Alamitos (1999)
20. Petta, P., Cañamero, D. (eds.): Grounding emotions in adaptive systems: volume II. Cybernetics and Systems: An International Journal 32(6), 581–583 (2001)
21. Gadanho, S.C., Hallam, J.: Emotion-triggered learning in autonomous robot control. Cybernetics and Systems: An International Journal 32(5), 531–559 (2001)
22. Scheutz, M.: Useful roles of emotions in artificial agents: a case study from artificial life. In: Proceedings of AAAI 2004, pp. 42–48. AAAI Press, Menlo Park (2004)
23. Kato, T., Arita, T.: Evolutionary simulations based on a robotic approach to emotion. In: The 10th International Symposium on Artificial Life and Robotics, Oita, Japan, pp. 258–261 (2005)
24. Cañamero, D.: Emotions and adaptation in autonomous agents: a design perspective. Cybernetics and Systems: An International Journal 32, 507–529 (2001)
25. Fellous, J.-M.: From human emotions to robot emotions. In: Hudlicka, E., Cañamero, D. (eds.) Architectures for Modeling Emotions: Cross-Disciplinary Foundations. AAAI Spring Symposium, pp. 37–47. AAAI Press, Menlo Park (2004)
26. Gomi, T., Ulvr, J.: Artificial emotions as emergent phenomena. In: Proceedings of the 2nd IEEE Workshop on Robot and Human Communication, Tokyo, Japan, pp. 420–425. IEEE Computer Society Press, Los Alamitos (1993)
27. Velásquez, J.: When robots weep: emotional memories and decision-making. In: Proceedings of the 15th National Conference on Artificial Intelligence, Madison, WI, pp. 70–75 (1998)
28. Staller, A., Petta, P.: Towards a tractable appraisal-based architecture for situated cognizers. In: Numaoka, C., Cañamero, D., Petta, P. (eds.) Grounding Emotions in Adaptive Systems. 5th International Conference of the Society for Adaptive Behavior (SAB'98), August 1998, pp. 56–61. University of Zurich, Switzerland (1998)
29. Sloman, A.: How many separately evolved emotional beasties live within us? In: Trappl, R., Petta, P., Payr, S. (eds.) Emotions in Humans and Artifacts, pp. 35–114. MIT Press, Cambridge (2002)
30. Arbib, M.A., Fellous, J.-M.: Emotions: from brain to robot. Trends in Cognitive Sciences 8(12), 554–561 (2004)
31. Dalgleish, T.: The emotional brain. Nature Reviews Neuroscience 5, 582–589 (2004)
32. Kelley, A.E.: Neurochemical networks encoding emotion and motivation: an evolutionary perspective. In: Fellous, J.-M., Arbib, M.A. (eds.) Who Needs Emotions? The Brain Meets the Robot, pp. 29–77. Oxford University Press, New York (2005)
33. Gadanho, S.C.: Learning behavior-selection by emotions and cognition in a multi-goal robot task. Journal of Machine Learning Research 4, 385–412 (2003)
34. Arzi-Gonczarowski, Z.: AI emotions: will one know them when one sees them? Agent construction and emotions. In: Trappl, R. (ed.) Cybernetics and Systems, pp. 739–744. Austrian Society for Cybernetic Studies, Vienna, Austria (2002)
35. Grand, S., Cliff, D., Malhotra, A.: Creatures: artificial life autonomous software agents for home entertainment. In: Proceedings of the 1st International Conference on Autonomous Agents, Marina Del Rey, California, pp. 22–29. ACM, New York (1997)
36. Ortony, A., Clore, G.L., Collins, A.: The Cognitive Structure of Emotions. Cambridge University Press, New York (1988)
37. Petta, P., Trappl, R.: Emotions and agents. In: Luck, M., Mařík, V., Štěpánková, O., Trappl, R. (eds.) ACAI 2001 and EASSS 2001. LNCS (LNAI), vol. 2086, pp. 301–316. Springer, Heidelberg (2001)
38. Ekman, P.: Basic emotions. In: Dalgleish, T., Power, T. (eds.) The Handbook of Cognition and Emotion, pp. 45–60. John Wiley & Sons, Sussex (1999)
39. Delgado-Mata, C., Aylett, R.S.: Emotion and action selection: regulating the collective behaviour of agents in virtual environments. In: 3rd International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 3, pp. 1304–1305 (2004)
40. Oatley, K.: Emotions. In: Wilson, R.A., Keil, F.C. (eds.) The MIT Encyclopedia of the Cognitive Sciences, pp. 273–275. MIT Press, Cambridge (1999)
41. Sloman, A., Chrisley, R., Scheutz, M.: Who needs emotions? The brain meets the machine. In: Arbib, M., Fellous, J.-M. (eds.). Oxford University Press, New York (2003)
42. Wehrle, T.: Motivations behind modeling emotional agents: whose emotion does your robot have? In: Numaoka, C., Cañamero, D., Petta, P. (eds.) Grounding Emotions in Adaptive Systems. 5th International Conference of the Society for Adaptive Behavior (SAB'98), August 1998. University of Zurich, Switzerland (1998)
43. Kitamura, T.: An architecture of behavior selection grounding emotions. In: Numaoka, C., Cañamero, D., Petta, P. (eds.) Grounding Emotions in Adaptive Systems. 5th International Conference of the Society for Adaptive Behavior (SAB'98), August 1998. University of Zurich, Switzerland (1998)
44. Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (1999)
45. Nolfi, S., Floreano, D.: Synthesis of autonomous robots through artificial evolution. Trends in Cognitive Sciences 1, 31–37 (2002)
46. Sloman, A.: What are emotion theories about? In: Hudlicka, E., Cañamero, D. (eds.) Architectures for Modeling Emotions: Cross-Disciplinary Foundations. AAAI Spring Symposium, pp. 128–134. AAAI Press, Menlo Park (2004)
47. Hudlicka, E., Cañamero, D. (eds.): Architectures for Modeling Emotion: Cross-Disciplinary Foundations. 2004 AAAI Spring Symposium Technical Report. AAAI Press (2004). Available at: http://www.aaai.org/Press/Reports/Symposia/Spring/ss-0402.php (accessed 11/05/2006)
48. Media Lab – home page of the Affective Computing research group at the Massachusetts Institute of Technology: http://affect.media.mit.edu/projects.php