Reasoning and argumentation

Hugo Mercier

NON COPYEDITED VERSION, PLEASE DO NOT QUOTE

Reasoning can be understood as the private mental act of accepting or rejecting a conclusion based on reasons supporting or attacking this conclusion. For instance, Paul hesitates between going to the Japanese and to the Mexican restaurant, but then he remembers that he is a bit hard up at the moment, and that the Japanese restaurant is more expensive; he then has a reason to pick the Mexican place. Argumentation, which involves reasoning, is a public act of using reasons to convince others and of evaluating others' reasons to decide whether one ought to be convinced. Knowing that Lara is also on a budget, Paul could use the same reason to convince her that Mexican is the best option, and she could evaluate this reason to make sure it is sound.

Historically, scholars have held widely diverging views of the relation between private reasoning and argumentation (see Dutilh Novaes, submitted). One old tradition, which we can call social, sees them as indistinguishable. Aristotle defines "deduction [as] a discourse" (Prior Analytics, 24b19) and for Isocrates "the same arguments which we use in persuading others when we speak in public, we employ also when we deliberate in our own thoughts" (Antidosis, 256, cited in Billig, 1996). Another tradition, which we can call individualist, was developed more recently, in particular following Descartes. This tradition contrasts argumentation and private reasoning, sometimes in order to highlight the superiority of private reasoning. In the following passage, Descartes equates the 'logic of the Schools' with a form of argumentation:

    After that, he [who aims to instruct himself] should study logic. I do not mean the logic of the Schools, for this is strictly speaking nothing but a dialectic which teaches ways of expounding to others what one already knows or even of holding forth without judgment about things one does not know. Such logic corrupts good sense rather than increasing it. I mean instead the kind of logic which teaches us to direct our reason with a view to discovering the truths of which we are ignorant. (Descartes, 1985, p. 186; cited in Dutilh Novaes, submitted)

We find echoes of the social tradition in some contemporary research programs (see Billig, 1996). For instance, Piaget's early work—in which he claimed that "logical reasoning is an argument which we have with ourselves, and which reproduces internally the features of a real argument" (Piaget, 1928, p. 204)—inspired researchers to study the role of argumentation in cognitive development (Doise & Mugny, 1984; Perret-Clermont, 1980). The individualist tradition, however, proved vastly more influential (including on Piaget's later work). Until very recently, the experimental study of reasoning essentially ignored argumentation—and, on their side, argumentation scholars showed little interest in the cognitive underpinnings of argumentative skills. However, the grip of the individualist tradition is loosening.

The psychology of reasoning had largely focused on logical, and in particular on deductive, arguments. Deductive arguments entail that a conclusion must necessarily be accepted once the premises are accepted. In everyday discourse, by contrast, a conclusion can be taken back when new evidence emerges, making the study of deductive reasoning ecologically dubious. The limitations of this so-called 'deductive paradigm' are now recognized (Evans, 2002).

By contrast, probabilistic arguments—arguments that simply make a conclusion more believable and that are more ecologically valid—now receive more attention (Hahn & Oaksford, 2007). On the side of argumentation studies, scholars have begun to conduct experiments (Hahn & Hornikx, 2012) and to integrate cognitive psychology in their analyses (Herman & Oswald, 2014; Maillat & Oswald, 2013).

A recent theory—the argumentative theory of reasoning—challenges the individualist tradition by suggesting that the function of reasoning is to argue, and that argumentation is more likely to improve one's beliefs than solitary reasoning (Mercier & Sperber, 2011). In this theory, reasoning refers to the cognitive mechanism that deals with the relation between reasons and the conclusions they purportedly support. Given that most of cognition takes place with no attention being paid to reasons (Mercier & Sperber, 2009), this makes reasoning a specific cognitive mechanism, one that only humans would possess. The argumentative theory of reasoning can be related to the work of other scholars who have suggested that argumentation plays a central role in moral reasoning (Gibbard, 1990), communication (Ducrot & Anscombre, 1983), social cognition (Billig, 1996), and human affairs more generally (Perelman & Olbrechts-Tyteca, 1958). However, it is the first theory to make full use of experimental psychology's advances. The predictions of the argumentative theory can be used as a framework to make sense of the wide array of evidence bearing on the links between reasoning and argumentation. In order to spell out these predictions in more detail, we must start with an outline of the evolutionary rationale for the theory.

1. Evolution, reasoning, and argumentation

The individualist tradition sees reasoning as aimed at helping the lone reasoner produce sound beliefs, largely by realizing that one's intuitions cannot be properly supported by reasons. Several scholars have attempted to give this tradition an evolutionary grounding (e.g. Stanovich, 2004). However, it is unclear how a mechanism whose failures even in simple tasks have been amply documented (e.g. Evans, 2002) could have evolved to correct intuitive mechanisms that perform, by and large, very well (e.g. Gigerenzer, Todd, & ABC Research Group, 1999). Moreover, evolutionary psychologists have forcefully argued that such domain-general mechanisms face strong evolutionary hurdles that make their existence improbable (e.g. Cosmides & Tooby, 1992). The gist of their argument is that domain-general mechanisms would be computationally intractable—they have to solve too many problems at once. By contrast, domain-specific mechanisms use the specific regularities of their domain as computational shortcuts. It is more plausible to ascribe reasoning some less general function.

To this end, the argumentative theory relies on the framework of the evolution of communication. For communication to be evolutionarily stable, it has to benefit both senders and receivers (see, e.g., Maynard Smith & Harper, 2003).

Senders, however, often stand to benefit from communication that would be harmful to receivers. For instance, I would be better off if I could convince everyone to buy my books, but not everyone would be made better off by buying my books. Thus, there must exist mechanisms that ensure that communication is, in spite of this conflict of interest, beneficial for receivers on average. To keep communication beneficial for receivers, humans rely on mechanisms of epistemic vigilance that evaluate communicated information to reject harmful messages and accept beneficial ones (Sperber et al., 2010).

Two important mechanisms of epistemic vigilance are plausibility checking and trust calibration. Plausibility checking pits communicated information against background knowledge and, in case of inconsistency, rejects the communicated information. For instance, if one of your junior colleagues tells you that the idea you are currently working on is flat out wrong, your first reaction might be to dismiss her opinion. The second mechanism, trust calibration, can, to some extent, bypass plausibility checking. If a sender is deemed to be particularly competent and honest, then her messages might be accepted even if they conflict with the receiver's beliefs (for related Bayesian models of these phenomena, see, e.g., Hahn, Harris, & Corner, 2009). For instance, you might accept the negative assessment of your idea if it comes from someone you trust and who is much more knowledgeable than you in the relevant area. By contrast, if the source is untrustworthy—a colleague suspected of fraud, say—then even a message that might otherwise have been accepted can become suspicious.

Both mechanisms—plausibility checking and trust calibration—ought to be conservative. To limit the costs of harmful messages, they should reject too many messages rather than accept too many. Many messages that could be beneficial for receivers are thus rejected because they do not pass the receivers' plausibility check and the sender is not trusted enough (Mercier, 2013a; Sperber et al., 2010). To achieve a finer-grained discrimination of messages, senders and receivers can rely on argumentation. Senders provide reasons supporting their messages, and receivers evaluate these reasons in order to decide whether or not they should change their mind. For instance, your junior colleague might get you to change your mind if she offers good enough reasons. Reasoning would have evolved mainly to enable such argumentation: to allow senders to find arguments supporting their messages, and to allow receivers to evaluate these arguments (Mercier & Sperber, 2011). To evaluate this hypothesis about the function of reasoning, we can check whether it can account for well-established features of reasoning and whether it can make new predictions that withstand testing.

2. Producing arguments

To fulfill an argumentative function, reasoning must be able to do two things: produce arguments to convince others, and evaluate others' arguments in order to change one's mind when, and only when, the arguments are strong enough. To be more likely to convince others, reasoning should mostly produce arguments that support the reasoner's position, whether directly or by attacking the interlocutor's position—one is unlikely to get an interlocutor to change her mind by producing arguments that go against one's point of view or that support the interlocutor's position. Reasoning, when it produces arguments, should thus have a myside bias.

In many tasks (Mercier, in press; Nickerson, 1998), participants have been shown to look for arguments that support their initial intuition. The Wason selection task—the most studied of all reasoning tasks—is a good example. Solving it requires that participants understand the logic of a simple conditional statement. (In a standard version, participants see four cards showing A, K, 4, and 7, and must select just those cards that need to be turned over to check the rule "if a card has a vowel on one side, it has an even number on the other side"; the logically correct choice is A and 7, but most participants select A alone or A and 4.) When faced with this task, pragmatic mechanisms rapidly guide participants' attention towards one of the potential answers—the one made most relevant by the conditional rule (Sperber, Cara, & Girotto, 1995; see also, e.g., V. A. Thompson, Evans, & Campbell, 2013). When participants start reasoning, they do not look for reasons that could support other answers; instead, they focus their attention on the intuitive answer, looking for reasons why this answer is correct (Lucas & Ball, 2005; Roberts & Newton, 2001; for other types of problems, see, e.g., V. A. Thompson, Turner, & Pennycook, 2011). This phenomenon has been described as a 'confirmation bias.'

However, other experiments have revealed that when participants reason about something they disagree with, they look for arguments that attack or falsify this position (e.g., Edwards & Smith, 1996). For instance, when presented with the following argument

    Sentencing a person to death ensures that he/she will never commit another crime. Therefore, the death penalty should not be abolished. (Edwards & Smith, 1996, p. 9)

anti-death penalty participants were more likely to form refutational than supportive thoughts. Pro-death penalty participants, by contrast, exhibited the standard confirmation bias, providing more supportive than refutational arguments. Individuals do not have a general tendency to confirm—a confirmation bias. Instead, they have a consistent myside bias: a tendency to find arguments that support their point of view, whether that means supporting a position they agree with, or attacking a position they disagree with.

Besides the directionality of the arguments (i.e. for or against a position), another variable of argument production is the quality of the arguments people aim for. If the task of argument production is to convince others, then it might seem like the best solution would be to look for strong arguments, arguments that cannot be easily countered. This goal, however, is costly to achieve. To produce strong arguments, one has to anticipate how the interlocutor would react to one's arguments, a task that is generally difficult, and sometimes impossible, as it depends on the interlocutor's preferences and beliefs, to which the speaker might have no access (Mercier, Bonnier, & Trouche, in press). For instance, if Paul wants to convince Lara to go see a given movie, he might need to anticipate the type of movie she likes, which movies she has already seen, when she is available, etc.

The problems raised by finding the best way to get one's message across are not specific to argumentation. For instance, when speakers are looking for a way to refer to someone, and they are not sure how well the interlocutor knows this person, they tend to start with the generic means of referring to someone (e.g. the person's first name). If the interlocutor does not understand, they further specify the referent (see, e.g., Levinson, 2006). Similarly, speakers can rely on the interlocutor's feedback to refine their arguments. A good strategy is to start with a relatively generic argument—an argument not specifically tailored to the interlocutor. If it is accepted, then there was no need for a more carefully crafted argument. If the first argument is rejected, the interlocutor typically provides counter-arguments to justify her rejection. The speaker can then address these counter-arguments, a task that is much easier than anticipating them.

To follow up on the earlier example, Paul might start by simply saying that he has heard that the movie is great. Lara isn't swayed, and she replies that she hasn't seen the previous episodes of the series, and so might not be able to follow the story. Although it might have been difficult for Paul to anticipate that this specific counter-argument would be used, he can attempt to address it once it has been raised—for instance by saying that he can give a brief summary of the previous episodes.

Thus, the argumentative theory does not predict that people should spontaneously produce very strong arguments. In the absence of feedback, people should mostly produce relatively generic and superficial arguments. But people should also be able to take feedback into account to improve their arguments in the course of a discussion. Studies of argumentation and informal reasoning, in which participants are asked to justify their positions on various issues, have concluded that participants typically produce relatively weak arguments: they fail to anticipate simple counter-arguments, they offer circular arguments, and they do not incorporate evidence in their arguments (Kuhn, 1991; Perkins, 1989). In these studies, the participants are not confronted with interlocutors who challenge their arguments by offering counter-arguments. When participants discuss similar issues with other participants holding different positions, they produce better arguments at the end of the discussion: they present a wider range of arguments, they anticipate counter-arguments, and they offer more evidence (Crowell & Kuhn, 2014; Iordanou, 2013; Kuhn & Crowell, 2011).

These two features of argument production—the myside bias and the initial production of generic arguments—make sense in dialogic contexts, in which speakers face interlocutors who provide arguments for their own views and who challenge the speakers' arguments. By contrast, these features are problematic for the solitary reasoner, who is likely to pile up unexamined arguments for her preexisting positions. Thus, when participants are faced with tasks for which their intuitions are misleading, reasoning fails to look for reasons supporting answers other than the intuitive answer, and it fails to make sure that the reasons people find in support of the intuitive answer are sound.

For instance, in the Wason selection task, not only do people mostly look for reasons supporting their initial intuition, they also fail to realize that these reasons are somehow mistaken (since they support a logically flawed answer). As a result, reasoning rarely challenges the initial, misguided intuition (Wason, 1966; see also, e.g., V. A. Thompson et al., 2011). Moreover, by accumulating reasons that support their initial intuitions, solitary reasoners can become surer that they are right, even if they are wrong (overconfidence; see Koriat, Lichtenstein, & Fischhoff, 1980), and they can come to hold more extreme views (attitude polarization; see Tesser, 1978).

When people do not have an initial intuition to support—when they have weak or conflicting intuitions—reasoning has different consequences. Instead of accumulating reasons for one's initial intuition, reasoning looks for reasons supporting the competing intuitions, and it drives the reasoner towards the intuition for which reasons are most easily found. This phenomenon is generally referred to as 'reason-based choice' (Shafir, Simonson, & Tversky, 1993; Simonson, 1989). Reason-based choice does not consistently lead people towards decisions that are intrinsically superior. Instead, it leads people towards decisions that are justifiable—decisions that look rational. As a result, reason-based choice can create a variety of apparently suboptimal choices (for review, see Mercier & Sperber, 2011). For instance, reason-based choice explains why people often choose items laden with features: everything else being equal, it looks more rational to have more features—even if, in the end, these features prove to be cumbersome rather than useful (D. V. Thompson & Norton, 2008).

If the function of reasoning were to better one's beliefs through private ratiocination, as held by the individualist view, then reasoning should look for arguments that challenge one's position (instead of having a myside bias), it should make sure that one's arguments are good (instead of being satisfied with weak, generic arguments), it should correct one's mistaken intuitions (instead of failing to do so), it should lead to better calibrated confidence (instead of overconfidence), and it should produce intrinsically better decisions (instead of decisions that look rational). By contrast, if the function of argument production is to convince others, then reasoning should have a myside bias, and it should spontaneously produce relatively weak, generic arguments. The effects of reasoning on the solitary reasoner do not mean that reasoning is flawed, simply that it is used in an abnormal environment, one that lacks the feedback others would provide in a dialogic context.

3. Evaluating others' arguments

When evaluating arguments, reasoning's task is to determine to what extent an argument warrants changing one's mind about the argument's conclusion. In order not to change one's mind for bad reasons, reasoning should therefore be demanding towards other people's arguments—at least when they challenge one's position. This critical evaluation should contrast with the way people treat their own arguments: as was just discussed, when people produce arguments, they should be satisfied with relatively weak and generic arguments.

People should thus be more critical of others' arguments than they are of their own. The ideal test of this prediction involves making people evaluate their own arguments as if they were someone else's.

To this end, Trouche et al. (in press) relied on the choice blindness paradigm. Participants tackled five simple reasoning problems, for which they were first asked to produce an intuitive answer, one that does not involve reasoning. After this first phase, participants were asked to produce arguments, and offered the possibility to change their initial answer. People displayed a myside bias, and they were satisfied with relatively weak arguments: not only did only a small minority of participants decide to revise their answers, but they were not more likely to do so when their intuitive answer was invalid than when it was valid (Trouche et al., in press). After this second phase, participants were presented with the same problems again, reminded of their previous answer, and given the answer and the argument provided by another participant. This only happened for four of the five problems. For the last problem, the participants were told that they had given a different answer from the one they had in fact originally given, and they were provided with their own initial answer and argument as if they were someone else's. For each problem, the participants could then decide if they wanted to change their mind or not on the basis of the argument. Debriefing questions revealed that approximately half of the participants did not notice the manipulation. Of these participants, over half rejected the argument they had deemed good enough to produce a few minutes earlier. Moreover, they were more likely to reject their own argument if it supported an invalid answer than a valid answer. They had become more critical and more discriminating because they thought the argument was someone else's.

Besides their own arguments, people should also have relatively lax criteria when evaluating arguments whose conclusion they agree with: since other mechanisms have already positively evaluated the conclusion, the risk of being misled is considerably reduced. This might explain why participants are less likely to detect that an argument is logically invalid when they agree with its conclusion (belief bias; see Evans, Barston, & Pollard, 1983).

The function of argument evaluation, however, is not merely to reject poor arguments; it is also to accept strong enough arguments. Experiments suggest that individuals, on the whole, have good argument evaluation skills, being more persuaded by strong than by weak arguments. This has been shown using several normative models for what counts as a good argument. Research in persuasion and attitude change has typically relied on informal criteria for distinguishing strong from weak arguments. For instance, weak arguments could be hearsay from unreliable sources, while strong arguments could be relevant evidence from reliable sources. Contrasting these types of arguments, many experiments have shown that participants who have a stake in the arguments' conclusion are more influenced by strong than by weak arguments (for review, see Petty & Wegener, 1998).

Other researchers have relied on norms stemming from argumentation theory, norms that specify on a case-by-case basis what makes arguments of a given type—argument from authority, ad hominem, etc.—fallacious or not. Generally, participants find non-fallacious arguments more persuasive than fallacious arguments (for review, see Hornikx & Hahn, 2012). For instance, when evaluating how much weight to grant an argument from authority, participants were sensitive to the authority's expertise and to the presence of potential vested interests (Hoeken, Timmers, & Schellens, 2012).

A more general normative framework for determining argument strength can be derived from Bayes' rule. Bayes' rule, according to which the probability of a hypothesis H given evidence e equals P(e|H)P(H)/P(e), specifies how one should revise one's belief in light of new evidence. In the case at hand, the evidence takes the form of arguments. Using this framework, it is possible to make predictions, for each argument type, about the factors that make arguments stronger or weaker (Hahn & Oaksford, 2008). For instance, a Bayesian analysis can make predictions about how much participants with different priors should change their mind on the basis of the following two arguments from ignorance (Hahn & Oaksford, 2007, p. 708):

    Drug A is not toxic because no toxic effects were observed in 50 tests.
    Drug A is not toxic because no toxic effects were observed in 1 test.

On the whole, participants evaluate arguments in the way predicted by the Bayesian framework (for review, see Collins & Hahn, in press)—in this example, by granting more weight to the former argument than to the latter.

The traits of argument evaluation stand in sharp contrast with those of argument production. People produce biased, superficially examined arguments. When they evaluate others' arguments, they are critical enough to reject weak arguments, and objective enough to accept strong arguments. That participants are able to evaluate others' arguments in this way, and yet fail to submit their own arguments to the same treatment, shows that the traits of argument production are not a mere cognitive limitation. They are genuine features—those expected of a mechanism dedicated to argumentation.

4. Reasoning in discussion

The last two sections have focused on studies of reasoning in isolated participants, whether they were asked to produce arguments or to evaluate them. These studies allow for better control: for instance, it is possible to precisely vary the arguments people have to evaluate. However, they study reasoning in an environment that is, according to the argumentative theory, not reasoning's normal environment. This purportedly explains why, in these tasks, reasoning consistently fails to correct participants' misguided intuitions. By contrast, when reasoning is used in the back and forth of a discussion, it should produce epistemically sounder results. Each individual should be able to find arguments supporting their position, and to examine others' arguments. The individuals should improve their arguments by taking counter-arguments into account.

When strong enough arguments are exchanged, individuals should change their minds to adopt better supported positions, which should usually mean holding better beliefs.

In simple logical and mathematical tasks, the correct answer can be supported with arguments that most participants are able to understand. Thus, in these tasks the correct answer should spread: as soon as a group member has found the correct answer, or a piece of the correct answer, she should be able to convince her peers. This superiority of the correct answer is known as 'truth wins,' and it has been observed in various logical and mathematical tasks (for review, see Laughlin, 2011). For instance, when participants had to solve the Wason selection task on their own and then in groups, performance jumped from 21% of correct answers after solitary reasoning to 79% after the discussion (Moshman & Geil, 1998). For more complex tasks, in which a single group member is unlikely to have found the whole correct answer on her own, groups can perform even better. In the course of the discussion, the elements of the correct answer can be taken from different participants and assembled to reach a solution better than that reached by any individual member (e.g. Laughlin, Bonner, & Miner, 2002).

Many problems, however, do not have an easily demonstrated answer. Still, as long as better answers can be supported by better arguments, group discussion should yield, on average, better answers than those following individual reasoning. This improvement has been observed on a wide variety of tasks, such as induction tasks (Laughlin, VanderStoep, & Hollingshead, 1991), numerical estimations (Minson, Liberman, & Ross, 2011; Sniezek & Henry, 1989), and several others (Laughlin, 2011).

The gap in performance between individual reasoning and reasoning after discussion is difficult to deny. However, this gap could be caused by other processes besides argumentation. Simple means of opinion aggregation, which do not require the exchange of arguments, often lead to improved performance as well: following the majority opinion (R. Hastie & Kameda, 2005), following the most confident group member (Koriat, 2012), or averaging opinions (Soll & Larrick, 2009). However, it can be shown that in many cases argumentation plays a role beyond these simpler means of aggregation. Argumentation can beat following the majority or the most confident group member. A group member who has the correct answer to a logical or mathematical problem can convince the whole group, even if the other group members all agree on the same wrong answer, and even if she is not the most confident group member (Trouche, Sander, & Mercier, 2014). It has also been shown that discussion can improve performance beyond the simple averaging of opinions (Minson et al., 2011). Thus, argumentation often outperforms other 'wisdom of crowds' mechanisms—its main limitation is that it is harder to scale up than voting, for instance.
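To see why argumentation can outperform these simpler aggregation rules, consider the following toy simulation. It is a minimal sketch, not taken from any of the studies cited: the 20% individual success rate, the group size of five, and the assumptions that wrong answerers converge on the same intuitive error and that confidence is unrelated to accuracy are all illustrative. Under these assumptions, majority voting amplifies the dominant error, deferring to the most confident member does no better than an average individual, whereas a 'truth wins' process, in which a single member who has the correct answer can demonstrate it to the others, lifts group performance dramatically.

    import random

    random.seed(0)

    N_GROUPS = 10_000   # number of simulated groups (illustrative value)
    GROUP_SIZE = 5      # members per group (illustrative value)
    P_CORRECT = 0.20    # assumed probability that a lone reasoner solves the problem

    def simulate_group():
        # Each member is a (is_correct, confidence) pair. Wrong answerers are
        # assumed to converge on the same intuitive error, and confidence is
        # assumed to be unrelated to accuracy (both simplifying assumptions).
        return [(random.random() < P_CORRECT, random.random())
                for _ in range(GROUP_SIZE)]

    def majority(group):
        # The group is right only if correct answerers outnumber the wrong ones.
        return sum(correct for correct, _ in group) > GROUP_SIZE / 2

    def most_confident(group):
        # Defer to the member with the highest confidence.
        return max(group, key=lambda member: member[1])[0]

    def truth_wins(group):
        # Argumentation over a demonstrable problem: a single correct member
        # is assumed to be able to convince the rest with her arguments.
        return any(correct for correct, _ in group)

    groups = [simulate_group() for _ in range(N_GROUPS)]
    print(f"individual accuracy: {P_CORRECT:.0%}")
    for name, rule in [("majority vote", majority),
                       ("most confident member", most_confident),
                       ("truth wins (argumentation)", truth_wins)]:
        accuracy = sum(rule(g) for g in groups) / N_GROUPS
        print(f"{name:>27}: {accuracy:.0%}")

Under these illustrative assumptions, the simulation yields roughly 6% accuracy for majority voting, 20% for deferring to the most confident member, and about 67% (that is, 1 minus 0.8 to the fifth power) for 'truth wins', which is in the same range as the group performance on the Wason selection task reported by Moshman and Geil (1998).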

Although discussion has been shown to lead to better performance for a wide range of tasks, it can also have detrimental effects. The best known is that discussion can lead group members to polarize: to develop stronger views on the topic under discussion (for review, see Isenberg, 1986). For instance, take participants who agree that, in a base rate problem, base rates should be neglected, and make them talk with each other: they will tend to ignore the base rates even more after the discussion. The opposite happens if they all agree that base rates should be taken into account (Hinsz, Tindale, & Nagao, 2008).

Group polarization can be explained by the properties of reasoning described above. When group members are made to discuss a topic they all agree on, they offer arguments, but they should not be expected to be very critical of each other's arguments, since they agree on the arguments' conclusions. As the group members now think that there are more reasons supporting their beliefs, they develop stronger beliefs (see Vinokur & Burnstein, 1978). This is similar to the process of attitude polarization that takes place when people reason on their own. In both cases, polarization happens because the arguments produced are not critically evaluated—because there is no audience in the case of solitary reasoning, or because the audience agrees with the arguments' conclusion in the case of group discussion.

5. Developmental and cross-cultural data

I have argued that there is a strong link between reasoning and argumentation—more specifically, that argumentation is the main function of reasoning. This hypothesis can account for the traits and effects of reasoning. However, almost all the evidence bearing on these traits and effects has been gathered in WEIRD populations (Western, Educated, Industrialized, Rich, and Democratic; see Henrich, Heine, & Norenzayan, 2010). It has been claimed that these populations are, in some respects, different from other populations—for instance, they are more individualistic. The features of reasoning reviewed so far could also be specific to WEIRD participants.

Argumentation is central to the most important Western institutions, from politics to law and science. WEIRD cultures tend to be diverse, confronting individuals with people who have different views on nearly every topic, which might create more opportunities—real or anticipated—for the exchange of arguments. WEIRD parents, particularly middle- and upper-class ones, provide reasons to their children, and expect children to provide and ask for reasons (e.g. Tizard, Hughes, Carmichael, & Pinkerton, 1983). By contrast, other cultures—some Eastern cultures in particular—have a more negative view of argumentation, which can be seen as a threat to social harmony (Becker, 1986; Nakamura, 1964). Members of traditional populations have a higher proportion of shared beliefs, and thus less pressure to constantly justify their views. They expect their children to comply (Maratsos, 2007), and the children know not to question their parents (Gauvain, Munroe, & Beebe, 2013). It is thus possible that the features of reasoning reviewed above result from a specific set of cultural factors rather than universal selection pressures. To argue against this possibility, I briefly review evidence suggesting that these features are both universal and early developing, two lines of argument that suggest they are not culturally acquired.

The ability to produce arguments is universal. Even the speakers of Pirahã—a language that has been claimed to lack words or markers for conditionals, disjunctions, conjunctions, comparatives, or quantifiers (Everett et al., 2005)—produce arguments (Everett, 2008; Everett et al., 2005). No culture has been reported in which people would have a natural proclivity to find arguments for other people's point of view, or in which they would spontaneously be able to produce very strong arguments. Experimental evidence shows that individual reasoning exhibits the same failures in all the cultures tested (e.g. Castelain, Girotto, Jamet, & Mercier, submitted; Dasen, 1972; Yama, 2001). Children start to produce arguments very early—as early as 2 years of age in some cases (for review, see Mercier, 2011a). Preschoolers are already able to take common ground into account to decide whether they should produce arguments or not (Köymen, Rosenbaum, & Tomasello, 2014). The arguments they produce are biased to support their point of view (Köymen et al., 2014; Ross, Smith, Spielmacher, & Recchia, 2004). Thus the traits of argument production seem to be universal and to develop early.

The ability to soundly evaluate arguments is more difficult to assess from observational data—the main source of data available for non-WEIRD cultures and young children. Still, the observational data suggest decent argument evaluation skills. A few ethnographies have offered details of debates in traditional populations, and they suggest that the participants were convinced only by good arguments (see in particular Hutchins, 1980). The parenting literature suggests that using good reasons in addressing children leads to more compliance (Grusec & Goodnow, 1994). The results from the few experimental studies available also support the existence of good argument evaluation skills. Participants in a traditional Maya population were more likely to accept the arguments supporting the correct answer to a reasoning problem than those supporting the wrong answer (Castelain et al., submitted). Preschoolers are more likely to endorse testimony supported by a strong, perceptual argument than testimony supported by a weak, circular argument (Mercier, Bernard, & Clément, 2014; see also Corriveau & Kurkul, 2014).

Finally, group discussion has the potential to dramatically improve reasoning performance in non-WEIRD cultures. Besides observational and anecdotal evidence (Boehm et al., 1996; Cole, Gay, Glick, & Sharp, 1971; Hutchins, 1980), a few experimental studies have replicated, in two other cultures, the improvement following group discussion usually observed in WEIRD populations: Japan (Mercier, Deguchi, Van der Henst, & Yama, in press) and traditional Maya populations in Guatemala (Castelain et al., submitted). Moreover, children as young as 5 years of age have also been shown to give better answers to reasoning problems after discussion with a peer (Doise & Mugny, 1984; Perret-Clermont, 1980).

Obviously, this does not mean that there is no difference in how people reason and argue in different cultures (see, e.g., Buchtel & Norenzayan, 2009; Mercier, 2013b). For instance, the members of each culture learn when argumentation is most appropriate and what type of argument is most effective in each specific context. Very little work has been devoted to understanding how these differences emerge, and this will constitute a fascinating topic for future research (Mercier, submitted).

6. Reasoning and argumentation outside the laboratory

With the exception of the last section, the results reviewed so far have been mostly gathered in the confined setting of the laboratory, using typical student populations, but the same patterns are observed in a variety of other contexts. Ethnographic (Dunbar, 1995) and experimental (Mahoney, 1977) studies of scientists show that they have a myside bias. Argumentation in science is typically very effective: from the micro-level of the lab meeting (Dunbar, 1995) to the macro-level at which new theories spread (Cohen, 1985), argumentation allows scientists to change their minds for the better (Mercier & Heintz, 2014). Thus reasoning in science is not an exception to the patterns reported above.

The same patterns are observed among other types of experts. For instance, when experts in political science make forecasts on their own, individual reasoning tends to make them overconfident (Tetlock, 2005). Having experts discuss with each other mitigates these issues and allows for better forecasts (Mellers et al., 2014; Rowe & Wright, 1996; for other domains of expertise, see Mercier, 2011c). In education, collaborative—or cooperative—learning has the potential to dramatically improve reasoning performance: by making students articulate justifications for their answers, and evaluate each other's justifications, these pedagogical tools can allow students not only to adopt correct answers, but also to reach a deeper understanding of the concepts involved (see, e.g., Slavin, 1995).

Argumentation has the potential to yield sounder beliefs and better decisions even in contexts fraught with emotion, such as juries attempting to reach a verdict (Ellsworth, 1989; R. Hastie, Penrod, & Pennington, 1983), discussions of moral dilemmas (see Mercier, 2011b), or citizens discussing policy issues (see, e.g., Fishkin, 2009; Mercier & Landemore, 2012). This very brief review shows that the main features of reasoning—myside bias in argument production, good ability to evaluate arguments, improvement in performance yielded by discussion—are found not only in the laboratory, but also in all the 'outside world' contexts for which data are available.

7. Conclusion

The individualist view of reasoning, according to which solitary reasoning aims at, and is able to deliver, epistemic and practical improvements, has a strong hold on our culture. It is manifest in the solitary genius view of science (Shapin, 1991), in the trepidation with which deliberation between citizens is perceived (Sunstein, 2002), and in the resistance to the use of argumentation in many other contexts—collaborative learning in schools, work teams, etc.

Even experts dramatically underestimate the benefits of argumentation. For instance, when asked to estimate how many people would be able to solve the Wason selection task on their own and in small groups, people believed that group discussion would provide little or no benefit. Even psychologists of reasoning underestimated the benefits of group discussion by a factor of two (Mercier, Trouche, Yama, Heintz, & Girotto, in press).

The results reviewed here strongly suggest that the individualist view of reasoning is mistaken. Reasoning's features make much more sense when it is understood as a social, and more specifically argumentative, mechanism. This has both scientific and practical implications. From a scientific perspective, this suggests that more attention should be paid to how reasoning works in social settings. As mentioned above, this attention shift is already under way, and one can only hope that it will gather pace in the coming years. From a practical perspective, the mismatch between the popular individualist view of reasoning and the arguably more accurate social view has to be addressed, and institutions should be fostered that put reasoning back in its normal social context, thus making the best of it.

References

Becker, C. B. (1986). Reasons for the lack of argumentation and debate in the Far East. International Journal of Intercultural Relations, 10(1), 75–92.

Billig, M. (1996). Arguing and Thinking: A Rhetorical Approach to Social Psychology. Cambridge: Cambridge University Press.

Boehm, C., Antweiler, C., Eibl-Eibesfeldt, I., Kent, S., Knauft, B. M., Mithen, S., … Wilson, D. S. (1996). Emergency decisions, cultural-selection mechanics, and group selection [and comments and reply]. Current Anthropology, 37(5), 763–793.

Buchtel, E. E., & Norenzayan, A. (2009). Thinking across cultures: Implications for dual processes. In J. S. B. T. Evans & K. Frankish (Eds.), In Two Minds. New York: Oxford University Press.

Castelain, T., Girotto, V., Jamet, F., & Mercier, H. (submitted). Evidence for core features of reasoning in a Mayan indigenous population.

Cohen, I. B. (1985). Revolution in science. Cambridge: Harvard University Press.

Cole, M., Gay, J., Glick, J. A., & Sharp, D. W. (1971). The cultural context of learning and thinking. New York: Basic Books.

Collins, P., & Hahn, U. (in press). Informal argument fallacies. In L. J. Ball & V. A. Thompson (Eds.), International Handbook of Thinking and Reasoning. London: Psychology Press.

Corriveau, K. H., & Kurkul, K. E. (2014). "Why does rain fall?": Children prefer to learn from an informant who uses noncircular explanations. Child Development, 85(5), 1827–1835.

Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. The Adapted Mind: Evolutionary Psychology and the Generation of Culture, 163–228.

Crowell, A., & Kuhn, D. (2014). Developing dialogic argumentation skills: A 3-year intervention study. Journal of Cognition and Development, 15(2), 363–381.

Dasen, P. R. (1972). Cross-cultural Piagetian research: A summary. Journal of Cross-Cultural Psychology, 3(1), 23–40.

Descartes, R. (1985). The Philosophical Writings of Descartes (J. Cottingham, R. Stoothoff, & D. Murdoch, Trans.) (Vol. 1). Cambridge: Cambridge University Press.

Doise, W., & Mugny, G. (1984). The Social Development of the Intellect. Oxford: Pergamon Press.

Ducrot, O., & Anscombre, J. C. (1983). L'argumentation dans la langue. Bruxelles: Mardaga.

Dunbar, K. (1995). How scientists really reason: Scientific reasoning in real-world laboratories. In R. J. Sternberg & J. E. Davidson (Eds.), The nature of insight (pp. 365–395). Cambridge: MIT Press.

Dutilh Novaes, C. (submitted). A dialogical, multi-agent account of the normativity of logic.

Edwards, K., & Smith, E. E. (1996). A disconfirmation bias in the evaluation of arguments. Journal of Personality and Social Psychology, 71, 5–24.

Ellsworth, P. C. (1989). Are twelve heads better than one? Law and Contemporary Problems, 205–224.

Evans, J. S. B. T. (2002). Logic and human reasoning: An assessment of the deduction paradigm. Psychological Bulletin, 128(6), 978–996.

Evans, J. S. B. T., Barston, J. L., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory and Cognition, 11, 295–306.

Everett, D. L. (2008). Don't Sleep, There Are Snakes. New York: Pantheon Books.

Everett, D. L., Berlin, B., Goncalves, M. A., Kay, P., Levinson, S. C., Pawley, A., … Everett, D. L. (2005). Cultural constraints on grammar and cognition in Pirahã. Current Anthropology, 46(4), 621–646.

Fishkin, J. S. (2009). When the People Speak: Deliberative Democracy and Public Consultation. Oxford: Oxford University Press.

Gauvain, M., Munroe, R. L., & Beebe, H. (2013). Children's questions in cross-cultural perspective: A four-culture study. Journal of Cross-Cultural Psychology, 44(7), 1148–1165.

Gibbard, A. (1990). Wise Choices, Apt Feelings. Cambridge: Cambridge University Press.

Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple Heuristics That Make Us Smart. Oxford: Oxford University Press.

Grusec, J. E., & Goodnow, J. J. (1994). Impact of parental discipline methods on the child's internalization of values: A reconceptualization of current points of view. Developmental Psychology, 30(1), 4–19.

Hahn, U., Harris, A. J. L., & Corner, A. (2009). Argument content and argument source: An exploration. Informal Logic, 29(4), 337–367.

Hahn, U., & Hornikx, J. (Eds.). (2012). Reasoning and Argumentation. A Special Issue of Thinking and Reasoning. London: Psychology Press.

Hahn, U., & Oaksford, M. (2007). The rationality of informal argumentation: A Bayesian approach to reasoning fallacies. Psychological Review, 114(3), 704–732.

Hahn, U., & Oaksford, M. (2008). A normative theory of argument strength. Informal Logic, 26(1), 1–24.

Hastie, R., & Kameda, T. (2005). The robust beauty of majority rules in group decisions. Psychological Review, 112(2), 494–508.

Hastie, R., Penrod, S., & Pennington, N. (1983). Inside the Jury. Cambridge: Harvard University Press.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61–83.

Herman, T., & Oswald, S. (Eds.). (2014). Rhetoric and Cognition. Theoretical Perspectives and Persuasive Strategies. Berne: Peter Lang.

Hinsz, V. B., Tindale, R. S., & Nagao, D. H. (2008). Accentuation of information processes and biases in group judgments integrating base-rate and case-specific information. Journal of Experimental Social Psychology, 44(1), 116–126.

Hoeken, H., Timmers, R., & Schellens, P. J. (2012). Arguing about desirable consequences: What constitutes a convincing argument? Thinking & Reasoning, 18(3), 394–416.

Hornikx, J., & Hahn, U. (2012). Reasoning and argumentation: Towards an integrated psychology of argumentation. Thinking & Reasoning, 18(3), 225–243.

Hutchins, E. (1980). Culture and Inference. Cambridge, Massachusetts: MIT Press.

Iordanou, K. (2013). Developing face-to-face argumentation skills: Does arguing on the computer help? Journal of Cognition and Development, 14(2), 292–320.

Isenberg, D. J. (1986). Group polarization: A critical review and meta-analysis. Journal of Personality and Social Psychology, 50(6), 1141–1151.

Koriat, A. (2012). When are two heads better than one and why? Science, 336(6079), 360–362.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.

Köymen, B., Rosenbaum, L., & Tomasello, M. (2014). Reasoning during joint decision-making by preschool peers. Cognitive Development, 32, 74–85.

Kuhn, D. (1991). The Skills of Argument. Cambridge: Cambridge University Press.

Kuhn, D., & Crowell, A. (2011). Dialogic argumentation as a vehicle for developing young adolescents' thinking. Psychological Science, 22(4), 545.

Laughlin, P. R. (2011). Group Problem Solving. Princeton: Princeton University Press.

Laughlin, P. R., Bonner, B. L., & Miner, A. G. (2002). Groups perform better than the best individuals on letters-to-numbers problems. Organizational Behavior and Human Decision Processes, 88, 605–620.

Laughlin, P. R., VanderStoep, S. W., & Hollingshead, A. B. (1991). Collective versus individual induction: Recognition of truth, rejection of error, and collective information processing. Journal of Personality and Social Psychology, 61, 50–67.

Levinson, S. C. (2006). On the human "interaction engine." In N. J. Enfield & S. C. Levinson (Eds.), Roots of Human Sociality (pp. 39–69). Oxford: Berg.

Lucas, E. J., & Ball, L. J. (2005). Think-aloud protocols and the selection task: Evidence for relevance effects and rationalisation processes. Thinking and Reasoning, 11, 35–66.

Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161–175.

Maillat, D., & Oswald, S. (Eds.). (2013). Biases and Constraints in Communication: Argumentation, Persuasion and Manipulation. Special issue of the Journal of Pragmatics. Amsterdam: Elsevier.

Maratsos, M. P. (2007). Commentary. Monographs of the Society for Research in Child Development, 72, 121–126.

Maynard Smith, J., & Harper, D. (2003). Animal Signals. Oxford: Oxford University Press.

Mellers, B., Ungar, L., Baron, J., Ramos, J., Gurcay, B., Fincher, K., … others. (2014). Psychological strategies for winning a geopolitical forecasting tournament. Psychological Science, 25(5), 1106–1115.

Mercier, H. (in press). Confirmation (or myside) bias. In R. Pohl (Ed.), Cognitive Illusions (2nd ed.). London: Psychology Press.

Mercier, H. (submitted). Reasoning and argumentation. In H. Callan (Ed.), International Encyclopedia of Anthropology. London: Wiley-Blackwell.

Mercier, H. (2011a). Reasoning serves argumentation in children. Cognitive Development, 26(3), 177–191.

Mercier, H. (2011b). What good is moral reasoning? Mind & Society, 10(2), 131–148.

Mercier, H. (2011c). When experts argue: Explaining the best and the worst of reasoning. Argumentation, 25(3), 313–327.

Mercier, H. (2013a). Our pigheaded core: How we became smarter to be influenced by other people. In B. Calcott, R. Joyce, & K. Sterelny (Eds.), Evolution, Cooperation, and Complexity. Cambridge: MIT Press.

Mercier, H. (Ed.). (2013b). Recording and explaining cultural differences in argumentation. Special issue of the Journal of Cognition and Culture (Vol. 13). Leiden: Brill.

Mercier, H., Bernard, S., & Clément, F. (2014). Early sensitivity to arguments: How preschoolers weight circular arguments. Journal of Experimental Child Psychology, 125, 102–109.

Mercier, H., Bonnier, P., & Trouche, E. (in press). Why don't people produce better arguments? In L. Macchi, M. Bagassi, & R. Viale (Eds.), The Language of Thought. Cambridge: MIT Press.

Mercier, H., Deguchi, M., Van der Henst, J.-B., & Yama, H. (in press). The benefits of argumentation are cross-culturally robust: The case of Japan. Thinking & Reasoning.

Mercier, H., & Heintz, C. (2014). Scientists' argumentative reasoning. Topoi, 33(2), 513–524.

Mercier, H., & Landemore, H. (2012). Reasoning is for arguing: Understanding the successes and failures of deliberation. Political Psychology, 33(2), 243–258.

Mercier, H., & Sperber, D. (2009). Intuitive and reflective inferences. In J. S. B. T. Evans & K. Frankish (Eds.), In Two Minds (pp. 149–170). New York: Oxford University Press.

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.

Mercier, H., Trouche, E., Yama, H., Heintz, C., & Girotto, V. (in press). Experts and laymen grossly underestimate the benefits of argumentation for reasoning. Thinking & Reasoning.

Minson, J. A., Liberman, V., & Ross, L. (2011). Two to Tango. Personality and Social Psychology Bulletin, 37(10), 1325–1338.

Moshman, D., & Geil, M. (1998). Collaborative reasoning: Evidence for collective rationality. Thinking and Reasoning, 4(3), 231–248.

Nakamura, H. (1964). Ways of Thinking of Eastern Peoples: India, China, Tibet, Japan. Hawaii: University of Hawaii Press.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220.

Perelman, C., & Olbrechts-Tyteca, L. (1958). The New Rhetoric: A Treatise on Argumentation. Notre Dame, IN: University of Notre Dame Press.

Perkins, D. N. (1989). Reasoning as it is and could be: An empirical perspective. In D. M. Topping, D. C. Crowell, & V. N. Kobayashi (Eds.), Thinking Across Cultures: The Third International Conference on Thinking (pp. 175–194). Hillsdale, NJ: Erlbaum.

Perret-Clermont, A.-N. (1980). Social Interaction and Cognitive Development in Children. London: Academic Press.

Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. T. Gilbert, S. Fiske, & G. Lindzey (Eds.), The Handbook of Social Psychology (pp. 323–390). Boston: McGraw-Hill.

Piaget, J. (1928). Judgment and Reasoning in the Child. London: Routledge and Kegan Paul.

Roberts, M. J., & Newton, E. J. (2001). Inspection times, the change task, and the rapid response selection task. Quarterly Journal of Experimental Psychology, 54, 1031–1048.

Ross, H., Smith, J., Spielmacher, C., & Recchia, H. (2004). Shading the truth: Self-serving biases in children's reports of sibling conflicts. Merrill-Palmer Quarterly, 50(1), 61–86.

Rowe, G., & Wright, G. (1996). The impact of task characteristics on the performance of structured group forecasting techniques. International Journal of Forecasting, 12(1), 73–89.

Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-based choice. Cognition, 49(1-2), 11–36.

Shapin, S. (1991). "The mind is its own place": Science and solitude in seventeenth-century England. Science in Context, 4(1), 191–218.

Simonson, I. (1989). Choice based on reasons: The case of attraction and compromise effects. The Journal of Consumer Research, 16(2), 158–174.

Slavin, R. E. (1995). Cooperative Learning: Theory, Research, and Practice (2nd ed.). London: Allyn and Bacon.

Sniezek, J. A., & Henry, R. A. (1989). Accuracy and confidence in group judgment. Organizational Behavior and Human Decision Processes, 43(1), 1–28.

Soll, J. B., & Larrick, R. P. (2009). Strategies for revising judgment: How (and how well) people use others' opinions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(3), 780–805.

Sperber, D., Cara, F., & Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31–95.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind and Language, 25(4), 359–393.

Stanovich, K. E. (2004). The Robot's Rebellion. Chicago: University of Chicago Press.

Sunstein, C. R. (2002). The law of group polarization. Journal of Political Philosophy, 10(2), 175–195.

Tesser, A. (1978). Self-generated attitude change. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (pp. 289–338). New York: Academic Press.

Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.

Thompson, D. V., & Norton, M. I. (2008). The social utility of feature creep. In A. Lee & D. Soman (Eds.), Advances in Consumer Research (pp. 181–184). Duluth, MN: Association for Consumer Research.

Thompson, V. A., Evans, J. S. B., & Campbell, J. I. (2013). Matching bias on the selection task: It's fast and feels good. Thinking & Reasoning, 19(3-4).

Thompson, V. A., Turner, J. A. P., & Pennycook, G. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63(3), 107–140.

Tizard, B., Hughes, M., Carmichael, H., & Pinkerton, G. (1983). Language and social class: Is verbal deprivation a myth? Journal of Child Psychology and Psychiatry, 24(4), 533–542.

Trouche, E., Johansson, P., Hall, L., & Mercier, H. (in press). The selective laziness of reasoning. Cognitive Science.

Trouche, E., Sander, E., & Mercier, H. (2014). Arguments, more than confidence, explain the good performance of reasoning groups. Journal of Experimental Psychology: General, 143(5), 1958–1971.

Vinokur, A., & Burnstein, E. (1978). Novel argumentation and attitude change: The case of polarization following group discussion. European Journal of Social Psychology, 8(3), 335–348.

Wason, P. C. (1966). Reasoning. In B. M. Foss (Ed.), New Horizons in Psychology: I (pp. 106–137). Harmondsworth, England: Penguin.

Yama, H. (2001). Matching versus optimal data selection in the Wason selection task. Thinking & Reasoning, 7(3), 295–311.
