Using evolutionary thinking to cut across disciplines: the example of the argumentative theory of reasoning

Hugo Mercier
Philosophy, Politics and Economics Program
University of Pennsylvania
313 Cohen Hall
249 South 36th Street
Philadelphia, PA 19104
[email protected]
http://sites.google.com/site/hugomercier/

Draft – not to be quoted

Prepared for: Zentall, T. & Crowley, P. (Eds.) Comparative Decision Making. Oxford University Press.

Psychology often has a strange way of dividing labor between its sub-fields. The very same psychological mechanisms can be studied by different academic groups that rely on different assumptions and reach opposite results. Moreover, different sub-fields are also apt to carve up the mind in different ways, which sometimes seem based more on introspection than on principled theory. As a result, communication between these sub-fields—to compare results, for instance—is bound to be fraught with difficulties.

Reasoning is a good case in point. If reasoning is defined loosely, as an effortful, conscious mechanism of reflection that allows the use of formal rules, then one soon realizes that it has been studied by the psychology of reasoning, but also under the umbrella of judgment and decision making, social psychology, moral psychology, cross-cultural psychology, developmental psychology, and doubtless several other disciplines. Yet there has been little cross-fertilization: the conclusions painstakingly reached in one field often fail to reach other disciplines.

Psychologists of reasoning have uncovered severe deficiencies: when confronted with logical tasks, people try to reason but often stumble and fail to solve even trivial problems. Yet reasoning can also allow people to solve the very same problems—if people are smart enough, if they have enough time (Evans, 2006) and, more importantly, if they have the right knowledge (Stanovich & West, 1999). Similar conclusions have been reached in the study of decision making, in which more than 30 years of 'heuristics and biases' research have unearthed a wealth of decision making mistakes (Gilovich, Griffin, & Kahneman, 2002). Yet here as well, reasoning is hailed as the solution to get people out of these cognitive traps (Kahneman, 2003).

The study of argumentation—as we will see, a crucial aspect of reasoning—seems to have reached even more conflicting conclusions: "Two facts about argumentation seem beyond dispute: (1) young children are good at it (Mercier, in press-a); and (2) adolescents and adults are bad at it (Kuhn, 2005, 2009)" (Moshman, 2011, p.X). Although I will later dispute point (2) (pace Moshman), the field is clearly ambivalent regarding reasoning's performance. Equally conflicting are the conclusions of social psychologists. While some join the psychologists of reasoning and decision making in seeing reasoning as a way to correct biases (Gilbert, Pelham, & Krull, 1988), others emphasize the detrimental effect it can have on our decisions (Dijksterhuis, 2004; Kunda, 1990; T. D. Wilson et al., 1993).

In recent years, moral psychology has gone from seeing reasoning in an overwhelmingly positive light (Kohlberg, 1987; Piaget, 1997) to a serious skepticism, stressing that reasoning is often used merely to provide post-hoc justifications for intuitive judgments (Haidt, 2001). Similarly, cross-cultural psychology went from casting reasoning as a nearly unequivocal good (Cole, 1971; Luria, 1934) to a more neutral position that sees different ways of reasoning as being equally successful, but at different tasks (Buchtel & Norenzayan, 2009).

Before trying to account for this bewildering pattern of performance, we must first make sure that the different disciplines are referring to the same cognitive mechanisms. Happily, many domains of psychology seem to be converging on dual process theories of the mind (for review, see Evans, 2008). Dual process models divide cognitive mechanisms into two categories. The bulk of these mechanisms belong to 'system 1', which is characterized as being automatic and relying on associations and heuristics. System 1 processes—or intuitions—tend to work quickly and to require little effort, as they do not usually rely on working memory (Evans, 2003). By contrast, system 2 mechanisms are said to be controlled and based on rules, but also slow and effortful, taxing working memory. They are usually thought to be conscious.

When the different disciplines mentioned above talk about reasoning, they refer to some system 2 mechanism. So the disagreements over reasoning's performance are unlikely to stem from a severe misunderstanding about what reasoning is. An alternative explanation is a disagreement over the function of reasoning. If one group of researchers thought that feet were designed for walking, while another defended the hypothesis that feet were designed to manipulate objects, they would reach opposite conclusions about the performance of feet. Yet this is unlikely to be the case for reasoning: there seems to be broad agreement that reasoning serves to better individual cognition. By correcting mistaken intuitions, reasoning is supposed to allow people to reach better beliefs and to make better decisions (Evans & Over, 1996; Kahneman, 2003; Stanovich, 2004). This can be called the classical view of reasoning.

Clearly, reasoning can do these things: people sometimes reason their way to better beliefs and better decisions. Yet the accumulated empirical literature demonstrates that reasoning does not always lead to such positive outcomes—far from it. Reasoning often fails to correct misguided intuitions (Denes-Raj & Epstein, 1994). Sometimes it even makes things worse, for instance by making people overconfident in their mistaken intuitions (Koriat, Lichtenstein, & Fischhoff, 1980). There is a major mismatch between what is commonly thought to be the function of reasoning and the performance of reasoning: reasoning does not do well what it is supposed to do.

One possibility would be that reasoning is simply a poorly designed mechanism. This hypothesis, however, faces two problems. The first is that psychologists studying intuitions keep coming up with demonstrations of their quasi-optimal behavior (e.g. Balci, Freestone, & Gallistel, 2009; Spellman, 1993; Trommershauser, Maloney, & Landy, 2008). Why would reasoning be such a flagrant exception? And how could such a flawed mechanism correct seemingly much more efficient intuitions? The second objection is that reasoning does not simply introduce random error, as would be expected of a shoddy mechanism. Reasoning misleads people in predictable ways, mostly by strengthening—instead of correcting—misguided intuitions. The regularities in reasoning's mistakes are a good sign that the mismatch between function and performance does not come simply from poor performance. Instead, we should reconsider the function of reasoning.

Psychologists often rely on their intuitions, or on naïve theories, to determine the function of psychological mechanisms. In some cases their intuitions are likely not to be too far from the correct answer. For instance, when Marr postulates that the main function of the visual system is to create a representation of the outside world, this is likely to be a reasonably good approximation of the actual function of the visual system (Marr, 1982). However, evolutionary psychologists have argued that it is preferable to rely on evolutionary theory to generate hypotheses about the function of cognitive devices (e.g. Barkow, Cosmides, & Tooby, 1992).

Following the lead of evolutionary psychologists, Sperber relied on evolutionary theory to assign reasoning another function. As outlined in section 1, he used the framework of the evolution of communication to suggest that the function of reasoning is argumentative: producing arguments to convince other people and checking other people's arguments (Sperber, 2000, 2001). On the strength of this premise it has been possible to account for the bewildering pattern of performance mentioned above. The goal of the present chapter is to briefly present this evidence and show how recruiting evolutionary thinking can help discern broad trends emerging from different disciplines. I will start by summarizing the evolutionary rationale for the argumentative theory of reasoning. Section 2 specifies what reasoning is thought to be in this theory. All the remaining sections—3 to 13—review the predictions of the argumentative theory regarding the performance of reasoning, drawing from many areas of psychology.

1. Argumentation and the evolution of reasoning

Within the primate order, humans rely on communication to an unprecedented extent. They derive enormous benefits from communication: information about food, about dangers, about other people, about techniques, etc. But communication also entails costs: people can be lied to, manipulated, misled. Yet, overall, communication has to be beneficial both to senders and receivers. The logic behind this conclusion is very simple: if senders do not benefit from communication, they stop sending (i.e. they evolve to stop sending). Likewise, if receivers do not benefit from communication, they stop receiving (Dawkins & Krebs, 1978; Krebs & Dawkins, 1984). Given the dangers faced by receivers, there must exist some mechanism to ensure that communication is mostly honest—since dishonest communication tends not to be beneficial. Different solutions can be found throughout the animal kingdom (Maynard Smith & Harper, 2003), but humans mostly rely on the filtering of communicated information by mechanisms of epistemic vigilance (Sperber et al., 2010). Two of the major mechanisms of epistemic vigilance are trust calibration and coherence checking. People do not trust others uniformly: they grant more weight to people they deem to be competent and benevolent. People also test the coherence of what they are told against their previous beliefs, and tend to reject incoherent communication. Yet both mechanisms reject too much information: when someone you don't trust enough tells you something that clashes with your beliefs, you are likely to reject the message, thereby missing out on some potentially valuable information. Sperber (2000, 2001) suggested that argumentation is a solution to this trust bottleneck. Senders can provide reasons to accept the messages they want to transmit. Receivers can then evaluate these reasons to determine whether they should accept the conclusion or not. As a result, more information passes between senders and receivers, making them both better off. Reasoning is the mechanism that evolved to allow people to find and evaluate arguments (Mercier & Sperber, 2009).
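To make the logic of this trust bottleneck concrete, here is a minimal simulation sketch. Every number in it (the rates of honesty, trust, coherence, and argument quality) is an illustrative assumption, not an estimate from the literature; the point is only that letting receivers evaluate reasons recovers true messages that trust calibration and coherence checking alone would have rejected, at a comparatively small cost in false acceptances.

```python
import random

random.seed(1)

def simulate(n_messages=10_000, use_arguments=False):
    """Toy model of epistemic vigilance with and without argument evaluation.
    All probabilities below are illustrative assumptions."""
    accepted_true = accepted_false = 0
    for _ in range(n_messages):
        honest = random.random() < 0.8    # assume most senders are honest
        trusted = random.random() < 0.5   # receiver trusts half the senders
        # Honest messages are assumed to cohere better with prior beliefs.
        coherent = random.random() < (0.7 if honest else 0.3)
        # Vigilance alone: accept only trusted-and-coherent messages.
        accept = trusted and coherent
        # Argumentation: a rejected message can still get through if the
        # sender backs it with an argument that survives evaluation
        # (good arguments are assumed easier to find for true messages).
        if not accept and use_arguments:
            accept = random.random() < (0.6 if honest else 0.1)
        if accept and honest:
            accepted_true += 1
        elif accept:
            accepted_false += 1
    return accepted_true, accepted_false

for use_arguments in (False, True):
    true_in, false_in = simulate(use_arguments=use_arguments)
    print(f"arguments={use_arguments}: true accepted={true_in}, false accepted={false_in}")
```

Under these particular assumptions, argument evaluation roughly doubles the amount of true information that gets through while the number of false messages accepted stays low, which is the sense in which both senders and receivers end up better off.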

The hypothesis that reasoning has an argumentative function—and therefore a social function—fits very well with current thought about the crucial role of the social environment in human evolution (R. W. Byrne & Whiten, 1988; R. I. M. Dunbar, 1996; Hrdy, 2009; Humphrey, 1976; Tomasello, Carpenter, Call, Behne, & Moll, 2005; Whiten & R. W. Byrne, 1997). In particular, cooperation may have been the main driver behind human evolution (Dubreuil, 2010; Sterelny, In press), and high levels of cooperation require extensive communication.

Others have suggested social functions for reasoning. Some of these suggestions are close to the proposal defended here (Billig, 1996; Gibbard, 1990; Haidt, 2001). Another hypothesis has been that argumentation evolved to show off one's reasoning skills (Dessalles, 2007; Frankish, 2011). Undoubtedly, argumentation is sometimes put to such use, but the same is true of just about any skill, from language in general to athletic performance. However, such displays are relevant only because the skills displayed have other uses, and so we are back to the question of the original function of reasoning (Mercier & Sperber, 2011a). A different suggestion is that reasoning may have a dialogic structure, but that its function is still to better individual cognition (Godfrey-Smith & Yegnashankaran, 2011). However, the pattern of performance mentioned above is also incompatible with this perspective, since it predicts that reasoning works for the betterment of individual cognition. Finally, it has also been suggested that reasoning for individual ends is merely an exaptation of reasoning for social ends (Evans, 2011; Frankish, 2011). In several of the following sections, I will argue that some of the features of reasoning are not compatible with such a suggestion: they serve an argumentative function well, but an individualistic function poorly.

2. Intuitive and reflective inferences

How would a mechanism designed to produce and evaluate arguments work? First of all, it has to be metarepresentational: arguments are representations, not other objects in the world. This gives reasoning, as it does other metarepresentational mechanisms, an appearance of generality. For instance, using mentalizing—a metarepresentational skill—one can attribute just about any thought one can entertain to anyone else. Similarly, reasoning can process arguments about Camembert and quarks, sometimes in one stride (Fodor, 1983). But reasoning is still a specialized, modular mechanism. It serves a specific function: finding and evaluating reasons. It accepts a specific type of input: representations (as opposed to, say, perceptual information). And it performs a specific operation on these representations: gauging the degree of support of one for another. Given that the current proposal was inspired by an evolutionary argument, it is fitting that it agrees with the evolutionarily inspired notion that the mind is massively modular (Sperber, 1994).

In order to perform its function, reasoning must rely on a set of intuitions, intuitions about what a good reason is. For instance, when we read Descartes' "I think therefore I am," we think it is a good argument. Yet we do not know why we make such an evaluation: it is an intuitive judgment, just as when we deem someone to be trustworthy upon first meeting them.

These two traits distinguish the argumentative theory of reasoning from other dual process theories. First, it stresses the intuitive side of reasoning. When reasoning is used in a dialogical context, it can be fast and effortless, relying on intuitions to which we have no introspective access. Second, it characterizes reasoning in a more specific fashion than most dual process models. It is probable that what is usually referred to as system 2 in fact comprises several mechanisms: reasoning, but also parts of planning, imagination, consequential thinking and possibly others (Mercier & Sperber, 2011a).

For the time being, an algorithmic description of reasoning following from the argumentative theory has not been fully specified (for a first stab, see Mercier, submitted), but there is no a priori reason why it could not be made compatible with any of the existing theories of reasoning (Johnson-Laird, 2006; Oaksford & Chater, 2001; Rips, 1994). The remainder of the chapter reviews the predictions made by the argumentative theory about reasoning's performance and some of its most striking features.

3. Argumentation skills

The most straightforward prediction of the argumentative theory is that reasoning should be good at doing what it evolved to do, namely finding and evaluating arguments in dialogical contexts. This prediction does not distinguish the argumentative theory from other hypotheses about the function of reasoning, since they could make the same prediction. However, it is necessary to test it: many scholars have questioned the value of argumentative skills and, if their doubts were well founded, the current theory would be falsified.

Argument production is often thought to suffer from two main flaws: superficiality and confirmation bias. People are said to produce arguments that "make superficial sense" (Perkins, 1985, p.568) and that only support their point of view (Kuhn, 1991). The question of bias is addressed in the next section. As for the apparent superficiality of arguments, it is in fact not incompatible with the predictions of the current theory, for two reasons. The first is that reasoning should not be expected to invest a lot of time and energy in finding foolproof arguments. In a discussion, one can try several times before convincing one's interlocutor: there is no penalty for not starting out with the best argument. On the contrary, the interlocutor can make the work of reasoning easier by explaining the source of her disagreement, thereby allowing the speaker to suggest more appropriate arguments. As a result, we should expect an improvement in argument quality as the discussion progresses, which leads us to the second reason why people's arguments are often deemed by psychologists to be superficial. In most experimental settings there is no interaction: participants are not given the opportunity to progressively refine their arguments. Psychologists thus study almost exclusively what are likely to be the least refined products of reasoning. When the interaction is left to run its course, "participants . . . appear to build complex arguments and attack structure. People appear to be capable of recognizing these structures and of effectively attacking their individual components as well as the argument as a whole" (Resnick, Salmon, Zeitz, Wathen, & Holowchak, 1993, pp. 362–63). Kuhn and her colleagues have observed similar improvements in reasoning as the result of sustained debate (Kuhn & Crowell, 2011; Kuhn, Shaw, & Felton, 1997).

Turning to argument evaluation, many people claim that it is also affected by the confirmation bias: people are said to spontaneously discount arguments whose conclusion clashes with their previous views (e.g. Klaczynski, 2000). However, the explanation of the confirmation bias to be offered in the next section does not apply to argument evaluation, which should be substantially more objective. The problem is that it is very hard, in a typical experimental setting, to disentangle argument evaluation from argument production. Except in formal domains, arguments are hardly ever conclusive: conviction is more often the outcome of a sustained debate than of a single argument. When a participant is confronted with an argument and asked to evaluate it, she may evaluate it relatively objectively but, since she will not have been entirely convinced, she will then engage in a search for counterarguments. This search for counterarguments will display a confirmation bias that is bound to taint the evaluation of the argument. The best way to study argument evaluation is in the context of a dialogue: if argument evaluation were as biased as some suggest, people would resist almost any attempt at changing their minds. Yet, as will be shown in section 7, people often change their minds when they argue with others. Finally, it should be noted that, despite these methodological shortcomings, it is possible to show that when participants are motivated they grant more weight to strong arguments than to weak ones—although they may still have an overall bias towards their own view (Petty & Wegener, 1998).1

Footnote 1: I can only scratch the surface of the issue here; the interested reader is referred to the debate between Mercier and Sperber (2011a, 2011b) and Harrell (2011), Kuhn (2011) and Wolfe (2011). See also, on the role of learning for argumentation skills, Mercier (in press).

 

4. The confirmation bias

When people are trying to convince someone else, they are mostly interested in arguments supporting their position and going against that of their interlocutor. If one of the functions of reasoning is to produce arguments in such contexts, we should expect reasoning to display a confirmation bias, which consists in "seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand" (Nickerson, 1998, p. 175). In other words, the confirmation bias is a feature of reasoning, not a flaw.

Alternative explanations for the confirmation bias are difficult to sustain. The confirmation bias is not only observed in emotionally charged situations, but also in abstract reasoning tasks (e.g. Evans, 1996). It does not result from a lack of effort: asking people to be more objective (Lord, Lepper, & Preston, 1984), or paying them to reach the correct answer (Johnson-Laird & R. M. J. Byrne, 2002), has little effect. More importantly, the confirmation bias does not reflect a lack of ability, or the intrinsic difficulty of falsification. When people are confronted with statements they disagree with, they very easily find ways to falsify them—thereby confirming their own initial hunch (Cowley & R. M. J. Byrne, 2005; Dawson, Gilovich, & Regan, 2002; Sacco & Bucciarelli, 2008).

The argumentative theory also makes two interesting predictions about the confirmation bias. First, it should mostly be observed in the production of arguments and not in their evaluation. The presence of the confirmation bias in argument production is established beyond reasonable doubt. Its absence or, at any rate, strong attenuation in argument evaluation is suggested by the good results of group reasoning reviewed in section 7. The second prediction is that the confirmation bias should only affect reasoning, and not other cognitive processes. For instance, a predator detection mechanism that had a systematic tendency to confirm early judgments—even judgments that there are no predators around—would clearly be detrimental to fitness. In line with this idea, several tasks that used to be explained in terms of confirmation bias are now being described as resulting from sound intuitive heuristics. The 2, 4, 6 task was thought to demonstrate a confirmation bias in hypothesis testing (Wason, 1960), whereas in fact it reflects the operation of a sound positive testing heuristic (Klayman & Ha, 1987). Likewise, the failure of most participants to solve the Wason selection task was pinned on difficulties grasping falsification (Wason, 1966), whereas it merely reflects the operation of intuitive pragmatic mechanisms (Sperber, Cara, & Girotto, 1995). In both cases, it is reasoning that displays a confirmation bias: whatever hunch the intuitions provide, reasoning fails to check it, instead finding arguments in its support (Poletiek, 1996; Roberts & Newton, 2001).
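Klayman and Ha's point about the 2, 4, 6 task can be made concrete in a few lines of code. The sketch below is my own illustration, not a model from their paper: it encodes the typical participant hypothesis, 'each number increases by 2', alongside Wason's actual rule, 'any ascending triple'. Because the hypothesis picks out a subset of the rule, every positive test is confirmed, so a tester who only probes triples fitting her hypothesis can never discover that the rule is broader.

```python
def true_rule(t):
    """Wason's hidden rule: any strictly ascending triple."""
    return t[0] < t[1] < t[2]

def hypothesis(t):
    """The typical participant hypothesis: 'increasing by 2'."""
    return t[1] - t[0] == 2 and t[2] - t[1] == 2

positive_tests = [(1, 3, 5), (10, 12, 14), (100, 102, 104)]  # fit the hypothesis
other_tests = [(1, 2, 3), (5, 4, 3), (2, 2, 2)]              # violate it

for t in positive_tests + other_tests:
    # Positive tests always come back 'yes', never falsifying the hypothesis;
    # only triples outside the hypothesis, like (1, 2, 3), can reveal that
    # the hidden rule is more general.
    print(t, "fits hypothesis:", hypothesis(t), "fits rule:", true_rule(t))
```

Positive testing is thus a generally sensible heuristic that happens to fail here because the hidden rule is more general than the hypothesis, exactly the situation Klayman and Ha identified.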

The prevalence and robustness of the confirmation bias are thorns in the side of the classical view of reasoning: why would reasoning be endowed with a feature that systematically leads to epistemic distortions? The problem is compounded by the extent of the damage that can be wreaked by the confirmation bias. It should be stressed, however, that the confirmation bias does not have to have negative epistemic consequences. In the proper group setting, the bias can become a form of division of cognitive labor, with each participant exploring the pros of her own ideas and the cons of the others', rather than each having to exhaustively research the pros and cons of every possibility. This explains in part the good performance of reasoning in groups reviewed in section 7.

5. Motivated reasoning

People often anticipate potential disagreements by reasoning alone to find arguments defending their decisions or beliefs. When they do so, the confirmation bias is given free rein, as people are unlikely to critically evaluate their own arguments. As a result, people simply end up accumulating arguments supporting their original intuition. The outcomes of this process have been documented in many experiments. A first consequence is belief polarization. When participants are left to reason about an attitude object, their attitudes become stronger in the direction of their initial hunch (e.g. Tesser, 1978). A second consequence is overconfidence. When people reason about their answers to general knowledge tests, they are apt to mostly find arguments supporting their initial intuition, making them unduly confident (Koriat et al., 1980).

Another consequence of reasoning alone is belief perseverance. When people start reasoning about a belief they have formed, they create a scaffold of arguments around it. If the initial motivation for the formation of the belief is shown to be erroneous, the scaffold allows people to hang on to discredited beliefs (e.g. Guenther & Alicke, 2008; Ross, Lepper, & Hubbard, 1975).

All of these effects can be put under the general umbrella of motivated reasoning (see, for review, Kunda, 1990; Mercier & Sperber, 2011b). Some scholars have argued that motivated reasoning is but a special type of reasoning, which could be opposed to a more objective type of reasoning (Kruglanski & Freund, 1983; Kunda, 1990). It is true that certain factors can restrain the power of the confirmation bias for lone reasoners. However, the most efficient way to attenuate motivated reasoning seems to be accountability (see for instance the studies listed in support of the existence of objective reasoning in Kunda, 1990). In specific conditions, having to justify one's actions to an audience can make people anticipate potential counterarguments and impose higher criteria on the arguments they generate. As a result, they may find themselves unable to satisfactorily defend their initial intuition and thus change their mind—often for the better, but not always (Lerner & Tetlock, 1999).

The effects of accountability are compatible with the present view. The argumentative theory does not suggest that the lone reasoner engages in wishful thinking. On the contrary, the goal of internal argumentation is to make sure that some minimally decent arguments are available to defend our beliefs or actions. When participants find themselves unable to produce such arguments, they do change their mind. Many experiments have demonstrated that participants can be made more likely to engage in questionable behaviors when excuses are provided—when, for instance, the existence of free will is questioned (Vohs & Schooler, 2008). Such results can only be obtained if reasoning, in the control condition, is unable to find an excuse: otherwise participants would behave equally immorally in all conditions. Accountability does not radically change the operation of reasoning; it simply raises the criteria used to decide what counts as a good argument.

6. Reasoning and decision making

Because of its confirmation bias, reasoning can lead to poor epistemic and practical consequences. But maybe reasoning would be able to drive people towards better decisions when they have weak intuitions? Unfortunately, that does not seem to be the case. When participants are faced with choices for which they lack strong intuitions, more reasoning often leads to worse decisions. The choices made after reasoning can be objectively worse (Dijksterhuis, 2004), less in line with expert judgment (T. D. Wilson & Schooler, 1991), or less satisfying for the participants themselves (T. D. Wilson et al., 1993).

The argumentative theory is in a good position to explain this poor performance of reasoning. When intuitions are weak, reasoning cannot simply find arguments supporting a preexisting hunch. Instead, it looks for arguments that could support different intuitions. As a result, the intuition that is easiest to justify ends up being chosen. Being easy to justify, however, does not correlate exactly with being the best option.

Within the field of decision making, an important strand of research has examined reason-based choice: "the idea that individual choice behavior under preference uncertainty [i.e. when intuitions are weak] can be better understood when seen as based on the available reasons or justifications for and against each alternative" (Simonson, 1989, p.158). Reason-based choice fits the predictions of the argumentative theory. To provide but one example, the introduction of a strictly dominated alternative in a choice set can change people's choices (the attraction effect, see Huber, Payne, & Puto, 1982). If participants are ambivalent between a cheap, mediocre beer (A) and an expensive, good one (B), the introduction of a beer that is no better than B but more expensive (C) will make people pick B, because it becomes the easiest choice to justify (Simonson, 1989).
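A toy model can make the mechanism explicit. The sketch below is only an illustration under assumed attribute values, not a model from Simonson's paper: 'ease of justification' is crudely proxied by the number of rivals an option beats outright on every attribute, and adding the decoy C hands B the only such reason in the set.

```python
def dominates(x, y):
    """x dominates y: at least as cheap, at least as good, and not identical."""
    return x != y and x["price"] <= y["price"] and x["quality"] >= y["quality"]

def justification_scores(options):
    """One crude 'reason' per rival that an option dominates outright."""
    return {name: sum(dominates(opt, other)
                      for other_name, other in options.items()
                      if other_name != name)
            for name, opt in options.items()}

beers = {
    "A": {"price": 2, "quality": 4},  # cheap, mediocre
    "B": {"price": 5, "quality": 8},  # expensive, good
    "C": {"price": 7, "quality": 8},  # decoy: no better than B, more expensive
}

print(justification_scores({k: beers[k] for k in "AB"}))  # {'A': 0, 'B': 0}
print(justification_scores(beers))                        # {'A': 0, 'B': 1, 'C': 0}
```

With A and B alone, neither option dominates the other and reasoning finds no decisive justification for either choice; once C is added, B dominates C and becomes the easiest choice to justify.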

Many poor decisions can be explained—in whole or in part—as reason-based choices (for a short review, see Mercier & Sperber, 2011b, sections 5.2 and 5.3). It should be stressed, however, that the research on reason-based choice has focused on its negative consequences and that it can also lead to positive outcomes.

First, there is often an overlap between being easy to justify and being a good decision. For instance, when a student tries to solve a mathematics or a physics problem, the answer that is easiest to justify is likely to be the best—at least if the student has been taught the correct principles of mathematics and physics. This proviso is crucial: the efficiency of reasoning in cases of weak intuitions does not depend so much on the ability of the reasoner as on the cultural knowledge she has accumulated. The second positive consequence of reason-based choice is social. By picking the option that is most easily justified, decision makers may be better judged by others. For instance, reasoning leads people to choose unsatisfying electronic gadgets laden with useless features (Thompson, Hamilton, & Rust, 2005). Yet when someone chooses the more complex gadget she is likely to be perceived—ironically—as more technology-savvy (Thompson & Norton, 2008).

7. Good performance in groups

The picture painted so far is bleak. When people reason on their own, reasoning often leads them astray, either through the confirmation bias or through reason-based choice. But the argumentative theory also predicts that reasoning should lead to good outcomes when it is used in its normal conditions. The normal conditions for reasoning—the conditions for which it evolved (Millikan, 1987)—are those of deliberation (Mercier & Landemore, in press). Deliberation can be understood here as an exchange of arguments between at least two people who disagree about something but also seek a satisfying solution: figuring out who is right. In these conditions reasoning does work well. A good example is the Wason selection task, which generally elicits only about 10% correct answers despite being logically trivial. When participants have to solve the task in groups, their performance improves dramatically, reaching for instance 80% in one study (Moshman & Geil, 1998). For logical or mathematical tasks, it is observed that 'truth wins': as long as one participant has understood the problem, she is nearly always able to convince everybody that her answer is correct (Laughlin & A. L. Ellis, 1986). As a result, groups vastly outperform individuals.
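The 'truth wins' scheme has a simple quantitative consequence: if group members solved the task independently, the group would be correct whenever at least one member is. The back-of-the-envelope computation below (which assumes independence, something real groups need not satisfy) gives this baseline:

```python
# Truth-wins baseline: P(group correct) = 1 - (1 - p) ** n,
# where p is the individual solution rate and n the group size.
p = 0.10  # roughly the individual rate on the Wason selection task (see text)
for n in (2, 3, 5, 10):
    print(f"group of {n}: {1 - (1 - p) ** n:.0%}")
```

With p = 0.10, a five-person group reaches about 41% under this baseline; the 80% reported by Moshman and Geil (1998) exceeds it, in line with the assembly bonus effect discussed below.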

According to the argumentative theory, groups are able to reach better outcomes than individuals because in deliberative settings arguments can be thoroughly evaluated. The confirmation bias of each participant is held in check by other participants who do not share her point of view. Poor arguments are rejected and good ones carry the day, leading the group to the best answer.

The increase in performance in group settings could have other explanations, however. Groups could simply motivate people to put more effort into the task. This is not the case: for most tasks, group performance is below expectations, in large part because group members put in less effort than when they perform as individuals (e.g. Hill, 1982). Another explanation could be that group members simply pick out the smartest (Oaksford, Chater, & Grainger, 1999) or the most confident (Opfer & Sloutsky, 2011) individuals and follow their lead. However, transcripts demonstrate that a substantial amount of discussion is necessary to reach an agreed-upon solution: the individual with the correct answer has to convince everyone, and simply stating the answer has little effect (e.g. Trognon, 1993). Moreover, groups can converge on a good answer that no member had thought of prior to the discussion, giving rise to the assembly bonus effect, in which groups outperform their best members (e.g. Laughlin, Bonner, & Miner, 2002; Michaelsen, Watson, & Black, 1989; Sniezek & Henry, 1989).

The very good results obtained for logical and mathematical tasks can also be obtained—if perhaps in attenuated form—in other types of tasks, such as inductive problems (e.g. Laughlin, VanderStoep, & Hollingshead, 1991). More importantly, they are also observed outside of the laboratory: in politics, law, science, business (see section 13) and schools (see section 11).

8. Poor performance in groups

The argumentative theory does not predict that reasoning always leads to felicitous outcomes in groups. Groups can violate the normal conditions for the use of reasoning just as much as individual reasoners do. If everybody agrees to start with—or if dissenting voices are not given any credit—arguments will not be critically examined by the group. Instead, a scaled-up version of individual polarization is likely to take place. Group members are apt to generate different arguments to support the consensual view, arguments that will only strengthen every group member's convictions. Thus the argumentative theory can explain group polarization—although other psychological mechanisms, such as conformity, are also likely to play a role (Isenberg, 1986).

It should be stressed that group polarization is not a 'law'—contra Sunstein (2002). It mostly occurs in very specific circumstances: when a group argues over a topic its members already agree about—when there is disagreement, depolarization is more often observed (e.g., Vinokur & Burnstein, 1978). Reasoning is typically triggered by a disagreement, real or anticipated: people who agree with each other have little reason to start arguing. Debate between like-minded people is thus somewhat artificial. For instance, when a jury agrees that a defendant is guilty but must debate and justify the amount of the fine, the deliberation leads to polarization, with increased fines (Schkade, Sunstein, & Kahneman, 2000). The deliberation is forced on the jury by institutional constraints, and voting might have been more efficient.

9. Moral psychology

The strongest echoes of the argumentative theory are found in the domain of moral psychology. Like many other branches of psychology, the study of moral judgments and decisions is now dominated by dual process models. These models concur in positing that intuitions and emotions are the main drivers of moral judgments, but they differ in the exact weight granted to reasoning: relatively small in Haidt's view (Haidt, 2001), more important for Greene and his collaborators (e.g. Paxton & Greene, 2010). In line with Haidt's view, the argumentative theory claims that most individual reasoning is the post-hoc rationalization of preexisting intuitions. But it also suggests, as does Greene (e.g. Paxton, Ungar, & Greene, in press), that moral reasoning can change even deep-seated moral intuitions. Finally, it specifies that reasoning is more likely to influence moral intuitions in the context of a discussion than in private reasoning.

That reasoning is often used to rationalize and justify pre-existing intuitions has been demonstrated in many experiments (e.g. Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Chance & Norton, 2008; Uhlmann, Pizarro, Tannenbaum, & Ditto, 2009). In a direct demonstration of the effects of reasoning, participants who could not reason—because of an increased cognitive load—delivered fairer judgments, as they could not find justifications for unfairly favoring their own position (Valdesolo & DeSteno, 2008).

The results of group reasoning are harder to interpret. Importantly, the prediction of the argumentative theory is not that reasoning should consistently lead groups towards more moral decisions. Instead, reasoning should, in the right conditions, lead groups to better decisions, whether these are more or less moral. For instance, when groups are confronted with economic games, they behave more in line with the predictions of game theory, but they are not more altruistic (e.g. Bornstein & Yaniv, 1998). Still, it can be argued that group reasoning often leads to sounder moral judgments—moral judgments that are more likely, for instance, to adequately track future behavior (see Mercier, in press-b).

On the wider scale of societal change, several authors have argued that narratives play a more important role than arguments in bringing about moral change (Bloom, 2010; Haidt & F. Bjorklund, 2007). The argumentative theory predicts that reasoning is at its best in interactive dialogues, not in unidirectional public speeches, so it may not be surprising that narratives and appeals to emotion are more frequent in the latter. However, it can also be argued that 'everyday talk', in which argumentation can function effectively, is a crucial driver of large-scale moral change (e.g. Mansbridge, 1999).

10. Cross-cultural psychology

Psychologists ought to be wary when they claim to be studying universal cognitive mechanisms. Because the majority of psychological experiments are conducted within WEIRD (Western, Educated, Industrialized, Rich, Democratic) countries—more specifically on American undergraduates—it may not be warranted to extend their conclusions to other cultural groups (Henrich, Heine, & Norenzayan, 2010). Like most evolutionary theories, the argumentative theory makes predictions that should apply to nearly all (non-pathological) humans.2 In the case of argumentation, some may suspect that its practice developed only in, or at least is mainly relied upon by, modern Western populations. Maybe other cultural groups rely more on solitary reasoning and are better at it? The two main groups that have been suspected of being endowed with reasoning abilities differing from those of Westerners are illiterate populations and Eastern cultures, which I examine in turn (the more extensive argument is in Mercier, 2011).

Footnote 2: The 'nearly' caveat is necessary, as otherwise a single cultural group could disprove an evolutionary theory (on 'quasi universals', see Norenzayan & Heine, 2005). Just as monks vowing chastity does not mean that humans lack an evolved sex drive, monks vowing silence does not mean that they lack an evolved capacity to communicate, and to argue.

There is a sadly rich history, in early anthropology and cultural psychology, of denying members of illiterate populations the ability to reason in the sense discussed here. For instance, Luria and his colleagues carried out experiments trying to demonstrate that illiterate Russian peasants could not solve even trivial syllogisms such as "In the Far North, where there is snow, all bears are white. Novaya Zemlya is in the Far North. What color are bears there?" (Luria, 1976, p. 107). While the performance was indeed abysmal, the diagnosis was mistaken: in different circumstances, similar populations can solve these problems. In particular, explicitly setting the problem in a hypothetical world substantially improves performance (Dias, Roazzi, & Harris, 2005). Anecdotally, Cole and his colleagues (Cole, Gay, Glick, & Sharp, 1971) observed in the African population they were studying that "when engaged in group discussion, there was no difficulty in responding to such oral syllogisms" (p.186). As far as we can ascertain, illiterate populations exhibit patterns of reasoning similar to those observed among WEIRD people, including the boost offered by reasoning in groups, even on abstract tasks.

Other historians and anthropologists have claimed that there was a "lack of argumentation and debate in the Far-East" (Becker, 1986; see also Morrison, 1972; Nakamura, 1964). If, for lack of taste or ability for argumentation, a group as large as the East-Asian peoples failed to regularly engage in argumentation, this would surely be a deadly blow to the theory. This is not the case. Contrary to what has been suggested, East-Asian languages are perfectly able to express logical relationships (Harbsmeier, 1998). Many strands of East-Asian culture disparage argumentation, yet this is not true of all of them—indeed, there is a rich tradition of Chinese rhetoric (e.g. Lloyd, 2007). And even those who pleaded against argumentation did so with forceful arguments (e.g. Hansen, 1992). Finally, many results in cross-cultural psychology show that Easterners and Westerners can sometimes exhibit different patterns of thought (Nisbett, Peng, Choi, & Norenzayan, 2001). Importantly, these results do not demonstrate differences in competence, only in the proclivity to rely on different cognitive mechanisms (e.g. Norenzayan, Smith, Kim, & Nisbett, 2002). Moreover, the majority of these experiments are conducted in individual contexts, and the argumentative theory does not predict that reasoning has to be activated in such contexts. It is therefore not surprising to observe cross-cultural (or inter-individual) variation in the tendency to rely on reasoning in individual contexts.

11. Developmental data

Properties of reasoning relevant for the present purposes—those that are predicted by the argumentative theory—seem to be observable across all cultures. However, it could still be that people the world over learn to reason in such a way. Reasoning could start as a mostly individual process, only to be co-opted during development to serve more argumentative purposes. Indeed, several authors have criticized the argumentative theory for its lack of attention to development (Kuhn, 2011; Moshman, 2011; Narvaez, 2011). It is true that an integrated theory that would be both evolutionary and developmental remains to be spelled out. Still, it is possible to search among the patterns of reasoning in children for similarities with what can be observed in adults.

First, it should be stressed that children display argumentative skills from very early on. As soon as toddlers start to form sentences, around 24 months of age, they produce justifications and arguments (e.g. Dunn & Munn, 1987; Kuczynski, Kochanska, Radke-Yarrow, & Girnius-Brown, 1987). Children also display a confirmation bias when they form arguments (e.g. Stein & Albro, 2001).

More importantly, children are also able to reap the benefits of reasoning in groups. Indeed, some of the strongest evidence demonstrating the efficiency of reasoning in appropriate group settings comes from developmental and educational research. The effects of group reasoning on children have mostly been studied within two traditions. The first is a neo-Piagetian research program that emphasizes the value of socio-cognitive conflict. This research has demonstrated that when children have to discuss a cognitive task with a peer, they generally outperform children solving the same task individually (e.g. Doise & Mugny, 1984; Perret-Clermont, 1980). The second tradition is that of collaborative learning, which has focused on gathering data from long-term projects in school settings. In the experimental condition, students solve a variety of problems in groups rather than individually. Reviewing the relevant literature, Slavin remarked that "research on cooperative learning is one of the greatest success stories in the history of educational research" (Slavin, 1996, p. 43).

Interestingly, the argumentative theory also explains why children can outperform adults. As noted earlier, reasoning sometimes leads to worse decisions—mostly because of motivated reasoning or reason-based choice. To the extent that children reason less in situations in which reasoning is not especially warranted, it is only to be expected that they should avoid some mistakes. Thus, children are less likely to commit the sunk cost fallacy (Morsanyi & Handley, 2008), they discount irrelevant information more easily (Klaczynski, submitted), and they are less sensitive to some framing effects (Reyna & S. C. Ellis, 1994). The overall similarity in the patterns of reasoning exhibited by children and adults is quite striking, and reinforces the claims of the argumentative theory (see, for an extensive review, Mercier, in press-a).

12. Expertise

A strong argument against the classical view of reasoning stems from the poor performance of reasoning in individual reasoning tasks. It could be argued, however, that this poor performance merely reflects a lack of expertise. Maybe some people—experts—are better able to make use of reasoning and counteract its flaws and biases? By and large, that does not seem to be the case. Reasoning has the same traits in experts and laypeople (for review, see Mercier, in press-c). The main difference may be in the quantity of arguments people can muster. The problem, however, is that if experts are as biased as laypeople, this trove of arguments is liable to only strengthen the effects of the confirmation bias. Thus, people who are more knowledgeable about a topic tend to polarize more when they are left to reason on their own (Tesser & Leone, 1977). When more knowledgeable people are asked to list thoughts on a political issue, not only do they list more arguments supporting their side, they also list fewer arguments going in the other direction: a crowding out of arguments amplifies the confirmation bias (Taber & Lodge, 2006). Tetlock (2005) has also observed that the extra arguments experts are able to muster can make them more overconfident.

Happily, experts also benefit from reasoning in groups. Beyond experimental results (see for instance Lombardelli, Proudman, & Talbot, 2005), group discussions are central in business, science, law and politics, as explained in the next section.

However, the same conditions apply to groups of experts as to groups of laypeople: debating with like-minded people can be dangerous. Indeed, the more knowledgeable the group members, the stronger the polarization in a group of like-minded peers (Vinokur & Burnstein, 1974).

13. Outside the laboratory

There is a large literature in experimental psychology supporting the claims of the argumentative theory. However, some may question the ecological validity of these findings: maybe things work differently outside the laboratory. In particular, some may question the good performance of reasoning in groups: maybe external factors stop groups from performing well outside the laboratory? For instance, in the case of science, people often picture science as driven by lone geniuses rather than committees (Shapin, 1991). Yet ethnographic studies have identified the lab meeting as the most crucial environment for fashioning theories and experiments (K. Dunbar, 1995). Groups are used to reach felicitous decisions in many domains outside of science. Corporations rely on teams at all levels, "from the shop floor to the executive suite" (Bainbridge, 2002; Cohen & Bailey, 1997). Indeed, it could be argued that in these more naturalistic settings groups are even apt to perform better than in the lab, as people get to know each other and to acknowledge each other's strengths and weaknesses (Michaelsen et al., 1989).

Group reasoning is also central in the judicial process. The adversarial system can be seen as a form of group reasoning that tries to make the best of the confirmation bias (see Van Koppen & Penrod, 2003). More straightforwardly, juries are asked to deliberate, not to vote, and deliberation can play an important role, allowing a verdict initially defended by a minority of jurors to become the final verdict of the jury (Kalven & Zeisel, 1966; Sandys & Dillehay, 1995). For lack of a benchmark, it is difficult to gauge the efficiency of jury reasoning, but some studies have shown that the verdict can track relevant properties of the case being judged (Sloan & Hsieh, 1990).

 

In politics, reasoning in groups is often seen through the prism of presidential or congressional debates. There are reasons why such debates may not lead to optimal outcomes: the participants are not so much interacting with one another as addressing a wider audience (Mercier, in press-c). By contrast, deliberative democracy stresses participation in debates by every citizen. This movement, which has become a major force in political science, was originally mostly a theoretical exercise regarding the best way to form opinions (e.g. Elster, 1998; Habermas, 1987). But hundreds of field experiments have now been conducted demonstrating the potential of deliberation among citizens. When citizens are brought together to deliberate, for instance on policy issues, they often end up with more informed beliefs, more convincing conclusions and, where relevant, more compelling policy proposals (e.g. Barabas, 2000; Fishkin & Luskin, 2005; Gastil & Dillard, 1999). The groups also often homogenize, bringing both sides of the political spectrum closer together (Luskin, Fishkin, & Jowell, 2002).

I have stressed here the positive effects of reasoning in groups outside the laboratory, because they may be more surprising than the pitfalls of reasoning. However, the pitfalls are also well attested, from egregious motivated reasoning in the judicial system (Braman, 2009) to group polarization and groupthink among like-minded politicians (Janis, 1982).

14/ Conclusion

The present chapter began by noticing how different sub-fields of psychology can reach results that are at odds with each other. Reasoning is a case in point: across (and sometimes even within) disciplines, opposite conclusions regarding its efficiency can be found. Some hail it as a way to correct mistaken intuitions while others stress its weaknesses compared to intuitive mechanisms. I suggested that the adoption of an evolutionary perspective could help solve these dilemmas by bringing two crucial clarifications. The first is a more principled way to carve up the mind, to isolate a specific cognitive mechanism. The second is a more principled way to ascribe a function to this cognitive mechanism. In the case at hand, Sperber (2000, 2001) has suggested that reasoning should be thought of as a specific metarepresentational mechanism. He also suggested that its function is argumentative: to find and evaluate reasons in dialogic situations. Based on this hypothesis, we examined findings from many branches of psychology to see if they could be better accounted for from this perspective. This chapter has reviewed the evidence accumulated so far, showing how the argumentative theory can explain a wealth of findings in reasoning, decision making, social psychology, and other areas of psychology.

Evolutionary hypotheses can naturally lead to reviewing evidence that extends beyond any given sub-field of psychology. Because evolutionary psychologists usually claim that the traits under study should be universal, they often pay attention to cross-cultural variation, or the lack thereof (e.g. Buss, 1989; Sugiyama, Tooby, & Cosmides, 2002). Because evolutionary psychologists must claim that some relevant traits do not emerge purely from development, they often pay attention to the amount of learning required to acquire a trait (e.g. D. F. Bjorklund & Pellegrini, 2002). Finally, because evolutionary psychologists make predictions about fitness-enhancing traits, they are also inclined to look beyond the laboratory, to 'real life' behaviors (e.g. Daly & M. Wilson, 1988; Thiessen, Young, & Burroughs, 1993). The central hypothesis of the argumentative theory of reasoning has also been checked against evidence from all these domains. Broad trends across disciplines that had previously gone mostly unnoticed can now be brought to the fore. It is to be hoped that the theory will facilitate further cross-disciplinary dialogue.

References

Bainbridge, S. M. (2002). Why a board? Group decisionmaking in corporate governance. Vanderbilt Law Review, 55, 1–55.
Balci, F., Freestone, D., & Gallistel, C. R. (2009). Risk assessment in man and mouse. Proceedings of the National Academy of Sciences, 106(7), 2459–2463.
Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71, 364–374.

Barabas, J. (2000). Uncertainty and ambivalence in deliberative opinion models: Citizens in the Americans Discuss Social Security Forum. Paper presented at the Annual Meeting of the Midwest Political Science Association.
Barkow, J. H., Cosmides, L., & Tooby, J. (1992). The Adapted Mind. Oxford: Oxford University Press.
Becker, C. B. (1986). Reasons for the lack of argumentation and debate in the Far East. International Journal of Intercultural Relations, 10(1), 75–92.
Billig, M. (1996). Arguing and Thinking: A Rhetorical Approach to Social Psychology. Cambridge: Cambridge University Press.
Bjorklund, D. F., & Pellegrini, A. D. (2002). The Origins of Human Nature: Evolutionary Developmental Psychology. Washington, DC: American Psychological Association.
Bloom, P. (2010). How do morals change? Nature, 464(7288), 490.
Bornstein, G., & Yaniv, I. (1998). Individual and group behavior in the ultimatum game: Are groups more "rational" players? Experimental Economics, 1(1), 101–108.
Braman, E. (2009). Law, Politics, and Perception: How Policy Preferences Influence Legal Reasoning. Charlottesville: University of Virginia Press.
Buchtel, E. E., & Norenzayan, A. (2009). Thinking across cultures: Implications for dual processes. In J. S. B. T. Evans & K. Frankish (Eds.), In Two Minds. New York: Oxford University Press.
Buss, D. M. (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences, 12, 1–49.
Byrne, R. W., & Whiten, A. (1988). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. New York: Oxford University Press.
Chance, Z., & Norton, M. I. (2008). I read Playboy for the articles. In M. S. McGlone & M. L. Knapp (Eds.), The Interplay of Truth and Deception: New Agendas in Theory and Research. Routledge.

Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290.
Cole, M., Gay, J., Glick, J. A., & Sharp, D. W. (1971). The Cultural Context of Learning and Thinking: An Exploration in Experimental Anthropology. New York: Basic Books.
Cowley, M., & Byrne, R. M. J. (2005). When falsification is the only path to truth. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 250–255). Mahwah, NJ: Erlbaum.
Daly, M., & Wilson, M. (1988). Homicide. New York: Aldine de Gruyter.
Dawkins, R., & Krebs, J. R. (1978). Animal signals: Information or manipulation? In J. R. Krebs & N. B. Davies (Eds.), Behavioural Ecology: An Evolutionary Approach (pp. 282–309). Oxford: Basil Blackwell Scientific Publications.
Dawson, E., Gilovich, T., & Regan, D. T. (2002). Motivated reasoning and performance on the Wason selection task. Personality and Social Psychology Bulletin, 28(10), 1379–1387.
Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66(5), 819–829.
Dessalles, J.-L. (2007). Why We Talk: The Evolutionary Origins of Language. Oxford: Oxford University Press.
Dias, M., Roazzi, A., & Harris, P. L. (2005). Reasoning from unfamiliar premises: A study with unschooled adults. Psychological Science, 16(7), 550–554.
Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality and Social Psychology, 87(5), 586–598.
Doise, W., & Mugny, G. (1984). The Social Development of the Intellect. Oxford: Pergamon Press.
Dubreuil, B. (2010). Paleolithic public goods games: Why human culture and cooperation did not evolve in one step. Biology and Philosophy, 25(1), 53–73.

Dunbar, K. (1995). How scientists really reason: Scientific reasoning in real-world laboratories. In R. J. Sternberg & J. E. Davidson (Eds.), The Nature of Insight (pp. 365–395). Cambridge, MA: MIT Press.
Dunbar, R. I. M. (1996). The social brain hypothesis. Evolutionary Anthropology, 6, 178–190.
Dunn, J., & Munn, P. (1987). Development of justification in disputes with mother and sibling. Developmental Psychology, 23, 791–798.
Elster, J. (1998). Deliberative Democracy. Cambridge: Cambridge University Press.
Evans, J. S. B. T. (1996). Deciding before you think: Relevance and reasoning in the selection task. British Journal of Psychology, 87, 223–240.
Evans, J. S. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7(10), 454–459.
Evans, J. S. B. T. (2006). The heuristic-analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin and Review, 13(3), 378–395.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B. T. (2011). Reasoning is for thinking, not just for arguing. Behavioral and Brain Sciences, 34(2), 77–78.
Evans, J. S. B. T., & Over, D. E. (1996). Rationality and Reasoning. Hove: Psychology Press.
Fishkin, J. S., & Luskin, R. C. (2005). Experimenting with a democratic ideal: Deliberative polling and public opinion. Acta Politica, 40(3), 284–298.
Fodor, J. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Frankish, K. (2011). Reasoning, argumentation, and cognition. Behavioral and Brain Sciences, 34(2), 79–80.
Gastil, J., & Dillard, J. P. (1999). Increasing political sophistication through public deliberation. Political Communication, 16(1), 3–23.
Gibbard, A. (1990). Wise Choices, Apt Feelings. Cambridge: Cambridge University Press.
Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54(5), 733–740.

Gilovich, T., Griffin, D., & Kahneman, D. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press.
Godfrey-Smith, P., & Yegnashankaran, K. (2011). Reasoning as deliberative in function but dialogic in structure and origin. Behavioral and Brain Sciences, 34(2), 80.
Guenther, C. L., & Alicke, M. D. (2008). Self-enhancement and belief perseverance. Journal of Experimental Social Psychology, 44(3), 706–712.
Habermas, J. (1987). A Theory of Communicative Action. Boston: Beacon Press.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J., & Bjorklund, F. (2007). Social intuitionists reason, in conversation. In W. Sinnott-Armstrong (Ed.), Moral Psychology (pp. 241–254). Cambridge, MA: MIT Press.
Hansen, C. (1992). A Daoist Theory of Chinese Thought. New York: Oxford University Press.
Harbsmeier, C. (1998). Language and Logic. Science and Civilisation in China. Cambridge: Cambridge University Press.
Harrell, M. (2011). Understanding, evaluating, and producing arguments: Training is necessary for reasoning skills. Behavioral and Brain Sciences, 34(2), 80–81.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Hill, G. W. (1982). Group versus individual performance: Are N + 1 heads better than one? Psychological Bulletin, 91, 517–539.
Hrdy, S. B. (2009). Mothers and Others. Cambridge, MA: Belknap Press.
Huber, J., Payne, J. W., & Puto, C. (1982). Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. The Journal of Consumer Research, 9(1), 90–98.
Humphrey, N. K. (1976). The social function of intellect. In P. P. G. Bateson & R. A. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge: Cambridge University Press.

Isenberg, D. J. (1986). Group polarization: A critical review and meta-analysis. Journal of Personality and Social Psychology, 50(6), 1141–1151.
Janis, I. L. (1982). Groupthink (2nd ed.). Boston: Houghton Mifflin.
Johnson-Laird, P. N. (2006). How We Reason. Oxford: Oxford University Press.
Johnson-Laird, P. N., & Byrne, R. M. J. (2002). Conditionals: A theory of meaning, pragmatics, and inference. Psychological Review, 109, 646–678.
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720.
Kalven, H., & Zeisel, H. (1966). The American Jury. Chicago: University of Chicago Press.
Klaczynski, P. A. (2000). Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two-process approach to adolescent cognition. Child Development, 71, 1347–1366.
Klaczynski, P. A. (submitted). When (and when not) to make exceptions: Links among age, precedent setting decisions, conditional inferences, and argument evaluation.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211–228.
Kohlberg, L. (1987). The Psychology of Moral Development. San Francisco: Harper & Row.
Van Koppen, P. J., & Penrod, S. (2003). Adversarial versus Inquisitorial Justice: Psychological Perspectives on Criminal Justice Systems. New York: Springer.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.
Krebs, J. R., & Dawkins, R. (1984). Animal signals: Mind-reading and manipulation? In J. R. Krebs & N. B. Davies (Eds.), Behavioural Ecology: An Evolutionary Approach (2nd ed., pp. 390–402). Oxford: Basil Blackwell Scientific Publications.
Kruglanski, A. W., & Freund, T. (1983). The freezing and unfreezing of lay-inferences: Effects on impressional primacy, ethnic stereotyping, and numerical anchoring. Journal of Experimental Social Psychology, 19(5), 448–468.
Kuczynski, L., Kochanska, G., Radke-Yarrow, M., & Girnius-Brown, O. (1987). A developmental interpretation of young children's noncompliance. Developmental Psychology, 23(6), 799–806.
Kuhn, D. (1991). The Skills of Argument. Cambridge: Cambridge University Press.
Kuhn, D. (2005). Education for Thinking. Cambridge, MA: Harvard University Press.
Kuhn, D. (2009). Adolescent thinking. In R. M. Lerner & L. Steinberg (Eds.), Handbook of Adolescent Psychology (3rd ed., Vol. 1, pp. 152–186). Hoboken, NJ: Wiley.
Kuhn, D. (2011). What people may do versus can do. Behavioral and Brain Sciences, 34(2), 83.
Kuhn, D., & Crowell, A. (2011). Dialogic argumentation as a vehicle for developing young adolescents' thinking. Psychological Science, 22(4), 545–552.
Kuhn, D., Shaw, V. F., & Felton, M. (1997). Effects of dyadic interaction on argumentative reasoning. Cognition and Instruction, 15, 287–315.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498.
Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on mathematical intellective tasks. Journal of Experimental Social Psychology, 22, 177–189.
Laughlin, P. R., Bonner, B. L., & Miner, A. G. (2002). Groups perform better than the best individuals on letters-to-numbers problems. Organizational Behavior and Human Decision Processes, 88, 605–620.
Laughlin, P. R., VanderStoep, S. W., & Hollingshead, A. B. (1991). Collective versus individual induction: Recognition of truth, rejection of error, and collective information processing. Journal of Personality and Social Psychology, 61, 50–67.
Lerner, J. S., & Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological Bulletin, 125, 255–275.

Lloyd, G. E. R. (2007). Towards a taxonomy of controversies and controversiality: Ancient Greece and China. In M. Dascal & H. Chang (Eds.), Traditions of Controversy (pp. 3–16). Amsterdam: John Benjamins.
Lombardelli, C., Proudman, J., & Talbot, J. (2005). Committees versus individuals: An experimental analysis of monetary policy decision-making. International Journal of Central Banking, 1(1), 181–205.
Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47, 1231–1243.
Luria, A. R. (1934). The second psychological expedition to Central Asia. Journal of Genetic Psychology, 41, 255–259.
Luria, A. R. (1976). Cognitive Development: Its Cultural and Social Foundations. Cambridge, MA: Harvard University Press.
Luskin, R. C., Fishkin, J. S., & Jowell, R. (2002). Considered opinions: Deliberative polling in Britain. British Journal of Political Science, 32(3), 455–487.
Mansbridge, J. (1999). Everyday talk in the deliberative system. In S. Macedo (Ed.), Deliberative Politics: Essays on Democracy and Disagreement (pp. 211–242). New York: Oxford University Press.
Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: Freeman.
Maynard Smith, J., & Harper, D. (2003). Animal Signals. Oxford: Oxford University Press.
Mercier, H. (2011). On the universality of argumentative reasoning. Journal of Cognition and Culture, 11, 85–113.
Mercier, H. (in press-a). Reasoning serves argumentation in children. Cognitive Development.
Mercier, H. (in press-b). What good is moral reasoning? Mind & Society.
Mercier, H. (in press-c). When experts argue: Explaining the best and the worst of reasoning. Argumentation.
Mercier, H. (submitted). Looking for arguments.

Mercier, H., & Landemore, H. (in press). Reasoning is for arguing: Understanding the successes and failures of deliberation. Political Psychology.
Mercier, H., & Sperber, D. (2009). Intuitive and reflective inferences. In J. S. B. T. Evans & K. Frankish (Eds.), In Two Minds. New York: Oxford University Press.
Mercier, H., & Sperber, D. (2011a). Argumentation: Its adaptiveness and efficacy. Behavioral and Brain Sciences, 34(2), 94–111.
Mercier, H., & Sperber, D. (2011b). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Michaelsen, L. K., Watson, W. E., & Black, R. H. (1989). A realistic test of individual versus group consensus decision making. Journal of Applied Psychology, 74(5), 834–839.
Millikan, R. G. (1987). Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Morrison, J. (1972). The absence of a rhetorical tradition in Japanese culture. Western Speech, 36, 89–102.
Morsanyi, K., & Handley, S. J. (2008). How smart do you need to be to get it wrong? The role of cognitive capacity in the development of heuristic-based judgment. Journal of Experimental Child Psychology, 99(1), 18–36.
Moshman, D. (2011). Evolution and development of reasoning and argumentation: Comment on Mercier (2011). Cognitive Development.
Moshman, D., & Geil, M. (1998). Collaborative reasoning: Evidence for collective rationality. Thinking and Reasoning, 4(3), 231–248.
Nakamura, H. (1964). Ways of Thinking of Eastern Peoples: India, China, Tibet, Japan. Honolulu: University of Hawaii Press.
Narvaez, D. (2011). The world looks small when you only look through a telescope: The need for a broad and developmental study of reasoning. Behavioral and Brain Sciences, 34(2), 83–84.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220.

Nisbett, R. E., Peng, K., Choi, I., & Norenzayan, A. (2001). Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108(2), 291–310.
Norenzayan, A., & Heine, S. J. (2005). Psychological universals: What are they and how can we know? Psychological Bulletin, 131(5), 763–784.
Norenzayan, A., Smith, E. E., Kim, B. J., & Nisbett, R. E. (2002). Cultural preferences for formal versus intuitive reasoning. Cognitive Science, 26(5), 653–684.
Oaksford, M., & Chater, N. (2001). The probabilistic approach to human reasoning. Trends in Cognitive Sciences, 5(8), 349–357.
Oaksford, M., Chater, N., & Grainger, R. (1999). Probabilistic effects in data selection. Thinking and Reasoning, 5, 193–243.
Opfer, J. E., & Sloutsky, V. (2011). On the design and function of rational arguments. Behavioral and Brain Sciences, 34(2), 85–86.
Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2(3), 511–527.
Paxton, J. M., Ungar, L., & Greene, J. D. (in press). Reflection and reasoning in moral judgment. Cognitive Science.
Perkins, D. N. (1985). Postprimary education has little impact on informal reasoning. Journal of Educational Psychology, 77, 562–571.
Perret-Clermont, A.-N. (1980). Social Interaction and Cognitive Development in Children. London: Academic Press.
Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), The Handbook of Social Psychology (pp. 323–390). Boston: McGraw-Hill.
Piaget, J. (1997). The Moral Judgment of the Child. New York: Free Press.
Poletiek, F. H. (1996). Paradoxes of falsification. Quarterly Journal of Experimental Psychology, 49A, 447–462.
Resnick, L. B., Salmon, M., Zeitz, C. M., Wathen, S. H., & Holowchak, M. (1993). Reasoning in conversation. Cognition and Instruction, 11(3–4), 347–364.
Reyna, V. F., & Ellis, S. C. (1994). Fuzzy-trace theory and framing effects in children's risky decision making. Psychological Science, 5(5), 275–279.

Rips, L. J. (1994). The Psychology of Proof: Deductive Reasoning in Human Thinking. Cambridge, MA: MIT Press.
Roberts, M. J., & Newton, E. J. (2001). Inspection times, the change task, and the rapid response selection task. Quarterly Journal of Experimental Psychology, 54, 1031–1048.
Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32(5), 880–892.
Sacco, K., & Bucciarelli, M. (2008). The role of cognitive and socio-cognitive conflict in learning to reason. Mind & Society, 7(1), 1–19.
Sandys, M., & Dillehay, C. (1995). First-ballot votes, predeliberation dispositions, and final verdicts in jury trials. Law and Human Behavior, 19(2), 175–195.
Schkade, D., Sunstein, C. R., & Kahneman, D. (2000). Deliberating about dollars: The severity shift. Columbia Law Review, 100, 1139–1176.
Shapin, S. (1991). "The mind is its own place": Science and solitude in seventeenth-century England. Science in Context, 4(1), 191–218.
Simonson, I. (1989). Choice based on reasons: The case of attraction and compromise effects. The Journal of Consumer Research, 16(2), 158–174.
Slavin, R. E. (1996). Research on cooperative learning and achievement: What we know, what we need to know. Contemporary Educational Psychology, 21(1), 43–69.
Sloan, F. A., & Hsieh, C. R. (1990). Variability in medical malpractice payments: Is the compensation fair? Law and Society Review, 24(4), 997–1039.
Sniezek, J. A., & Henry, R. A. (1989). Accuracy and confidence in group judgment. Organizational Behavior and Human Decision Processes, 43(1), 1–28.
Spellman, B. A. (1993). Implicit learning of base rates. Psycoloquy, 4, 61.
Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the Mind: Domain Specificity in Cognition and Culture (pp. 39–67). Cambridge: Cambridge University Press.

Sperber, D. (2000). Metarepresentations in an evolutionary perspective. In D. Sperber (Ed.), Metarepresentations: A Multidisciplinary Perspective (pp. 117–137). Oxford: Oxford University Press.
Sperber, D. (2001). An evolutionary perspective on testimony and argumentation. Philosophical Topics, 29, 401–413.
Sperber, D., Cara, F., & Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31–95.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.
Stanovich, K. E. (2004). The Robot's Rebellion. Chicago: University of Chicago Press.
Stanovich, K. E., & West, R. F. (1999). Discrepancies between normative and descriptive models of decision making and the understanding/acceptance principle. Cognitive Psychology, 38(3), 349–385.
Stein, N. L., & Albro, E. R. (2001). The origins and nature of arguments: Studies in conflict understanding, emotion, and negotiation. Discourse Processes, 32(2–3), 113–133.
Sterelny, K. (in press). The Evolved Apprentice. Cambridge, MA: MIT Press.
Sugiyama, L. S., Tooby, J., & Cosmides, L. (2002). Cross-cultural evidence of cognitive adaptations for social exchange among the Shiwiar of Ecuadorian Amazonia. Proceedings of the National Academy of Sciences, 99(17), 11537–11542.
Sunstein, C. R. (2002). The law of group polarization. Journal of Political Philosophy, 10(2), 175–195.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.
Tesser, A. (1978). Self-generated attitude change. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (pp. 289–338). New York: Academic Press.
Tesser, A., & Leone, C. (1977). Cognitive schemas and thought as determinants of attitude change. Journal of Experimental Social Psychology, 13(4), 340–356.
Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton: Princeton University Press.

Thiessen, D., Young, R. K., & Burroughs, R. (1993). Lonely hearts advertisements reflect sexually dimorphic mating strategies. Ethology and Sociobiology, 14(3), 209–229.
Thompson, D. V., & Norton, M. I. (2008). The social utility of feature creep. In A. Lee & D. Soman (Eds.), Advances in Consumer Research (pp. 181–184). Duluth, MN: Association for Consumer Research.
Thompson, D. V., Hamilton, R. W., & Rust, R. T. (2005). Feature fatigue: When product capabilities become too much of a good thing. Journal of Marketing Research, 42(4), 431–442.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–691.
Trognon, A. (1993). How does the process of interaction work when two interlocutors try to resolve a logical problem? Cognition and Instruction, 11(3–4), 325–345.
Trommershauser, J., Maloney, L. T., & Landy, M. S. (2008). Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12(8), 291–297.
Uhlmann, E. L., Pizarro, D. A., Tannenbaum, D., & Ditto, P. H. (2009). The motivated use of moral principles. Judgment and Decision Making, 4(6), 476–491.
Valdesolo, P., & DeSteno, D. (2008). The duality of virtue: Deconstructing the moral hypocrite. Journal of Experimental Social Psychology.
Vinokur, A., & Burnstein, E. (1974). Effects of partially shared persuasive arguments on group-induced shifts: A group-problem-solving approach. Journal of Personality and Social Psychology, 29(3), 305–315.
Vinokur, A., & Burnstein, E. (1978). Depolarization of attitudes in groups. Journal of Personality and Social Psychology, 36(8), 872–885.
Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will. Psychological Science, 19(1), 49–54.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129–137.

Wason, P. C. (1966). Reasoning. In B. M. Foss (Ed.), New Horizons in Psychology: I (pp. 106–137). Harmondsworth, England: Penguin.
Whiten, A., & Byrne, R. W. (1997). Machiavellian Intelligence II: Extensions and Evaluations. Cambridge: Cambridge University Press.
Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology, 60(2), 181–192.
Wilson, T. D., Lisle, D. J., Schooler, J. W., Hodges, S. D., Klaaren, K. J., & LaFleur, S. J. (1993). Introspecting about reasons can reduce post-choice satisfaction. Personality and Social Psychology Bulletin, 19(3), 331–339.
Wolfe, C. R. (2011). Some empirical qualifications to the arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 92–93.
