Citation: Mandel, D. R. (2008, June). Judgment under uncertainty. Intelligence Analyst Training Newsletter, 7–9.
 
Judgment under Uncertainty

David R. Mandel*
Intelligence analysis may often be described as an exercise in making judgments under conditions of uncertainty. It involves many of the same aspects of judgment, broadly construed, that have been studied for decades by experimental psychologists: hypothesis testing, causal inference and explanation, prediction, assessments of risk and probability, and so on.

 
What has the behavioral science of human judgment revealed? First and foremost, it has shown that humans systematically violate logical principles to which most theorists agree judgments ought to conform if they are to be regarded as coherent. These systematic violations are called biases, and they are usually attributed to the informal strategies or rules of thinking that people follow, which theorists call heuristics. The "heuristics and biases" approach, pioneered by Amos Tversky and Daniel Kahneman, has revealed many fundamental insights about human judgment. One of the best ways to learn about these insights is to test oneself on a sample of the same types of problems that psychologists have devised to gain insight into the nature of human judgment. Consider the following example adapted from Kahneman and Tversky (1972):

 
Country X has two munitions factories. In the larger factory about 45 missiles are produced each week, and in the smaller factory about 15 missiles are produced each week. In any given year, about 50% of the missiles produced at each factory are "A-type" missiles and 50% are "B-type" missiles. The exact percentage of each type, however, varies from week to week. Sometimes the percentage of A-type missiles, which happen to pose a greater threat, is more than 50% and sometimes it is less than 50%. For a period of 1 year, a foreign spy agency recorded the number of weeks in which more than 60% of the missiles produced in each factory were of the A-type. Which factory is likely to have had more such weeks?

a. The larger factory
b. The smaller factory
c. About the same

 


If you are like the majority of Kahneman and Tversky's participants who responded to a similar problem (different cover story but the same features), then you would have answered "about the same" when, in fact, the correct answer is the smaller factory. Why? Because the weekly output of the smaller factory represents a smaller and, hence, less reliable sample of the yearly output than that of the larger factory, it is likelier to register more weeks in a given year in which the percentage of A-type missiles is greater than 60%. That is, it is more likely to deviate from the expected value of 50% because the variance of its sampling distribution is larger than that of the larger factory. Most people tend to neglect the sample sizes of the two factories. Instead, they judge on the basis of how representative each factory is of the country's overall missile production. Because both produce about 50% A-type missiles in a year, they appear equally likely to deviate from that distribution.
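The sample-size logic above can be checked with a quick simulation. This is a minimal illustrative sketch, not part of the original article: the weekly outputs (45 and 15) and the 50/50 type split come from the problem statement, while the trial count and the helper name `prob_week_over_60` are my own choices.

```python
import random

def prob_week_over_60(n_missiles, p=0.5, trials=20_000):
    """Estimate the probability that, in a given week, more than 60%
    of a factory's n_missiles are A-type, assuming each missile is
    independently A-type with probability p."""
    hits = 0
    for _ in range(trials):
        a_type = sum(random.random() < p for _ in range(n_missiles))
        if a_type / n_missiles > 0.60:
            hits += 1
    return hits / trials

random.seed(1)
p_large = prob_week_over_60(45)  # larger factory, ~45 missiles/week
p_small = prob_week_over_60(15)  # smaller factory, ~15 missiles/week

# The smaller sample deviates from 50% more often (~0.15 vs ~0.07),
# so over 52 weeks it records roughly twice as many extreme weeks.
print(p_small > p_large)  # True
```

With these parameters the smaller factory exceeds 60% A-type in roughly 15% of weeks versus about 7% for the larger one, or about eight weeks versus three to four in a year.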

 
Use of this "representativeness heuristic" for assessing the probabilities of events or event sequences can also help explain why people misperceive the relative likelihood of short-run chance sequences. For instance, assuming that the probability of male (M) and female (F) births is roughly equal (p ≈ .5 in each case) and that the determination of each birth is independent of other births, people tend to find the sequence of births FMFMMF to be more likely than the sequence MFMMMM, despite their roughly equal probability (Kahneman & Tversky, 1972). Indeed, the sequence FMMFMF is viewed as more likely than the sequence MMMFFF, despite the equal number of male and female births in each. These misperceptions of chance appear to be due to the widespread belief that local sequences will represent the long-run sequences from which they are drawn. Thus, sequences that match the proportion of males to females in the population are seen as more probable. Likewise, those that seem to alternate in an apparently random manner are also viewed as more likely. In fact, however, people overestimate the number of alternations in random sequences because they mistakenly believe that chance is a self-correcting process. This type of erroneous thinking is believed to underlie the gambler's fallacy: the gambler's tendency to believe that after a losing streak a win is "due." As most learn the hard way, this isn't so.
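The claim that all specific orderings are equally likely can be verified by brute enumeration. A minimal sketch (the four sequence strings come from the text; everything else is illustrative):

```python
from itertools import product

# Enumerate all 2**6 possible six-birth sequences. With p = .5 per birth
# and independent births, each specific ordering is equally likely.
sequences = {''.join(s) for s in product('MF', repeat=6)}
p_each = 1 / len(sequences)

for s in ('FMFMMF', 'MFMMMM', 'FMMFMF', 'MMMFFF'):
    assert s in sequences
    print(f'P({s}) = {p_each:.4f}')  # 0.0156 for every ordering
```

No ordering is special: patterns that "look random" and patterns that look streaky each have probability 1/64.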

 
The tendency to judge probability on the basis of representativeness can also bias probability assessments by diverting attention from relevant statistical information, even when other, qualitative sources of information are entirely non-diagnostic. Consider the following thumbnail description drawn at random from a sample of 70 engineers and 30 lawyers:

 
Dick is a 30-year-old man. He is married with no children. A man of high ability and motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

 
When asked how likely it is that Dick is an engineer (or, in another experimental condition, a lawyer), the average response was a 50% chance, regardless of whether the base rate of engineers was 30% or 70% (Kahneman & Tversky, 1973). However, when the non-diagnostic description was omitted, average responses correctly approximated the base rates. That is, participants estimated that there was about a 70% chance that Dick was an engineer (or a 30% chance that he was a lawyer). Evidently, the irrelevant information presented in the thumbnail sketch led people to disregard the relevant base rates and, instead, assume that the chances were fifty-fifty. Individuating information, such as descriptively rich personality profiles, tends to be much more salient than numbers such as base rates or prior probabilities. It can, therefore, compete with less salient but relevant pieces of information for an analyst's attention. As well, it may be easier to formulate causal explanations using individuating information, even when it is non-diagnostic, than using base rates or other quantitative estimates. Given that people prefer to give and receive explanations that have a causal narrative structure, people (including analysts) may be prone to giving more weight than deserved to information that lends itself easily to a causal narrative.
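Bayes' rule makes the normative answer explicit: when a description fits engineers and lawyers equally well, the likelihood ratio is 1 and the posterior should simply equal the base rate. A minimal sketch (the function name and the 0.5 likelihoods are illustrative assumptions, not figures from the study):

```python
def posterior_engineer(base_rate, p_desc_given_eng, p_desc_given_law):
    """P(engineer | description) via Bayes' rule, for a two-category
    population of engineers and lawyers."""
    num = p_desc_given_eng * base_rate
    den = num + p_desc_given_law * (1 - base_rate)
    return num / den

# A non-diagnostic sketch is equally probable under either profession,
# so the posterior should track the base rate, not sit at fifty-fifty.
print(round(posterior_engineer(0.70, 0.5, 0.5), 3))  # 0.7 (70-engineer sample)
print(round(posterior_engineer(0.30, 0.5, 0.5), 3))  # 0.3 (30-engineer sample)
```

Participants' "50% regardless" answers correspond to ignoring `base_rate` entirely, which is normatively justified only when the description actually discriminates between the two groups.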


 
As a final example, consider Kahneman and Tversky's (1982) "Linda problem." Participants were asked to read the following sketch of a woman named Linda:

 
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

[Note that this study was conducted about three decades ago, when references to anti-nuclear demonstrations would have been normal.]
 
In one variant of the task, participants were asked to indicate which of two statements was more probable:

a. Linda is a bank teller.
b. Linda is a bank teller and active in the feminist movement.

 
About 85% of statistically naive participants and about 50% of participants trained in statistics incorrectly chose the second option as the more likely of the two, presumably because the personality sketch was more representative of a feminist who happened to be a bank teller than of a woman who was merely a bank teller. In fact, however, since the set of bank tellers includes the subset of feminist bank tellers, the conjunction selected by the majority of participants must be equally or less probable than either of its constituents. This requirement of extensional logic is known as the conjunction rule:

P(A and B) ≤ P(B).
 
The tendency for people to violate it when individuating information leads them to think that the conjunction (namely, bank teller AND feminist) is more probable than one of its constituents (namely, bank teller) is known as the conjunction fallacy. The tendency of statistically trained participants to commit the conjunction fallacy was even greater when the relevant statements were embedded within a list of eight statements and each had to be assessed in terms of its probability. In that variant of the task, over 80% of both trained and untrained participants committed the conjunction fallacy, assigning higher probabilities to the feminist-bank-teller conjunction than to the bank-teller description alone. Apparently, individuating information can not only lead people to disregard less salient but diagnostic information such as base rates; it can also lead people to disregard logical requirements of reasoning, at the expense of coherent judgment.
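The conjunction rule itself is easy to demonstrate: in any population, the people satisfying A-and-B are a subset of those satisfying B, so the conjunction's relative frequency can never exceed that of its constituent. A minimal sketch with made-up trait probabilities (the 0.05 and 0.30 figures are illustrative, not from the article):

```python
import random

random.seed(0)

# A toy population in which each person may independently be a bank
# teller and/or a feminist, with illustrative trait probabilities.
population = [
    (random.random() < 0.05, random.random() < 0.30)  # (teller, feminist)
    for _ in range(100_000)
]

n = len(population)
p_teller = sum(t for t, f in population) / n
p_both = sum(t and f for t, f in population) / n

# Feminist bank tellers are a subset of bank tellers, so
# P(teller AND feminist) <= P(teller) holds whatever the frequencies.
print(p_both <= p_teller)  # True
```

The inequality is a counting fact, not an empirical one: no choice of trait probabilities can make the conjunction more frequent than its constituent, which is exactly why option (b) in the Linda problem cannot be the more probable statement.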

 
I have highlighted here just a few of the ways in which judging on the basis of representativeness can bias people's judgments. The representativeness heuristic, moreover, is just one of many heuristics that people use to make judgments, each of which can lead to a variety of systematic errors in human judgment.

 
If you would like to learn more about the science of judgment under conditions of uncertainty, then consider registering for a one-day seminar on the topic that will be held in Ottawa on June 23. For more information, please contact....

 
References

Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, 430–454.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

Kahneman, D., & Tversky, A. (1982). On the study of statistical intuitions. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 493–508). Cambridge: Cambridge University Press.

 
 


* David R. Mandel is an experimental psychologist and Group Leader of the Thinking, Risk, and Intelligence Group at DRDC Toronto. He may be reached at David.Mandel@drdc-rddc.gc.ca.

