Memory in Inference: Some Groundwork

Tom Stoneham, University of York

Presented at the European Society for Philosophy and Psychology Conference, Lund University, August 2005

Every second-year undergraduate reading Psychology knows about the distinction between working memory and long-term memory. In his lecture notes, Graham Hitch defines the former thus:

    Working memory refers to our limited capacity to hold and manipulate temporary information.

Hitch's collaborator on the phonological loop model of working memory, Alan Baddeley, has given a rather more detailed definition:

    The authors' own definition of working memory is that it comprises those functional components of cognition that allow humans to comprehend and mentally represent their immediate environment, to retain information about their immediate past experience, to support the acquisition of new knowledge, to solve problems, and to formulate, relate, and act on current goals. (Baddeley and Logie, 1999, 28)

More advanced students will know not only that there is some dispute about the correct model for working memory, but also that some short-term aspects of memory, such as digit span, can 'come apart from' executive working memory,[1] and that there seem to be cases where individuals with exceptionally long digit spans are using long-term memory mechanisms for short-term/working memory tasks (e.g. Ericsson & Kintsch, 1995). What is undeniable is that working memory is a distinct and theoretically fruitful research topic.

[1] I.e. there is selective impairment.

In contrast, philosophers interested in memory pay scant attention to this distinction. They are interested in an apparent distinction between factual (a.k.a. propositional or semantic) memory and experiential (a.k.a. episodic or autobiographical) memory, and they recognize the role of memory, typically factual memory, in the inferential tasks mentioned by Baddeley, but they do not think that this function of memory raises any distinctive or fruitful questions. In fact, some go so far as to generalize from features of memory in inferential tasks to the nature of long-term propositional memory (e.g. Burge, 1993, 463; 1997, 37; Dummett, 1993, 420; Owens, 1999, 318).

My objective in this short paper is to give some philosophical reasons, specifically reasons based in normative epistemology, for drawing a distinction between two functions of memory, a distinction which looks very similar to that between working and long-term memory. How closely related the two distinctions actually are is a matter for further work.

1. Epistemology of Memory

We rely on our memories all the time. Even when we are being cautious and critical, doubting the veracity of some particular apparent memory, we are still relying on our memories. For example, if after reading the work of Elizabeth Loftus I become suspicious of several of my cherished childhood memories, I am still relying on my memory of the books I read, and in reading and understanding those books I am relying on my memory of what various words mean, my memory of the social institutions and practices which rely upon eyewitnesses, and so on. To refuse consistently to rely upon one's memory would produce instant intellectual paralysis. But our memories are also fallible. So the epistemologist asks whether, and how, we are justified in relying on our memories. And it is very easy to see how asking this question generates a difficult problem: if our justification for relying upon our memories depends in part upon our having evidence that memory is, either in general or in the type of case we are considering, generally accurate, then a vicious regress is inevitable. For how could we acquire that evidence without at some point relying upon our memory?

Over the past fifteen years several philosophers, especially Dummett, Burge and Owens, have come up with an ingenious line of response to this problem (there are differences between these authors, but I hope to capture a common core): when I remember something, the fact that I am now having an apparent memory is no part of the justification for my now believing it. Suppose I remember that the date of the battle of Fulford was 20th September 1066, though I cannot remember how I learnt it. According to this theory, which we can call 'the preservative theory', my justification for currently believing that fact just is whatever justification I had when I originally formed the belief. What memory does is to preserve the justifiedness of the belief, even when the subject does not remember the original justification.[2] Memory is a cognitive mechanism which, when it works well, preserves justified beliefs. When I rely on memory I am not making a tacit assumption, in need of an impossible justification, that what memory serves up now is an accurate reflection of my past beliefs and experiences. Rather, when I rely on memory I am using a cognitive mechanism whose function is to allow me to exploit now what I previously learnt without having to go through the learning process all over again. Given that this is its function, I have to be sensitive to the possibility of malfunction, but I do not have to justify using it in the first place.

[2] In general, epistemologists should distinguish between the property of being justified and the beliefs or experiences which comprise the justification in virtue of which a belief has this property. If memory is to be useful, it must be possible to have beliefs which are justified though their justifications do not persist.

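As a purely illustrative aside, and at the risk of caricature, the preservative picture can be sketched in a few lines of code. Nothing in the argument depends on the sketch, and the names in it (Belief, learn, recall, and so on) are my own stipulations rather than anything drawn from the authors discussed; the point is only that what recall hands back carries the justification status fixed at acquisition, while the evidence itself need not have been retained.

```python
from dataclasses import dataclass
from typing import Optional

# A minimal sketch of the preservative theory, assuming a crude model in which
# justifiedness is a flag settled at acquisition. Purely illustrative.

@dataclass
class Belief:
    content: str
    justified: bool          # settled when the belief was formed
    evidence: Optional[str]  # may later be forgotten without loss of justifiedness

def learn(content: str, evidence: str, evidence_is_adequate: bool) -> Belief:
    # On the preservative view, justifiedness is determined here, at acquisition.
    return Belief(content, justified=evidence_is_adequate, evidence=evidence)

def forget_evidence(belief: Belief) -> Belief:
    # Storage often sheds the original justification without touching the belief.
    return Belief(belief.content, belief.justified, evidence=None)

def recall(belief: Belief) -> Belief:
    # A well-functioning memory simply hands the belief back: it supplies no new
    # justification and, crucially, demands none.
    return belief

# The date of the battle of Fulford, learnt as a child from a tourist brochure.
b = recall(forget_evidence(learn("The battle of Fulford was fought on 20 September 1066",
                                 evidence="tourist brochure",
                                 evidence_is_adequate=True)))
print(b.justified, b.evidence)  # True None: justifiedness preserved, justification gone
```

The problems of the next section are, in effect, questions about whether simply preserving that status is always the right verdict.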
2. Three ways of going wrong

The preservative theory of memory is designed to give an account of how we can be justified in relying on our memories despite the undeniable fallibility of memory. It gives our memories a prima facie authority, in the sense that memory beliefs are justified in the absence of reason to doubt them, or to doubt the reliability of our memories on this point. But there is more than one way that our memories can lead us astray.

(1) The obvious case, and the one with which epistemologists are most concerned, is when our memories distort or misrepresent our earlier beliefs and experiences. This can happen in a variety of ways at any of the three stages of encoding, storage and retrieval, and the seminal work of Elizabeth Loftus shows that it happens a great deal more often than we naïvely predict. Now there are some interesting and subtle issues about what counts as accuracy and inaccuracy, truth and falsity, in memory, which need to be handled with care if we are to avoid judging any deviation from the 'hall of records' model of memory to be a malfunction. Even if all three stages of encoding, storage and retrieval involve interpretation and are influenced by other cognitive processes in the subject, it is still possible to draw a line between acceptable and deviant operations.[3]

[3] For some very interesting ideas about how to characterize correctness in memory, see Russell Downham (Macquarie), 'The Living Past Remembered' (MS).

(2) Even when the process of memory introduces no distortion or inaccuracy, relying on our memories may still result in unjustified belief, for the belief which was the input to the process, the one which has been preserved for us, may have been unjustified to start with. For example, we can imagine someone of a hawkish disposition who wants to go to war in Iraq coming to believe, on the basis of palpably flimsy evidence, that Saddam has weapons of mass destruction. Later he forgets the evidence he considered, but his memory preserves the belief that Saddam has WMDs. He has relied upon his memory but ends up with an unjustified belief. Yet his memory has not malfunctioned in any way. Call this the 'garbage-in-garbage-out' problem.

This initially appears to present a problem for the preservative view because, if memory has a prima facie authority, then in the absence of any evidence to the contrary the subject is justified in his or her memory-belief. But clearly our imaginary hawk is not justified in later believing that Saddam has WMDs just because he remembers his earlier (and unjustified) belief. However, there is a way for the preservative theorist to deal with this problem, for he can distinguish two dimensions in the normative assessment of the later belief (Owens 1999, 322): the belief is unjustified, but, in the absence of any reasons to think he was earlier mistaken, the believer is rational in continuing to believe it.

(3) There is a more subtle version of the garbage-in-garbage-out problem. Consider again my recollection of the date of the battle of Fulford. Suppose I had learnt this as a child while doing a school project, by finding it in a tourist brochure about the area. This seems a fairly good way for a child to acquire a justified belief about such a thing. But as an overeducated adult I am aware that the people who write tourist brochures are not normally scholars or historians, that their concern for accuracy is focussed upon opening times rather than historical details, and further that the date in question is not one found in popular or school-level books about the period. Which all amounts to saying that were I now to come across the date in a tourist brochure, my justification for believing it would be much more tentative. In fact, I may not even believe it on that evidence alone. My epistemic standards have changed, and thus whether certain evidence justifies a belief has changed. But what of my recollection of the fact? If I just remember the date and not how I learnt it, is my memory belief justified? It seems that the preservative theory would have to say that it is (and that I am reasonable to continue believing it), but that seems wrong, for intellectual development of the kind described should lead us to reassess our earlier beliefs. Call this the 'intellectual maturity' problem.[4]

[4] Notice that even if the storage aspect of a well-functioning memory is active, updating our memories in the light of our changing beliefs, this will not deal with the problem of intellectual maturity. At least, it will not deal with it on the assumption that we often remember what we have learnt considerably longer than we remember the event of learning it. If our subject's source for the belief about the battle of Fulford is not stored 'alongside' that belief, then her intellectual maturation will not have any impact on that memory, however dynamic the process of storage is.

My favoured response to the intellectual maturity problem is to qualify the preservative view and hold that a memory belief is only justified if one has good reason to think that the original belief was justified by one's current standards. Thus, in the example just given, if I had studied medieval history in high school or at university, I might now reasonably think that my beliefs about the dates of minor battles would have been justified (by current standards) when acquired.[5] But it is not my concern to defend that now, merely to point out the phenomenon.

[5] I defend this view in 'Memory and Knowledge' (MS).

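To make the contrast vivid, here is a deliberately crude worked example in code. It is only an illustration under my own stipulations: epistemic standards are modelled as nothing more than a set of sources counted as adequate, and the labels are hypothetical rather than anything belonging to the preservative theory or its critics.

```python
# Toy illustration of the intellectual maturity problem and the qualified
# preservative view. Standards are crudely modelled as the set of sources a
# thinker counts as adequate evidence for this kind of historical detail.

childhood_standards = {"tourist brochure", "school textbook", "teacher"}
adult_standards = {"scholarly monograph", "peer-reviewed article"}

source = "tourist brochure"  # how the date of the battle of Fulford was learnt

justified_when_acquired = source in childhood_standards   # True: fine for a child
meets_current_standards = source in adult_standards        # False: brochures no longer suffice

# Unqualified preservative view: recall simply preserves the original justifiedness.
unqualified_verdict = justified_when_acquired               # True

# Qualified view: the memory belief is justified only if the subject has good
# reason to think the original belief was justified by her *current* standards.
# The source has been forgotten, so suppose she has no such reason.
has_reason_original_meets_current_standards = False
qualified_verdict = has_reason_original_meets_current_standards  # False

print(unqualified_verdict, qualified_verdict)  # True False
```

On the unqualified view the recalled belief inherits its childhood credentials; on the qualified view it does not, which is just the reassessment that intellectual development seems to demand.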
3. Scratchpad Memory

Back to the issue we started with, and the distinction between long-term and working memory. If we define 'inference' very broadly as a process of moving from thought to thought in accordance with some general rules or principles (rarely explicit) determining the relation of later thoughts in the sequence to earlier ones, then most[6] of the phenomena studied by those interested in working memory are cases of inference. All inference involves memory, for the choice of which thought is to be next in the sequence depends upon what has gone before. In some logical inferences, what has gone before may determine a unique next thought, but that is the exception rather than the rule. However, in all cases of inference, the previous thoughts in the sequence will place some constraint upon the subsequent ones. Thus continuing an inferential sequence requires one to recollect at least some of the thoughts that went before. We can think of this as 'scratchpad memory', for the earlier thoughts are being retained for the purpose of allowing us to work out the subsequent steps in the sequence. Doing this does not always require one to recollect the decision which led to the previous thought being included; often it suffices to recollect simply that the thought occurred at an earlier stage of the sequence. Thus the recollection involved in inference can suffer from the garbage-in-garbage-out problem, but it is less obvious that it can suffer from the intellectual maturity problem.

[6] The exception would seem to be digit span.

To see this we should distinguish cases where an inference is continuous from cases where it is interrupted. There is a grey area consisting of interruptions which do not appear to affect the continuity of the inference, e.g. when I look out of the window at a bird while thinking through a problem, but this should not blind us to the existence of clear cases of both continuous and interrupted inferences. Once an inference has been interrupted, there is no constraint upon what can happen during the interruption. In particular, the subject may change her epistemic standards during the interruption, e.g. she comes to question a rule of inference like double negation elimination (DNE). In that case, if she resumes the inference without going through all the previous workings, but instead simply relies upon her previous conclusions, she will be vulnerable to the intellectual maturity problem. In contrast, during a continuous inference there do seem to be restrictions upon what else the subject is able to do; in particular, it appears that the subject is not able to go through the intellectual process of changing her epistemic standards. In which case the intellectual maturity problem cannot arise.

This claim has the status of a bold conjecture. It certainly seems to be contingent, for we are capable, to a limited extent, of 'thinking about two things at once', so it is hard to rule out a priori the possibility of a thinker engaging in an extended continuous inference while also going through a period of intellectual maturation. But a thinker for whom this was possible would be in a peculiarly difficult situation with respect to his own inferences: he would need to keep restarting them. So even if it is a contingent fact about us that we cannot undergo intellectual maturation while engaged in continuous inference, it is best seen not as a limitation but as an epistemic enabling condition.

The obvious exception to this is the process of reasoning about one's epistemic standards. But this is clearly a very special, and peculiarly problematic, case for the epistemologist, involving some sort of second-order reflection. Setting that aside, it seems that the use of scratchpad memory in continuous inferences is immune to the intellectual maturity problem. In which case there are epistemological grounds for treating that function of memory separately.

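Before drawing conclusions, one more illustrative sketch, again entirely my own stipulation rather than anything in the literature discussed: rules of inference stand in for epistemic standards, and the scratchpad is just the set of thoughts retained so far. The point is the asymmetry identified above: a continuous inference carries its scratchpad forward under standards that stay fixed for the duration, whereas an interrupted inference resumes from stored conclusions under whatever standards the thinker now holds, which is precisely where the intellectual maturity problem can re-enter.

```python
from typing import Callable, List, Set

Thought = str
Rule = Callable[[Set[Thought]], Set[Thought]]  # a rule licenses new thoughts from earlier ones

def dne(thoughts: Set[Thought]) -> Set[Thought]:
    """Double negation elimination: from 'not not P' infer 'P'."""
    return {t[len("not not "):] for t in thoughts if t.startswith("not not ")}

def continuous_inference(premises: Set[Thought], rules: List[Rule], steps: int = 3) -> Set[Thought]:
    # The scratchpad retains the earlier thoughts; the rules (standing in for the
    # thinker's epistemic standards) are fixed for the whole run, so no change of
    # standards can intervene mid-inference.
    scratchpad = set(premises)
    for _ in range(steps):
        for rule in rules:
            scratchpad |= rule(scratchpad)
    return scratchpad

# Before the interruption she accepts DNE and so concludes 'P' from 'not not P'.
stored_conclusions = continuous_inference({"not not P"}, rules=[dne])

# During the interruption she comes to question DNE (her standards change).
# Resuming from the stored conclusions inherits 'P' without re-deriving it:
# the intellectual maturity problem for interrupted inference, in miniature.
resumed = continuous_inference(stored_conclusions, rules=[])
print("P" in resumed)  # True, though her current standards would no longer license the step
```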
4. Conclusions

1. Psychologists do, and epistemologists should, treat the role of memory in inferences as a special case of memory.

2. It was important to the identification of the epistemologically special case that we restricted attention to continuous inferences. The similarity between working memory and scratchpad memory suggests that we might be able to investigate the relevant notion of continuity empirically.

3. A common argument for the preservative theory emphasizes its success in dealing with the role of memory in inference. But the non-occurrence of the intellectual maturity problem in inference prevents the generalization to other cases of memory.

4. From an epistemological point of view, the usefulness of our inferential cognitive functions depends upon a contingent limitation of our minds.

References

Baddeley, A. D., & Logie, R. H. (1999). 'Working Memory: The Multiple Component Model'. In A. Miyake & P. Shah (eds.), Models of Working Memory. New York: Cambridge University Press.
Burge, T. (1993). 'Content Preservation'. Philosophical Review, 102, 457-488.
Burge, T. (1997). 'Interlocution, Perception, and Memory'. Philosophical Studies, 86, 21-47.
Dummett, M. (1993). 'Testimony and Memory'. In The Seas of Language. Oxford: Oxford University Press.
Ericsson, K. A., & Kintsch, W. (1995). 'Long-Term Working Memory'. Psychological Review, 102, 211-245.
Owens, D. (1999). 'The Authority of Memory'. European Journal of Philosophy, 7, 312-329.

Sep 17, 2010 - Osborn for providing us with the data used in the empirical example. ..... (2003) for an analysis of the impact of centering in covariance matrix.