LEARNING TO REASON: A REASON!-ABLE APPROACH

TIM VAN GELDER
Department of Philosophy, University of Melbourne, Parkville VIC 3052, Australia
Email: [email protected]

How are general informal reasoning skills acquired? Little research has been done on this topic. Two hypotheses dominate. According to the strong situated learning hypothesis, there are no general informal reasoning skills (only context- or domain-specific skills), and so nothing can be done to improve them. According to the quality practice hypothesis, general informal reasoning can be improved through intensive quality practice. These hypotheses were evaluated in the context of a one-semester undergraduate reasoning course based on Reason!, a software environment for quality practice. Results provide tentative support for the quality practice hypothesis.

1. Background

Informal reasoning is negotiating the webs of inferential dependence among sentences of a natural language. It contrasts with formal reasoning, of the kind found in mathematics, computer science, puzzles, and games such as chess. Informal reasoning involves activities such as distinguishing principal claims from the reasons or evidence provided in their support; supporting claims with reasons; evaluating the quality of reasons; challenging reasons, and rebutting challenges; and evaluating the overall case for a claim. Informal reasoning is closely related to informal logic (which tends to focus attention narrowly on the claim/reason relation) and argumentation (the deployment of informal reasoning in attempts to resolve disputes). Informal reasoning is a central component of critical thinking.

How good are people at informal reasoning? U.S. studies indicate that although some are quite good at it, average levels are quite poor [13, 27]. For example, in the most thorough study of this topic, Kuhn found that a majority of the population cannot reliably produce genuine evidence for their opinions, entertain counterarguments, or rebut counterarguments. I have not found comparable studies in an Australian context, but it is plausible that they would give broadly similar results. This is supported by experience in teaching. For example, I gave 95 University of Melbourne undergraduates a homework exercise in which they were asked to "analyse and evaluate the main argument" of a particular book chapter. Almost none performed the task adequately; indeed, very few even understood what the task was.

It is widely assumed that responsibility for improving higher cognitive skills such as informal reasoning rests with the educational system [e.g., 33]. Does it deliver? People who are more highly educated are better at informal reasoning [13], but this may be only because the education system selects for reasoning skill.
van Gelder, T. J. (2000) Learning to reason: A Reason!-Able approach. In C. Davis, T. J. van Gelder, & R. Wales ed., Cognitive Science in Australia, 2000: Proceedings of the Fifth Australasian Cognitive Science Society Conference. Adelaide: Causal.

Informal reasoning skills also improve during education [25], but this may be largely due to maturation. There is remarkably little evidence that education substantially improves informal reasoning once other variables have been factored out, and some authors have argued for pessimism on this score [26]. In any case, the low average levels of ability indicate that general education is not effective enough.

What about educational strategies specifically targeted on informal reasoning? Although the evidence here is mixed, on balance it indicates that such strategies are not very effective. For example, McMillan reviewed all the studies he could find of deliberate attempts to improve critical thinking, and found that overwhelmingly these interventions failed to yield significant benefits [19]. Nickerson et al. reached a similar verdict in their review of general thinking skills programs [22]. I reviewed all prior studies of one-semester undergraduate courses in critical thinking or informal logic, and found little reason to think they make any appreciable difference [29].

The points made so far indicate that we need new, improved educational methods. Any such method will have to be solidly grounded in knowledge, from cognitive science, of how informal reasoning skills are acquired [3]. What, then, is known about the acquisition of informal reasoning skills?

2. How are informal reasoning skills acquired? Two Hypotheses

Unfortunately, there has been very little research targeted on the problem of how general informal reasoning skills are acquired. For example, a recent review chapter entitled "Acquiring Intellectual Skills" [31] contained just two pages on informal reasoning skills, and said little about how those skills are improved. There has also been a great deal of research in the psychology of reasoning, but it too has largely ignored the problem of acquisition. Still, there has been considerable research on the development of cognitive skills and of expertise in complex domains. This research gives rise to two contradictory hypotheses.

The Quality Practice Hypothesis. One of the most important lessons from research on skill acquisition has been that practice is the crucial ingredient [e.g., 5]. However, the nature of that practice matters greatly. First, psychologists have identified various features of practice leading to higher rates of improvement and levels of achievement in cognitive skills. Thus practice should at least be (a) deliberate, i.e., self-monitoring with full concentration; (b) graduated, so that mastery of easier levels precedes practice at more advanced levels; (c) under supervision providing scaffolding, explicit guidance and feedback; and (d) based on learning by study of worked examples [6, 7, 30, 31]. Second, if we are to see improvements in general informal reasoning skills, the practice must itself be suitably general. By this I do not mean that the practice should be abstract and content-free. Rather, the general skills are practiced in a wide variety of domains, so that the skills become abstracted from any particular content. Finally, it is clear that mastering informal reasoning is a very challenging business, comparable to an adult gaining fluency in a second language, or reaching a high level of expertise in a sport or profession. Practice over a few hours or weeks is unlikely to produce dramatic gains. As with other complex, challenging skills, we should expect appreciable gains to accrue only with intensive practice over long periods. The quality practice hypothesis, then, is that deliberate, graduated, supervised, worked-example-based practice, in a wide variety of domains and sustained over long periods, can generate substantial improvements in general informal reasoning skills.

The Strong Situated Learning Hypothesis. Although the quality practice hypothesis can seem obvious or trivial, it does stand in some tension with much of the thinking found in what is known as the "situated cognition" literature. A range of views can be found in this literature, so in order to maximize contrast with the quality practice hypothesis, I focus here on the most extreme version, which can be called the Strong Situated Learning Hypothesis. According to this hypothesis, all thinking skills are inherently tied to particular contexts or contents, and so there is really no such thing as general thinking skills; for that reason, such skills cannot be acquired through practice. One consequence is that practice on a skill in one context will not "transfer" to other contexts. Another consequence is that training on general, abstract rules and principles will yield little benefit [4, 15, 16, 20].

Preliminary Support for the Quality Practice Hypothesis. The situated learning hypothesis draws support from many studies finding failure to transfer, and also from evidence of the kind described above concerning how bad people typically are at general informal reasoning, and how difficult it is to improve such skills through education. For such reasons, over the course of this century, the situated learning hypothesis has come to occupy a dominant position. In the past decade, however, a compromise position has emerged. Anderson, Reder & Simon, for example, have argued:

    In general, situated learning focuses on some well-documented phenomena in cognitive psychology and ignores many others: while cognition is partly context-dependent, it is also partly context-independent; while there are dramatic failures of transfer, there are also dramatic successes; while concrete instruction helps, abstract instruction also helps... [2]

In the case of informal reasoning, there is accumulating evidence that the strong "situated" view is too crude. First, it is indisputable that there are general informal reasoning skills [8], and that some people have acquired high levels of expertise. For example, Kuhn's sample included some postgraduate students in philosophy. She notes that these philosophers all exhibited "perfect" performance, and that this proves that it is possible to have expertise in reasoning itself, independent of any particular content. Second, recent research has made clear that for some specific kinds of reasoning, people do have general skill, this skill can be acquired, and practice on these skills transfers; in short, for some specific components of informal reasoning the practice hypothesis is roughly right [12, 17, 23, 28]. Third, there are some educational strategies that do appear to enhance informal reasoning.

In Philosophy for Children classes, primary school children engage in lengthy group discussion of philosophical problems. With remarkable consistency, studies of this program have found significant gains in reasoning skills [18, 21]. Similarly, a recent meta-analysis found that sustained involvement in debating (e.g., participation in "forensics" in the U.S.) produces substantial improvement in general informal reasoning [1]. Both Philosophy for Children and debating actively engage their participants in informal reasoning on a range of different topics.

3. Evaluating the Hypotheses

It appears, then, that the truth probably lies somewhere in between the two extremes defined by radical versions of the strong situated learning and quality practice hypotheses. The interesting issue is not which one is true, but the extent to which each is true. We need to answer questions such as how much reasoning skills can be improved through practice, and what kinds of practice are most beneficial.

Addressing these questions in a systematic empirical manner requires having large numbers of subjects engage in sustained quality practice of various kinds, gathering data on their practice activities and on the amount they improve, and analysing the data for interesting relationships. Conducting a study of this kind presents great practical challenges. How do you get subjects doing lots of hard practice? How do you ensure that what they do is quality practice? How do you measure the amount of practice they are doing? And how do you measure their improvement?

In 1999 I conducted a study which took advantage of the fact that some of these challenges had already been met in an undergraduate reasoning course (P003) at the University of Melbourne. This is a course about reasoning, but it is also intended to enhance students' ability to reason. It is taught twice each year. In the first semester of 1999, it was taught in a traditional "chalk and talk" manner, with the usual round of lectures, tutorials, homework, and exams. In the second semester it was taught using an alternative method based on intensive practice with a new software package, Reason!. The new course design was introduced with the aim of doing better than the traditional approach at improving students' reasoning skills. To evaluate whether it was achieving this goal, I assessed improvement by pre- and post-testing in both semesters.
Although the initial goal was to see whether the new method outperformed the old, I realised that this same exercise could be used to gain insight into the relative merits of the situated learning and quality practice hypotheses.

4. Quality Practice: The Reason! Package

The best way to have people engage in quality practice is to have them coached, individually or in small groups, by human experts. Obviously, however, this would be far too costly. The challenge then becomes to find another way to improve the quality of practice. Over the past few years colleagues and I have been developing the Reason! software package. Reason! is a PC program designed to (a) introduce students to the fundamentals of informal reasoning, and (b) provide an environment for quality practice of emerging reasoning skills. Reason! is designed to leverage the available human intelligence (that of the students themselves, their peers, the instructor, and tutors) to maximum effect. Although Reason! may never be as good as individual attention from an expert human coach, it does improve the quality of practice in many ways. Currently it provides:

• scaffolding for students' activities, based on graphical tools for vividly representing reasoning and users' evaluations of that reasoning;
• capacity to handle arbitrarily complex reasoning on any topic, including multiple reasoning "players," multiple evaluative perspectives, and multiple models of a given piece of reasoning;
• support for analysing and evaluating texts, or for producing one's own arguments;
• context-sensitive guidance at every point in the complex processes of informal reasoning;
• feedback mechanisms based on worked examples;
• graduated exercise sets;
• extensive help, including guides and a glossary.

Figure 1. Screenshot of the Reason! learning environment.
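As a rough illustration of the kind of argument-tree representation such an environment manipulates, here is a minimal sketch. The node structure, field names, and rendering are assumptions for exposition, not Reason!'s actual internals:

```python
from dataclasses import dataclass, field

# Hypothetical node type: each node is a claim, reason, or objection,
# optionally overlaid with the student's evaluative judgement.
@dataclass
class Node:
    text: str                # the sentence at this point in the tree
    role: str = "reason"     # how this node bears on its parent
    evaluation: str = ""     # the student's judgement, e.g. "true", "weak"
    children: list = field(default_factory=list)

def render(node, depth=0):
    """Return the tree as indented outline lines, evaluations overlaid."""
    mark = f" [{node.evaluation}]" if node.evaluation else ""
    lines = ["  " * depth + f"{node.role}: {node.text}{mark}"]
    for child in node.children:
        lines += render(child, depth + 1)
    return lines

tree = Node("The park should be closed at night", role="claim", children=[
    Node("Vandalism occurs mostly after dark", evaluation="true"),
    Node("Closure would inconvenience shift workers", role="objection",
         evaluation="weak"),
])
print("\n".join(render(tree)))
```

The point of the tree form is that structure (what supports what) and evaluation (how well it supports it) are kept visually distinct but co-located, which is what the screenshot in Figure 1 shows.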

A student using Reason! to evaluate the main argument of a text (say, a letter to the editor) first gradually constructs an argument tree: a graphical representation of the various parts of the reasoning and how they fit together. The student then goes to the evaluation phase, and enters evaluative judgements at appropriate places on the tree. At any time the student can click anywhere on the tree and will be prompted to consider the most pertinent issue arising at that point (e.g., whether the selected premise is true). The result is an easy-to-understand graphical representation of the structure of the reasoning, overlaid with the student's own opinion as to its quality. In the final phase of the process, the student prepares a report which includes the argument tree, a reformulation of the reasoning in her own words, and a paragraph summarising her evaluation.

5. Reason!-based intensive practice

In P003, students engaged in Reason!-based practice over a period of approximately 16 weeks. They were required to hand in homework consisting of exercises from 5 modules distributed over the 16-week period. Each exercise was quite labour-intensive if done properly. The number of exercises per module varied from 14 in the first to 3 in the final module. Additionally, the students took 5 module tests and received feedback on the first four. It is impossible to be very exact about how many hours of practice they actually did, but they were officially expected to devote around 10 hours per week, and clearly many students were doing at least this much. There is no question that they were doing a great deal more practice than students in a traditional course.

6. Measuring Improvement

Students in both semesters were pre- and post-tested using the Watson-Glaser Critical Thinking Appraisal (WGCTA) [32]. For decades this has been the most widely used research instrument in the area of critical thinking and informal reasoning. It is a multiple-choice test taking approximately 45 minutes. It has two forms; I used a crossover design to allow for any difference between the two forms. Pre-testing was in the first week of semester; post-testing was in the final week (week 12).

Some researchers have reservations about the WGCTA, not least because it is a multiple-choice test [9]. For this reason I also pre- and post-tested students in the Reason! group with another, specially devised test. Students were presented with a short (one-paragraph) argumentative text, and were asked to identify the main conclusion, reformulate the reasoning, and evaluate the reasoning. Average scores were low, partly because I used a very strict scoring scheme, but partly also because students found the task very difficult. The test faithfully reflected the nature of the skills covered in the course.
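The crossover design mentioned above works by counterbalancing: half the students take Form A at pre-test and Form B at post-test, and the other half the reverse, so any systematic difference between the two forms cancels out across the group. The assignment scheme below is a sketch for illustration only, not the study's actual procedure:

```python
import random

def assign_crossover(students, seed=0):
    """Counterbalanced assignment of WGCTA forms.

    Returns a mapping from student to a (pre-test form, post-test form)
    pair; half get ("A", "B"), the other half ("B", "A").
    """
    rng = random.Random(seed)        # fixed seed for a reproducible roster
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    assignment = {}
    for s in shuffled[:half]:
        assignment[s] = ("A", "B")
    for s in shuffled[half:]:
        assignment[s] = ("B", "A")
    return assignment

plan = assign_crossover([f"s{i}" for i in range(6)])
print(sorted(plan.values()))
```

Because each form appears equally often at pre-test and at post-test, a form that happens to be slightly harder inflates and deflates scores in equal measure, leaving the mean pre-to-post change unbiased.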

7. Results

The results are summarized in Table 1 and Figure 2.

Table 1: Results of pre- and post-testing two versions of P003. Scores are number correct out of 80 for the WGCTA, and raw scores for the written test.

                                     WGCTA                WGCTA                      Written test
                                     Traditional course   Reason!-based course       Reason!-based course
                                     n=32                 n=53; 11 weeks b/w tests   n=60; 15 weeks b/w tests

  Pre-test score                     63.8                 61.3                       1.81
  Post-test score                    64.0                 64.9                       3.00
  Improvement (mean paired diff.)    0.18                 3.58                       1.19
  % improvement                      0.282%               5.84%                      65%
  Effect size                        0.02                 0.41                       0.51
  p                                  0.44                 0.0004                     0.001
  r                                  0.02                 0.45                       0.38
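The summary statistics reported in Table 1 are related by standard formulas: the mean paired difference is the average of each student's post minus pre score; one common effect-size convention divides that by the pre-test standard deviation; and the paired t statistic divides it by the standard error of the differences. A minimal sketch with hypothetical scores (not the study's raw data, and the paper's exact effect-size convention may differ):

```python
import math
import statistics

def paired_stats(pre, post):
    """Mean paired difference, effect size (diff / SD of pre-test scores),
    and the paired t statistic for matched pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_diff = statistics.mean(diffs)
    effect_size = mean_diff / statistics.stdev(pre)  # one common convention
    t = mean_diff / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
    return mean_diff, effect_size, t

# Hypothetical pre/post scores for eight students, for illustration only
pre  = [58, 61, 63, 59, 64, 62, 60, 65]
post = [62, 63, 66, 61, 68, 64, 63, 69]
mean_diff, d, t = paired_stats(pre, post)
print(round(mean_diff, 2), round(d, 2), round(t, 2))
```

Note how the paired design gains power: the t statistic depends on the spread of the individual differences, not of the raw scores, so consistent small gains can still be highly significant.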

The traditional group showed no appreciable gain in reasoning skill; this is consistent with previous studies [29]. The Reason!-based intensive practice group, however, showed substantial gains, of approximately the same magnitude, on both tests, gains that are very unlikely to be due merely to random variation.

8. Discussion

In the study, students doing intensive quality practice over a semester improved their general informal reasoning skills by approximately 0.45 of a standard deviation (averaging over the two tests). Two questions arise. Is this gain noteworthy? And what can we infer about the quality practice hypothesis?

While 0.45 is generally considered only a "medium" effect size [10], it is comparable to typical effect sizes for some of the most effective techniques in higher education, such as the personalised system of instruction [14]. More importantly, it is greater than the gain found in all comparable prior studies (i.e., studies of the efficacy of one-semester informal reasoning or critical thinking courses), and it is much larger than the gain found in the students taking the same course taught by traditional means. Additionally, there are some reasons to suspect that the gains may in fact be larger than the data indicate. First, improvement on the WGCTA may have been reduced by a ceiling effect, since many students put in almost flawless performances even on the pre-test. Second, the WGCTA is reputed to underreport gains [1]. Third, the effect size may increase if certain statistical corrections are made [11].

Figure 2: Comparison of two informal reasoning courses for improvement measured on the WGCTA. The Reason!-based course involved intensive quality practice.

On the other hand, the gain in the Reason!-based course looks less impressive when compared with results found in some other studies using the WGCTA. For example, Pascarella [24] found that first-year college students improved around 0.75 SD, and that merely attending college accounted for about 0.44 SD (when compared with non-college controls). In other words, going by Pascarella's results, we should expect college students to improve by a few tenths of a standard deviation over one semester even if they don't take any reasoning course. (This raises interesting questions about reasoning courses, such as the traditionally-taught version of P003 in this study, which find no improvement.) Second, the gain in the Reason!-based course is only slightly larger than the average gain found in studies of debating, forensics and communication skills courses [1]. In other words, put crudely, to improve your reasoning you might just as well take a good debating course. Another reason to avoid getting too excited is my own casual, subjective impression, looking at the performance of the students on the written post-test, that the improvement was depressingly slight compared with both our hopes and the students' abilities as exhibited in their homework.

So what does all this imply for the quality practice hypothesis? In the most general terms, I take the results to be consistent with the compromise position mooted above. Contrary to what radical situated learning theorists maintain, it looks like general informal reasoning skills can be improved; but they are right to claim that such improvements are very difficult to achieve. Further, I tentatively attribute the gains to the intensive quality practice, as facilitated by the Reason! software package, rather than to any other feature of the semester 2 course (e.g., the instructor).

9. Future Directions

Investigating the quality practice hypothesis in the manner described above is a bit like opening a quail egg with a sledgehammer. A more careful and comprehensive set of studies is needed to gain more detailed insight into the relationship between quality practice and improvement. In particular, future studies will have to collect much more fine-grained information about the nature and amount of practice that subjects are doing, and more sensitive data on improvement. There are plans to implement an online version of the Reason! package, which would allow us to gather large amounts of detailed data on practice activities. Gathering better data on improvement may be possible using another instrument, such as the GRE Writing Assessment Task, recently developed by the Educational Testing Service.

10. References

1. Allen, M., Berkowitz, S., Hunt, S., & Louden, A. (1999) A meta-analysis of the impact of forensics and communication education on critical thinking. Communication Education, 48, 18-30.
2. Anderson, J. R., Reder, L. M., & Simon, H. A. (1996) Situated learning and education. Educational Researcher, 25, 5-11.
3. Bruer, J. T. (1993) Schools for Thought: A Science of Learning in the Classroom. Cambridge MA: MIT Press.
4. Brown, J. S., Collins, A., & Duguid, P. (1989) Situated cognition and the culture of learning. Educational Researcher, 18, 34-41.
5. Chase, W. G., & Simon, H. A. (1973) The mind's eye in chess. In W. G. Chase ed., Visual Information Processing. New York: Academic, 215-281.
6. Ericsson, K. A., & Charness, N. (1994) Expert performance. American Psychologist, 49, 725-747.
7. Ericsson, K. A., & Lehmann, A. C. (1996) Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47, 273-305.
8. Falmagne, R. J. (1990) Language and the acquisition of logical knowledge. In W. F. Overton ed., Reasoning, Necessity, Logic: Developmental Perspectives. Hillsdale, NJ: Erlbaum, 111-31.
9. Fisher, A., & Scriven, M. (1998) Critical Thinking: Its Definition and Assessment. EdgePress.
10. Hedges, L. V., & Olkin, I. (1985) Statistical Methods for Meta-Analysis. Orlando FL: Academic Press.
11. Hunter, J., & Schmidt, F. (1990) Methods of Meta-Analysis: Correcting for Error and Bias in Research Results. Beverly Hills CA: Sage.
12. Kosonen, P., & Winne, P. H. (1995) Effects of teaching statistical laws on reasoning about everyday problems. Journal of Educational Psychology, 87, 33-46.
13. Kuhn, D. (1991) The Skills of Argument. Cambridge: Cambridge University Press.
14. Kulik, J. A., Kulik, C. C., & Cohen, P. A. (1979) A meta-analysis of outcome studies of Keller's personalized system of instruction. American Psychologist, 34, 307-318.
15. Lave, J. (1988) Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. New York: Cambridge University Press.
16. Lave, J., & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. New York: Cambridge University Press.
17. Lehman, D. R., Lempert, R. O., & Nisbett, R. E. (1988) The effects of graduate training on reasoning: Formal discipline and thinking about everyday-life events. American Psychologist, 431-442.
18. Lipman, M., & Gazzard, A. (1988) Philosophy for Children: Where are we now? Thinking, S2-S11.
19. McMillan, J. (1987) Enhancing college students' critical thinking: A review of studies. Research in Higher Education, 26, 3-29.
20. McPeck, J. E. (1990) Teaching Critical Thinking: Dialogue and Dialectic. New York: Routledge.
21. Morehouse, R., & Williams, M. (1998) Report on student use of argument skills. Creative and Critical Thinking, 6, 14-20.
22. Nickerson, R. S., Perkins, D. N., & Smith, E. E. (1985) The Teaching of Thinking. Hillsdale NJ: Erlbaum.
23. Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987) Teaching reasoning. Science, 238, 625-631.
24. Pascarella, E. (1989) The development of critical thinking: Does college make a difference? Journal of College Student Development, 30, 19-26.
25. Pascarella, E. T., & Terenzini, P. T. (1991) How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass.
26. Perkins, D. N. (1985) Postprimary education has little impact on informal reasoning. Journal of Educational Psychology, 77, 562-71.
27. Perkins, D. N., Allen, R., & Hafner, J. (1983) Difficulties in everyday reasoning. In W. Maxwell & J. Bruner ed., Thinking: The Expanding Frontier. Philadelphia PA: The Franklin Institute Press, 177-189.
28. Stenning, K., Cox, R., & Oberlander, J. (1995) Contrasting the cognitive effects of graphical and sentential logic teaching: reasoning, representation and individual differences. Language and Cognitive Processes, 10, 333-354.
29. van Gelder, T. J. (2000) The Efficacy of Informal Reasoning Courses. Preprint No. 1/98, University of Melbourne Department of Philosophy.
30. VanLehn, K. (1996) Cognitive skill acquisition. Annual Review of Psychology, 47, 513-39.
31. Voss, J. F., Wiley, J., & Carretero, M. (1995) Acquiring intellectual skills. Annual Review of Psychology, 46, 155-81.
32. Watson, G., & Glaser, E. M. (1980) Watson-Glaser Critical Thinking Appraisal, Forms A and B. New York: Psychological Corporation.
33. West, R. (1998) Learning for Life: Review of Higher Education Financing and Policy. Canberra: Australian Government Publishing Service.
