
PERCEPTIONS OF MALAYSIAN SCHOOL AND UNIVERSITY ESL INSTRUCTORS ON WRITING ASSESSMENT

Jayakaran Mukundan & Touran Ahour
Department of Language and Humanities Education, Faculty of Educational Studies, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
Phone: (60 3) 8946 6000
[email protected]

Received/Accepted: 12 January 2009 / 26 August 2009

ABSTRACT

This study is an attempt to find out the attitudes of writing teachers and lecturers towards the assessment of writing. For this purpose, a survey using a questionnaire and semi-structured interviews was conducted among ESL (English as a Second Language) university lecturers and school teachers, and the data were analyzed descriptively and qualitatively. The results indicate that university lecturers (UL) prefer impressionistic scoring while school teachers (ST) prefer criterion-based scoring. Both groups express a preference for explicit instruction in the evaluation criteria and for identifying students' weaknesses and strengths. Contrary to the common belief that grammar, vocabulary, and mechanics are the most essential criteria in scoring, the results show that fluency is viewed as more important than accuracy. It is also found that years of teaching experience are not an influential factor in the responses.

KEY WORDS

English as a Second Language, writing assessment, impressionistic scoring, criterion-based scoring, explicit instruction, evaluation criteria, students' weaknesses and strengths in writing


1. INTRODUCTION

One important issue that requires more investigation is classroom-based writing assessment, where the teacher is the sole assessor of the students' writing. In such situations, improving the quality of students' writing by diagnosing their strengths and weaknesses is of prime value. The direct test of writing, in which students produce an actual sample of writing such as an essay (Weigle, 2002), dominates in classrooms, and these samples are typically evaluated on the basis of the teacher's impressionistic judgment. Although this kind of evaluation might be considered simple, less time-consuming, and more valid, it may be less reliable and may not reveal the students' actual writing ability across the different dimensions of writing. What is required instead is a well-thought-out, pre-specified scoring rubric that corresponds to the writing task and the purpose of the assessment. The problem is that teachers are so busy teaching the conventions of essay or composition writing (e.g., introduction, body, conclusion, thesis statement, supporting sentences, patterns of organization, linking words, and so on) that they neglect the essential task of creating explicit evaluation criteria and teaching them to the students. If students know in advance the criteria by which their writing will be evaluated, they can prepare themselves accordingly and improve the quality of their writing to meet the expected standards. In this case, the evaluation criteria serve not only as an assessment instrument but also as a teaching tool. However, little attention has been paid in the research literature to the teacher as assessor in classroom-based assessment and to related issues such as teachers' assessment beliefs and attitudes in different educational and cultural contexts (Davison, 2004) and English language teachers' ways of judging written work (Clarke and Gipps, 2000). This study seeks to analyze, theoretically and practically, the important views on these issues. To this end, we surveyed the attitudes of university lecturers (UL) and school teachers (ST) with different years of teaching experience in an ESL situation to identify their opinions towards writing assessment in terms of:
a) impressionistic and criterion-based types of scoring; b) explicit instruction on the evaluation criteria; c) the purpose of assessing students' writing; and d) the importance of different dimensions of students' writing. Hence the following research questions are posed:

1. What are the attitudes of university lecturers and school teachers toward writing assessment?
2. What are the similarities and differences in the perceptions of university lecturers and school teachers toward writing assessment?
3. Do years of teaching experience influence ULs' and STs' views toward writing assessment?

2. BACKGROUND

One way of assessing the writing performance of students is through the direct test of writing (i.e., essay writing), as opposed to completing test items, because it involves students in "actual writing" (Weigle, 2002; Braddock, Lloyd-Jones, and Schoer, 1963; Eley, 1955), which means they produce "at least one piece of continuous text" (Hamp-Lyons, 1991a, p. 5). This kind of writing test is considered the most valid measurement of writing ability for gathering information about the general level of students' writing proficiency (Schoonen, Vergeer, and Eiting, 1997; Diederich, 1946). Although in most writing classrooms students' essays are evaluated on the basis of the teachers' impressionistic judgment, criterion-referenced types of scoring such as the "ESL Composition Profile" (Jacobs et al., 1981) and the Test in English for Educational Purposes (TEEP) (Weir, 1988, 1990) provide diagnostic information for teachers that is not available through other methods (Jacobs et al., 1981; Raimes, 1990; Hamp-Lyons, 1991c). The analytic scoring scale (originating in L1 writing assessment with Diederich et al., 1961, and Diederich, 1974) has, in this regard, been shown to be more reliable than other types of scales such as the holistic scale (White, 1984) and Lloyd-Jones' (1977) primary trait scoring scale (Bachman and Palmer, 1996; Jacobs et al., 1981; Hamp-Lyons, 1991, 1995; Huot, 1990; Weigle, 2002). Despite its wide use in writing assessment and research, holistic scoring, which is faster, less expensive, and arguably more valid (Perkins, 1983; White, 1984, 1985), has recently been criticized as a procedure that fails to provide sufficient information on writing performance (Elbow, 1996). Hamp-Lyons (1995) argues that because of the complex and multi-faceted nature of writing, the writing of L2 students may show varied performance on different traits; consequently, a great deal of information may be lost when a single score is assigned to a piece of writing. In addition, holistic scoring is "reductive, reducing the writer's cognitively and linguistically complex responses to a single score" (Hamp-Lyons, 1991b, p. 244), while "the quality is more than the sum of the rubricized parts" (Wilson, 2006, p. xv). In the same vein, "the rating scale may confound writing ability and language proficiency" (Cohen, 1994, p. 316). Compared to holistic scoring, which is at least based on a scoring rubric, general impression grading, in which criteria are never explicitly stated, is less reliable (Weigle, 2002). This kind of grading, which is prevalent in writing courses in schools and universities all over the world, rests solely on the subjective judgment of the teacher and is unlikely to distinguish students' ability across the different dimensions of writing.

The reality is that in the assessment of writing ability, not all dimensions of the written product are considered equally important. In composition courses, for example, more emphasis is given to communicative effectiveness than to specific language features (Weigle, 2002). As Cohen (1994) indicates, several factors (e.g., time available for assessment, cost of assessment, ease of assessment, relevance of the dimension to the given task) affect the choice of dimensions (e.g., content, rhetorical structure, organization, vocabulary, style, grammar, spelling, punctuation, accuracy of meaning) in assessing students' writing ability. In other words, the quality of students' writing in classroom-based assessment is better evaluated in light of the purpose of the assessment (Cohen, 1994; Haines, 2004). If the teacher wants to identify the students' command of content, organization, vocabulary, and so on, an analytic scoring scale is a reliable asset. Applying holistic scoring or impressionistic judgment, by contrast, would yield a somewhat misleading interpretation, because a single score does not reflect the ability and knowledge of the students in the different dimensions of writing (Weigle, 2002). It is therefore not justified to generalize a single score to a student's ability in the various dimensions of writing, especially in a second language situation, where "different aspects of writing ability develop at different rates for different writers" (Weigle, 2002, p. 114). In this case, impressionistic scoring helps neither the teacher nor the students to identify the students' true writing ability or to discriminate the areas of their strengths and weaknesses in writing.
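
To make the contrast concrete, the following sketch shows one way an analytic scale of the kind discussed above could be represented and applied. It is a minimal illustration, not any published instrument: the dimension weights loosely echo the Jacobs et al. (1981) ESL Composition Profile rather than reproducing it, and the function names are ours.

# A minimal sketch of analytic scoring: unlike a single holistic score,
# it preserves a per-dimension breakdown for diagnostic feedback.
ANALYTIC_WEIGHTS = {  # illustrative weights summing to 100
    "content": 30,
    "organization": 20,
    "vocabulary": 20,
    "language_use": 25,
    "mechanics": 5,
}

def analytic_score(ratings):
    """Combine per-dimension ratings (0.0-1.0) into a weighted total
    out of 100, keeping the breakdown a holistic score collapses."""
    breakdown = {dim: round(ratings[dim] * weight, 1)
                 for dim, weight in ANALYTIC_WEIGHTS.items()}
    return breakdown, round(sum(breakdown.values()), 1)

# An essay strong in content but weak in mechanics: the total alone
# hides exactly where the writer needs more practice.
breakdown, total = analytic_score({"content": 0.9, "organization": 0.8,
                                   "vocabulary": 0.7, "language_use": 0.6,
                                   "mechanics": 0.4})
print(breakdown)  # {'content': 27.0, 'organization': 16.0, 'vocabulary': 14.0, ...}
print(total)      # 74.0
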
In this vein, assessment is considered "the process by which we measure the achievement and progress of the learner" (Haines, 2004, p. 31). There are some common concerns in assessment that oblige teachers to investigate it further. Haines (2004, p. 7) lists some questions asked by novice lecturers in higher education at a training workshop enquiring about their roles and practices in assessment. Some of the questions are:

• I'd like to know how to mark quickly; how do you do it?
• I want to know how to achieve objectivity and consistency when some aspects are excellent and others are lacking within a piece of work, i.e., how do I deal with a half model answer?
• I'd like to know about the marking schemes and model answers. How much should the student know?
• How do you know whether the mark you have given is the 'right one'?

These kinds of questions are asked all over the world by reflective teachers who want to help their students make greater progress in writing. Haines tries to provide practical answers to such questions throughout her book Assessing Students' Written Work. She also advises teachers to make their assessment criteria as explicit as possible and to explain to students exactly what standards are expected of them (Haines, 2004). For example, the following standard for assessing students' written work was set by a science lecturer:

So long as students can show me they understand the concepts, I don't care if they write notes or bullet points, so long as I can read it and it answers the question – what more could I ask. I haven't written an essay since I was 15 either. (Haines, 2004, p. 63)

So, when students are made aware of the standard, they will try to meet the required criteria. In the following extract, Haines (2004) shows the attitude of a graduate teaching assistant in English Literature towards assessing essays:

There cannot be a more concrete experience than confronting a stack of student essays … what did I believe was important, in making a fair assessment of student work? I decided there were three aspects of marking which often occurred simultaneously. One was considering the student's essay in the sense of what they had written on the page. Another was considering comments to make in response to what students had written. The third was to assign a grade to what had been written. It was this third aspect which gave me most concern. (p. 89)

However, while assessment criteria create an equal chance for students to gain a high mark (Haines, 2004), familiarizing students with the evaluation criteria on which their writing will be judged also generates a goal-oriented situation in which students try to understand and meet their teachers' expectations for writing (Weigle, 2002). It can thus be concluded that explicit scoring rubrics used by the classroom teacher can act as a teaching tool as well as a testing instrument (Hamp-Lyons, 1991c; Ferris and Hedgcock, 1998). In addition, a scoring rubric provides a standard that allows the teacher to score students' essays consistently and more reliably (Weigle, 2002). Moreover, through consistent use of scoring criteria over time, the teacher can confidently assign scores and give feedback on students' texts (Ferris and Hedgcock, 1998). A similar idea is implied by what Bachman and Palmer (1996) call "interactiveness" in test taking, which concerns the interaction between the test taker and the test. This interaction is influenced by the rating scale if the test taker knows how his or her writing will be evaluated (Weigle, 2002). This, in turn, increases the effectiveness of writing classes in which students are aware of the evaluation criteria and practice towards that end, compared to traditional classes where the teacher's impressionistic grading is the sole determiner of the quality of students' writing and reveals nothing about the strong and weak aspects of their writing in its different dimensions. Based on the literature reviewed above, this study aims to find out English teachers' attitudes towards the assessment of writing in the classroom in an ESL situation.

3. METHOD

3.1. Respondents

The respondents of this study (N=35) were 20 university lecturers from Universiti Putra Malaysia and 15 school teachers who were also part-time students continuing their studies at the same university. They were 22 females
(10 university lecturers and 12 school teachers) and 13 males (10 university lecturers and 3 school teachers), whose experience as English teachers ranged from three months to 25 years.

3.2. Instrument

An attitude questionnaire with four questions was constructed to elicit the respondents' opinions on writing assessment. The questionnaire required the respondents to choose among the given options for each question. The questions enquired about their preferences regarding: a) impressionistic and criterion-based types of scoring; b) explicit instruction on the evaluation criteria; c) the purpose of assessing students' writing; and d) the importance of different dimensions of students' writing (see Appendix A). The respondents were also interviewed, through a semi-structured interview, to obtain further comments on their responses.

3.3. Data Collection

The data were collected from two faculties of Universiti Putra Malaysia: the Faculty of Educational Studies and the Faculty of Modern Languages. The questionnaires were distributed among the university lecturers in these faculties in the fields of TESL, English Language, English Literature, and Applied Linguistics. The questionnaires were also given to school teachers who were part-time master's students at the university. Of all the distributed questionnaires, a total of 35 were returned. Upon collection of the questionnaires, the respondents were interviewed to acquire more information about their answers.

3.4. Data Analysis

The collected data were analyzed descriptively and qualitatively. The answers to each question were counted for each group of respondents and then converted to percentages for ease of comparison. Some respondents chose more than one answer, and each of their choices was counted; this resulted in an increased number of answers for some questions. The results are shown in figures and tables for ease of comparison between the groups. For each question, a qualitative analysis of the responses, on the basis of the reasons given, was also conducted to find similarities and differences in the respondents' attitudes across the two groups.
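
Because the analysis is purely descriptive, it can be illustrated compactly. The sketch below, a hedged reconstruction rather than the authors' actual analysis code, counts every ticked option and expresses counts as percentages of respondents; since multiple answers are counted once per tick, percentages for a question can sum to more than 100, as in the figures that follow. The 17/12/6 split in the example is hypothetical, back-calculated only so the totals match the reported 23 and 18 ticks for Question 1.

from collections import Counter

def tabulate(responses, n_respondents):
    """responses: one list of ticked options per respondent.
    Returns each option's share as a percentage of respondents."""
    counts = Counter(opt for ticked in responses for opt in ticked)
    return {opt: round(100 * n / n_respondents, 2)
            for opt, n in counts.items()}

# Question 1, both groups pooled (N=35); the per-respondent split is
# hypothetical, chosen only to reproduce the reported option counts.
q1 = ([["impressionistic"]] * 17 + [["criterion-based"]] * 12
      + [["impressionistic", "criterion-based"]] * 6)
print(tabulate(q1, n_respondents=35))
# {'impressionistic': 65.71, 'criterion-based': 51.43} -- sums past 100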

4. RESULTS

The first question enquired about the respondents' opinions on impressionistic and criterion-based types of scoring for assessing students' writing.

Question 1: Which type of scoring do you use for assessing your students' essay writing? Why?

The overall responses of both groups, university lecturers (UL) and school teachers (ST) combined (Figure 1-1), indicate a high preference of 65.71% (23 out of 35) for impressionistic evaluation.

Figure 1-1. Attitudes of University Lecturers and School Teachers towards type of scoring (UL & ST combined: a = impressionistic, 65.71%; b = criterion-based, 51.43%)

However, the separate analysis (Figure 1-2 below) reflects the high priority (80%) that UL give to impressionistic scoring, compared with the 60% of ST who prefer the criterion-based type of scoring. In this regard, one ST with 15 years of experience explains that "all aspects of writing ability can be evaluated in this type of scoring and the teacher can help students find a clear view about their weakness or strength." Another ST with three years of teaching experience says that "my emphasis is more on grammar." Similarly, a UL with two years of experience indicates that "it is important to consider different criteria to find out the students' strength and weaknesses." These responses reflect remedial and diagnostic purposes as the main reasons of UL and ST, with different years of teaching experience, for choosing criterion-based scoring as an aid to improving students' writing.

In contrast, the respondents who prefer impressionistic scoring indicate that it is "more valid and reliable" (a UL with 5 years of experience) and that "analytic scoring is more time consuming" (a UL with 8 years of teaching experience). One UL with 20 years of teaching experience uses both types of scoring, depending on the purpose, and asserts: "it depends on the subject, I use impressionistic assessment if the content is more important and analytic if the language is more important."

Figure 1-2. Comparison of the attitudes of University Lecturers and School Teachers towards type of scoring (UL: impressionistic 80%, criterion-based 45%; ST: impressionistic 46.67%, criterion-based 60%)

Question 2: Do you give explicit instruction to your students about the criteria that will be used to evaluate their essay writing? If so, why?

The second question is related to the first in that, if teachers emphasize diagnosing the students' problematic areas, they should also be concerned with finding solutions for eliminating these problems and improving the students' writing performance. In this case, if criterion-based scoring scales or any teacher-made rubrics are employed, we want to know whether teachers inform the students of these criteria and train them through instruction to prepare them for better writing. As Figure 2-1 shows, 85.71% of all respondents (30 out of 35) agree on giving explicit instruction and informing the students of the evaluation criteria. A variety of reasons were identified through their comments. For example, a school teacher with 8 years of teaching experience says that she instructs the students before writing "to allow them to perform better". Some indicate that explicit instruction "guides the students in their writing" (ST with 17 years of experience in teaching), in which case "the students will have a clearer guideline in doing the writing" (ST with 2 years of experience), so that they will have "goal directed writing" (UL with 25 years of experience).

Figure 2-1. Attitudes of University Lecturers and School Teachers towards explicit instruction in the evaluation criteria (UL & ST combined: Yes 85.71%, No 14.28%)

Likewise, the separate analysis of the responses (Figure 2-2) indicates the high preference of UL (85%) and ST (86.67%) for explicit instruction in the evaluation criteria. On the other hand, the minority of the respondents (15% of UL, 13.33% of ST) expressed that they do not give instruction "because the students should be able to write without instruction as in the real situations" (ST with 4 years of experience) and also because "they should write creatively" (UL with 6 years of experience). One UL with one year of teaching experience asserts: "I want to know on which part of the writing students put more emphasis without instruction". The result shows that most of the UL and ST were willing to instruct and inform the students of the evaluation criteria, for it would help the students prepare themselves for future writing.

Figure 2-2. Comparison of the attitudes of University Lecturers and School Teachers towards explicit instruction in the evaluation criteria (UL: Yes 85%, No 15%; ST: Yes 86.67%, No 13.33%)

Question 3: What is your purpose of assessing students’ writings? Why?

The purpose of this question was to find out whether teachers assess students' writing only to give scores, as one of the requirements of universities and schools, or whether they want to improve students' writing by identifying the areas of their weakness and strength. As Figure 3-1 below shows, 85.71% (30 of 35) of the respondents indicated that they assess their students' writing to find out their strengths and weaknesses. Only 22.86% (8 of 35) of the responses gave assigning a score as the main reason for writing assessment.

Figure 3-1. Attitudes of University Lecturers and School Teachers towards the purpose of assessing students' writing (UL & ST combined: a = giving a score, 22.86%; b = identifying students' weaknesses and strengths, 85.71%)

Different reasons were given for choosing answer "b" by the majority of UL (85%) and ST (86.67%) (Figure 3-2 below). For example, a UL with ten years of teaching experience says, "I can identify their weakness and focus on those parts more." Most of the respondents referred to the issue of writing improvement, including a UL with five years of experience who mentions, "I want to improve the students' writing skill". Others note that "it is better to find the students' weakness and strength and to tell them in order to improve their weakness" (UL with 8 years of experience), because "their progress and effort is so important" (UL with 3 years of experience), and so "I can guide and give the solution for their weaknesses in writing" (UL with 2 years of experience). Similarly, one ST with three years of experience states that "just giving a score doesn't solve their problem in writing. They have to know about their weakness and strength for improving their writing." Another ST with two years of experience indicates her opposition to assessing students' writing merely for the sake of giving scores, and to expecting high scores of them, because "giving scores would increase the level of students' anxiety and thus they may feel it is more difficult to generate ideas while writing." On the other hand, an ST with one year of experience points to a possible strength of scoring: "after identifying their weakness through scores, I will give more exercise on the area where they are weak."

Figure 3-2. Comparison of the attitudes of University Lecturers and School Teachers towards the purpose of assessing students' writing (UL: giving a score 25%, identifying weaknesses and strengths 85%; ST: giving a score 20%, identifying weaknesses and strengths 86.67%)

In the case of answer "a", the UL (25%) and ST (20%) had similar reasons for giving scores to students' writing, considering it an "academic requirement". A few respondents considered both answers "a" and "b" important, "as a form of feedback to students" (UL and ST with 20 years of teaching experience), and because the "academic world requires assessing and remedial work" (UL with one year of experience). Figure 3-2 reflects the marked difference between the two responses. It shows the high priority that respondents (both UL and ST) with different years of teaching experience give to identifying students' weaknesses and strengths in writing.

Another question considered important to pose concerned the priority and importance teachers give to different dimensions of writing in the process of evaluation. For this, the respondents were given the following question:

Question 4: Which dimension in students' writing is more important to you? Why?


This question is an extension of Question 3, because when scoring and identifying the students' weaknesses and strengths, different teachers may have different priorities, emphasizing one aspect over the others. Figure 4-1 below indicates the high importance both the UL and ST give to the dimensions of "content" (88.57%) and "organization" (82.86%). Medium-range emphasis is given to the dimensions of "cohesion" (54.28%) and "vocabulary" (48.57%), followed by "syntax" (42.86%). The component of "mechanics" (22.86%) is given the lowest importance.

Figure 4-1. Attitudes of University Lecturers and School Teachers towards the importance of writing dimensions (UL & ST combined: content 88.6%, organization 82.9%, cohesion 54.3%, syntax 42.9%, vocabulary 48.6%, mechanics 22.9%)

This pattern is largely consistent with the separate analysis of the responses (Figure 4-2 below), which indicates the high priority the UL and ST give to the dimensions of "content" (95% vs. 80%) and "organization" (90% vs. 73.33%) respectively. Similar prominence is given to the dimensions of "cohesion" and "vocabulary" by UL (55% each) and ST (46.67% each). This may be because cohesion is closely related to the proper use of vocabulary in writing. In this regard, a UL with six years of teaching experience notes the importance of attention to cohesion in writing: "students can learn how to express their ideas cohesively using the language". An ST with 2 years of experience adds: "students need to be able to write something where the content is meaningful (able to generate meaning) to audiences. Vocabulary would help a lot and cohesion is important in connecting the ideas".


School teachers consider "syntax" more important (53.33%) than university lecturers do (35%). On the other hand, less emphasis is given to the dimension of "mechanics" by both UL (30%) and ST (13.33%).

Figure 4-2. Comparison of the attitudes of University Lecturers and School Teachers towards the importance of writing dimensions (UL: content 95%, organization 90%, cohesion 55%, syntax 35%, vocabulary 55%, mechanics 30%; ST: content 80%, organization 73.3%, cohesion 46.7%, syntax 53.3%, vocabulary 46.7%, mechanics 13.3%)

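The per-group percentages in Figures 4-1 and 4-2 follow from the same counting rule applied within each group (20 UL, 15 ST). As a check on the arithmetic, the sketch below recovers the Figure 4-2 percentages from option counts; the counts are back-calculated from the reported percentages for illustration, not taken from the raw questionnaire data.

def percent_by_group(counts, group_sizes):
    """Convert per-group option counts to percentages of group size."""
    return {group: {dim: round(100 * n / group_sizes[group], 1)
                    for dim, n in dims.items()}
            for group, dims in counts.items()}

# Counts back-calculated from Figure 4-2 (hypothetical reconstruction).
counts = {
    "UL": {"content": 19, "organization": 18, "cohesion": 11,
           "syntax": 7, "vocabulary": 11, "mechanics": 6},
    "ST": {"content": 12, "organization": 11, "cohesion": 7,
           "syntax": 8, "vocabulary": 7, "mechanics": 2},
}
print(percent_by_group(counts, {"UL": 20, "ST": 15}))
# UL: content 95.0, ..., mechanics 30.0; ST: content 80.0, ..., mechanics 13.3
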
Some respondents place emphasis on all dimensions, stating that "all aspects are taught in the course. As such, the students are expected to apply them in their writing" (UL with 20 years of teaching experience), that "without any of these, writing would not be meaningful" (UL with 5 years of experience), and that "all these components contribute to a compact and well organized essay" (ST with 17 years of teaching experience).

5. DISCUSSION AND CONCLUSION

The purpose of this study was to find out the perceptions of university lecturers and school teachers towards writing assessment in their classrooms with regard to the type of scoring method they employ, the instruction they give to students on the evaluation criteria, their purpose in assessing writing, and the hierarchy of importance they give to the dimensions of writing. The results indicate that although most of the university lecturers opt for impressionistic scoring, for reasons such as ease, validity, and lower consumption of time, most of the school teachers prefer analytic scoring for its value in diagnosing the students' problematic areas. The important point is that impressionistic judgment reduces the students' cognitively and linguistically complex responses to a single score, which cannot show the students' actual ability in the different dimensions of writing (Hamp-Lyons, 1991c; Weigle, 2002).
Conversely, criterion-based evaluation provides sufficient information on the quality of a student's writing and helps the teacher find out which components of writing the student has more problems with and needs more practice in. Regarding explicit instruction in the evaluation criteria, both groups of respondents express the view that evaluation criteria work as guidelines for students and help them write purposefully. Knowledge of the evaluation criteria is not always helpful, however; as one respondent stated, students may feel more anxious about meeting the specified criteria and may then be unable to reflect their actual intentions and their audience's expectations in their writing. This respondent's view may hold in exam-like situations, such as timed essay tests, in which students are required to write to specified criteria without previous knowledge of them. On the other hand, making students aware of the evaluation criteria through explicit teaching during in-class writing sessions can defuse such anxiety-provoking situations. In this way, students learn how to use the evaluation criteria and rubrics as guidelines for practicing, evaluating, and revising their own writing before submitting it to the teacher for final evaluation. Indeed, some researchers (Hamp-Lyons, 1991; Ferris and Hedgcock, 1998; Weigle, 2002) believe that explicit scoring rubrics can be of great benefit to classroom teachers. Students can be given the rubrics in advance and familiarized with the criteria on which their writing will be assessed. This gives students a better understanding of their teachers' expectations for writing and of whether their own writing meets those expectations. That might be why most of the respondents indicated that students get the greatest advantage from the given evaluation criteria in their future writing: they try to prepare themselves better when they know how their writing will be evaluated.

Likewise, the main reason given for assessing students' writing is to identify their strengths and weaknesses. This preference reflects the fact that in any educational system the improvement of students in the subject area is a matter of importance, and assessing their writing analytically provides diagnostic information about their weaknesses and strengths. Using criterion-based scoring and stating the criteria explicitly thus helps students find the problematic areas they should practice more in order to reach the required standards (Weigle, 2002; Haines, 2004). It also helps teachers to

find answers to such questions as "I want to know how to achieve objectivity and consistency when some aspects are excellent and others are lacking within a piece of work" or "I'd like to know about the marking schemes and model answers. How much should the students know?" (Haines, 2004, p. 7).

The results also indicate that fluency in writing is viewed as more important than accuracy. That is, students should be able to convey their ideas and thoughts to the reader with whatever they have at their disposal, using their linguistic and background knowledge to produce well-organized content through aspects such as cohesive devices and the proper use of vocabulary. Although syntax and mechanics are considered the least important by the respondents, it should be noted that some teachers focus on grammatical errors and mechanics in students' writing, and these dimensions sometimes prevent them from noticing the good points of the writing.

We find that years of teaching experience is not a factor differentiating the respondents' responses. For example, where a school teacher with 5 years of experience states her preference for explicit teaching of the evaluation criteria as "it guides students how to write", two university lecturers with 10 and 25 years of experience refer to the same point with "students can use it as guidance" and "goal directed writing" respectively. In the same vein, two university lecturers with 3 and 10 years of teaching experience indicated that their purpose in assessing students' writing is to identify students' strengths and weaknesses, because "their progress and effort is so important" and "I can focus on those parts more". We therefore conclude that, in order to improve the quality of students' writing, much more attention should be paid to diagnosing their weaknesses and strengths in writing. In this regard, analytic scoring rubrics can be considered ideal instruments for evaluating students' writing samples. They can also be used as explicit teaching aids to sensitize students to the requirements for producing high-quality writing.

REFERENCES

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford: Oxford University Press.

Braddock, R., Lloyd-Jones, R., & Schoer, L. (1963). Research in written composition. Champaign, IL: National Council of Teachers of English.

Clarke, S., & Gipps, C. (2000). The role of teachers in teacher assessment in England 1996-1998. Evaluation and Research in Education, 4, 38-52.

Cohen, A. D. (1994). Assessing language ability in the classroom (2nd ed.). Boston: Heinle & Heinle.

Davison, C. (2004). The contradictory culture of teacher-based assessment: ESL teacher assessment practices in Australian and Hong Kong secondary schools. Language Testing, 21(3), 305-334.

Diederich, P. B. (1946). Measurement of skill in writing. School Review, 54, 584-592.

Diederich, P. B. (1974). Measuring growth in English. Urbana, IL: National Council of Teachers of English.

Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability. ETS Research Bulletin 61-15. Princeton, NJ: Educational Testing Service.

Elbow, P. (1996). Writing assessment in the 21st century: A utopian view. In L. Z. Bloom, D. A. Daiker, & E. M. White (Eds.), Composition in the 21st century: Crisis and change (pp. 83-100). Carbondale: Southern Illinois University Press.

Eley, E. G. (1955). Should the General Composition Test be continued? The test satisfies an educational need. College Board Review, 25, 10-13.

Ferris, D., & Hedgcock, J. S. (1998). Teaching ESL composition: Purpose, process and practice. Mahwah, NJ: Lawrence Erlbaum Associates.

Haines, C. (2004). Assessing students' written work: Marking essays and reports. London: RoutledgeFalmer.

Hamp-Lyons, L. (1991a). Basic concepts. In L. Hamp-Lyons (Ed.), Assessing second language writing in academic contexts. Norwood, NJ: Ablex.

Hamp-Lyons, L. (1991b). Scoring procedures for ESL contexts. In L. Hamp-Lyons (Ed.), Assessing second language writing in academic contexts (pp. 241-276). Norwood, NJ: Ablex.

Hamp-Lyons, L. (1995). Rating nonnative writing: The trouble with holistic scoring. TESOL Quarterly, 29, 759-762.

Huot, B. (1990). The literature of direct writing assessment: Major concerns and prevailing trends. Review of Educational Research, 60, 237-263.

Jacobs, H. L., Zinkgraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: A practical approach. Rowley, MA: Newbury House.

Lloyd-Jones, R. (1977). Primary trait scoring. In C. R. Cooper & L. Odell (Eds.), Evaluating writing (pp. 33-69). New York: National Council of Teachers of English.

Perkins, K. (1983). On the use of composition scoring techniques, objective measures, and objective tests to evaluate ESL writing ability. TESOL Quarterly, 17, 651-671.

Raimes, A. (1990). The TOEFL Test of Written English: Causes for concern. TESOL Quarterly, 24, 427-442.

Schoonen, R., Vergeer, M., & Eiting, M. (1997). The assessment of writing ability: Expert readers versus lay readers. Language Testing, 14(2), 157-184.

Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press.

Weir, C. J. (1988). Construct validity. In A. Hughes, D. Porter, & C. J. Weir (Eds.), ELTS validation project report (ELTS Research Reports 1(ii)). London: The British Council / UCLES.

Weir, C. J. (1990). Communicative language testing. NJ: Prentice Hall Regents.

White, E. M. (1984). Holisticism. College Composition and Communication, 35(4), 400-409.

White, E. M. (1985). Teaching and assessing writing. San Francisco, CA: Jossey-Bass.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

APPENDIX A. ATTITUDE QUESTIONNAIRE

Dear Respondent,

This questionnaire aims to find out your attitudes toward writing assessment in the classroom. Your cooperation in answering the following questions would be highly appreciated.

Gender: Male / Female
Area of specialization: …………………………………………
Faculty: …………………………………………
Teaching experience in writing: …………. years
Position: University lecturer / School teacher

Please choose the best answer for each of the following questions on the basis of your attitude and experience toward writing assessment in your classes. Then give your reason(s) for choosing the answer(s).

1. Which type of scoring do you use for assessing your students' essay writing? Why?
   a. Impressionistic
   b. Criterion-based
   Because ………………………………………………………………………………

2. Do you give explicit instruction to your students about the criteria that will be used to evaluate their essay writing? Why?
   Yes / No
   Because ………………………………………………………………………………

3. What is your purpose of assessing students' writings? Why?
   a. Giving score
   b. Identifying students' weakness and strength
   Because ………………………………………………………………………………

4. Which dimension in students' writing is more important to you? Why? (You can choose more than one.)
   a. content
   b. organization
   c. cohesion
   d. syntax
   e. vocabulary
   f. mechanics (spelling, punctuation)
   Because ………………………………………………………………………………

Thanks for your cooperation
