JLLT Volume 8 (2017) Issue 1

Journal of Linguistics and Language Teaching

edited by Thomas Tinnefeld

Volume 8 (2017) Issue 1


JLLT is an academic journal designed for the worldwide publication of research findings spanning the full range from linguistics to language teaching. At the same time, it serves as a forum for discussion between linguists and practitioners of language teaching. JLLT is a refereed journal: apart from manuscripts individually requested by the editor, all submissions must be evaluated positively by two referees, in a procedure that is fully anonymous on both sides (authors and referees). Only then are they published.

Addressees of JLLT:

- linguists and foreign language methodologists, from university professors to PhD students and teachers at universities and all types of schools;
- young researchers, who will find a publication platform for their academic projects and can open them up for discussion so as to receive fruitful advice from the community of readers and authors.

Which text types will be accepted?

- articles
- book reviews
- reports about scientific projects and conferences
- reports about innovative study programmes
- reports about PhD projects (for the publication and protection of intermediate research results) as pre-publications.

The publication process can formally be described as follows:

1. Receipt of a manuscript
2. Pre-screening of the manuscript (editor)
3. Evaluation of the manuscript (editorial board)
4. If the result is positive: publication of the article on a separate page of the Journal's website. This allows quick publication of the manuscript (about six to eight weeks after receipt) and immediate availability to the academic world.
5. After receipt of all parts of the given issue of the Journal: publication of the article in PDF format, with the web-page version of the text being kept. This completes the publication process.

Date of publication: July 28, 2017


Editorial Advisory Board (in alphabetical order)

Prof. Dr. Klaus-Dieter Baumann - Universität Leipzig, Germany
Prof. Dr. Dr. h.c. Wolfgang Blumbach, M.A. - Hochschule für Technik und Wirtschaft des Saarlandes, Germany
Prof. Dr. Didi-Ionel Cenuser - Lucian Blaga University, Sibiu, Romania
Prof. Dr. Wai Meng Chan - National University of Singapore, Singapore
Prof. Dr. Shin-Lung Chen - National Kaohsiung First University of Science and Technology (NKFUST), Taiwan
Prof. Dr. Inez De Florio-Hansen - Universität Kassel, Germany
Professor Gerald Delahunty, PhD - Colorado State University, Fort Collins (CO), USA
Professor Frédérique Grim, PhD - Colorado State University, Fort Collins (CO), USA
Professor Eli Hinkel, PhD - Seattle University, Seattle (WA), USA
Prof. Dr. Frank Kostrzewa - Pädagogische Hochschule Karlsruhe, Germany
Prof. Tsailing Cherry Liang, PhD - National Taichung University of Technology, Taiwan
Prof. Dr. Heinz-Helmut Lüger - Universität Koblenz-Landau, Germany
Prof. em. Dr. Heiner Pürschel - Universität Duisburg-Essen, Germany
Prof. Dr. Günter Schmale - Université de Lorraine-Metz, France
Prof. Dr. Ulrich Schmitz - Universität Duisburg-Essen, Germany
Prof. Dr. Christine Sick - Hochschule für Technik und Wirtschaft des Saarlandes, Germany
Prof. Dr. Veronica Smith, M.A. - Alpen-Adria Universität Klagenfurt, Austria
Prof. Dr. Bernd Spillner - Universität Duisburg-Essen, Germany


Table of Contents

Editorial .......................................................................................... 7

Foreword to the Issue ..................................................................... 9

I. Articles

Randall Gess (Ottawa, Canada): Using Corpus Data in the Development of Second Language Oral Communicative Competence .......................................................................................... 13

Siaw-Fong Chung (Taipei, Taiwan (R.O.C.)): A Corpus-Based Approach to Distinguishing the Near-Synonyms Listen and Hear .......................................................................................... 34

Jennifer Wagner (Clio (MI), USA): A Frequency Analysis of Vocabulary Words in University Textbooks of French .......................................................................................... 55

Norman Fewell & George MacLean (both Okinawa, Japan): Transforming Can-Do Frameworks in the L2 Classroom for Communication and Feedback .......................................................................................... 75

Kay Cheng Soh & Limei Zhang (both Singapore): The Development and Validation of a Teacher Assessment Literacy Scale: A Trial Report .......................................................................................... 91

II. Book Review

Bernd Klewitz (Jena / Göttingen, Germany): Inez De Florio: Effective Teaching and Successful Learning. Bridging the Gap between Research and Practice. New York et al.: Cambridge University Press 2016 .......................................................................................... 119


Editorial

In the eighth year of its existence, it is time to take brief stock of JLLT's history in quantitative terms and to announce a new development.

As far as the journal's quantitative impact is concerned, JLLT has reached readers across the whole northern hemisphere and beyond, down to South-East Asia. The top ten countries from which readers have accessed the Journal in the past few years include Germany, the United States, Russia, France, China and the United Kingdom. The number of page views of the articles published in the Journal's archive has nearly reached 50,000. JLLT is thus taken note of in large parts of the world. These figures indicate that JLLT is widely acknowledged and has a sound base of readers, many of whom may turn out to be JLLT's authors one day. JLLT has made its way, and we will do our best to have it continue in the same vein.

The international character of JLLT has now led to a new development: the extension of the Editorial Board, which is now the Editorial Advisory Board. In this context, it will be important to attract even more researchers who bring in their expertise in their various specialities. In this spirit, we are delighted to have three new researchers join our Board: Professor Eli Hinkel from Seattle University (USA), Professor Gerald Delahunty from the English Department of Colorado State University (USA), and Professor Frédérique Grim from the Department of Languages, Literatures and Cultures of the same university.

As it is impossible to describe Professor Eli Hinkel's extensive research briefly (it ranges from teaching grammar and academic writing via curriculum design to interculturality), it may be stressed here that the outcome of language teaching and language acquisition is her foremost interest, and that she is masterful in breaking down complex matters to a level that students can easily understand.

Professor Gerald Delahunty's main research interests lie in pragmatics, syntactic theory and sociolinguistics; his focus is the creation of meaning and its shaping into linguistic form on the basis of underlying pragmatic principles. He is a linguist with all his heart and will therefore further strengthen the linguistic side of JLLT.

Professor Frédérique Grim is a French-born American researcher in the fields of foreign language teaching, content-enriched instruction, and pronunciation. As a linguistics and language-course instructor, she stands for the practical side of language teaching and will reinforce this field in JLLT.


Let us welcome these three researchers to JLLT, look forward to working with them, and benefit from their expertise. In the same breath, I would like to thank all the members of the JLLT Board for their long-term support and enthusiasm.

Thomas Tinnefeld
JLLT Editor


Foreword to the Issue

The present issue of JLLT, which I am happy to present, unites five academic articles and one book review. Three of these articles cover the field of corpus linguistics, which thus forms an unofficial section in this issue. The other two articles deal with language instruction and with teachers' knowledge and abilities in language assessment.

The first article on corpus linguistics reports a study by Randall Gess (Ottawa, Canada) on the use of corpus data for the development of oral communicative competence in French. The corpus used for this purpose was established in the framework of the Phonology of Contemporary French project, which also comprises pronunciation. In his article, the author focuses on the phenomenon of word-final cluster simplification, which is rather frequent in Canadian French. First, the general potential of the corpus data for use in the French-language classroom is presented. The considerable advantages of corpus linguistics for students are then pointed out, such as extensive natural input, the opportunity to compare spoken and written language, and the raising of students' awareness of the different varieties of French. Beyond these points, corpora also provide cultural information as well as insight into sociological variation.

In her article, Siaw-Fong Chung (Taipei, Taiwan (R.O.C.)) also utilises existing corpora, distinguishing two near-synonyms of English (the verbs hear and listen) using WordNet on the one hand and the British National Corpus on the other. Her corpus analysis comprised distributional and collocational information on these two verbs. A simplified version of these findings was then presented to a group of students, who were given a writing task, based on visual elements, in which they were to use these two verbs, without knowing, however, that these very verbs were in focus. One of the findings of this study was that the literal meanings of the two verbs in question were predominant both in the corpora and in the students' writings. It may be added here that making corpus data available to students in a pedagogical manner will be of utmost importance for the improvement of foreign language teaching in the years to come.

Also within the realm of corpus linguistics, and also with respect to French, Jennifer Wagner (Clio (MI), USA) presents an analysis of the frequency of lexical items in textbooks of French designed for use at universities. Whereas the frequency of the lexical items to be learnt is taken for granted in the making of English textbooks, this point is less clear as far as French textbooks are concerned. For her study, the author compared twelve first-year and six second-year textbooks for university use, published in the U.S., to a frequency dictionary of contemporary French. The most important finding of this study is that the textbooks analysed did not provide a sufficient number of high-frequency words, which are of utmost importance for communication in French at a basic level. As a consequence, this study points to the importance of an urgently needed modernisation of French textbooks that would entail the use of corpora for


their creation. From a general point of view, it is to be expected that the situation for Spanish may not be much different, and studies of this kind will help to clarify it. These three articles attest to the importance of linking corpus data to the teaching and learning of foreign languages and hence form part of a relatively new tradition which will certainly be further strengthened in the future.

Tackling a totally different topic, Norman Fewell & George MacLean (both Okinawa, Japan) present the results of a transformation of the NCSSFL-ACTFL Can-Do statements for communication and feedback in the English-language classroom in Japan. These can-do statements should not only be used receptively, i.e. for the description of communicative tasks, but also, as the findings suggest, productively, i.e. in ways from which students directly benefit. Asking students what they themselves think they can or cannot do gives them an incentive to reflect on what they really need in their language learning. Assuming that believing one is able to perform a given task and actually performing it are two sides of the same coin, the authors transformed a selected set of can-do statements into statements that motivated students to show that they were able to perform a given task, thus turning these statements into communicative group activities. This study represents a promising starting point for further research into this question.

A topic which differs considerably from the others elaborated in this issue, but which is equally important in the context of language teaching, is covered in the study by Kay Cheng Soh & Limei Zhang (both Singapore), who aim to boost research into teacher assessment literacy, i.e. the ability of teachers to understand test results. The new scale which the authors present here, and which is to fill a research gap, covers four aspects of assessment literacy: teachers need to understand what assessment is and what functions it has; they need to be able to design and use different forms of items which respond to their students' instructional needs; they must be able to interpret assessment results reliably; and finally, they must be capable of evaluating test results in terms of their inherent quality. The authors find their study encouraging in that it provides a sound basis for continuing research on a wider scope, an estimation which is fully supported here.

I am sure that the variety of topics unfolded in this issue, and the depth of the corpus linguistics present in three of the five articles, make this issue an interesting and instructive read. In this sense, I would like to wish our readers some hours of intense enjoyment.

Thomas Tinnefeld
JLLT Editor


I. Articles


Using Corpus Data in the Development of Second Language Oral Communicative Competence

Randall Gess (Ottawa, Canada)

Abstract (English)

The present paper describes how a large corpus of spoken French, stemming from the international Phonology of Contemporary French (PFC) project, can be used in the development of second language oral communicative competence, with a non-exclusive focus on pronunciation. Following a brief overview of the PFC project, data from one survey point of this corpus will be provided, illustrating a widespread phenomenon in Canadian French: word-final cluster simplification. It will be shown how and to what ends the data can be exploited for classroom use. For students, the potential benefits of using corpus data are manifold. These include greater learner autonomy, quantitatively and qualitatively rich natural input, excellent points of comparison between written and oral language, exposure to numerous and diverse varieties of spoken French as well as to rich cultural information from across the francophone world and, last but certainly not least, a raised awareness of different dimensions of sociolinguistic variation.

Keywords: Spoken language corpus, oral communicative competence, pronunciation teaching

Abstract (Français)

Cet article décrit comment on peut utiliser un corpus important du français parlé, provenant du projet international Phonologie du Français Contemporain (PFC), dans le développement de la compétence communicative orale d'une deuxième langue, avec une attention non exhaustive attribuée à la prononciation. Suite à un bref survol du projet PFC, il est présenté des données d'un point d'enquête de ce corpus, qui illustrent un phénomène répandu du français canadien : la simplification des groupes consonantiques finaux. Il sera également démontré des possibilités d'exploitation des données dans la salle de classe, et les fins liées à celles-ci. Pour les étudiants, les avantages potentiels sont nombreux. Ceux-ci comprennent une autonomie d'apprentissage plus importante, de l'input naturel riche du point de vue quantitatif et qualitatif, d'excellents points de comparaison entre la langue écrite et la langue parlée, l'exposition à des variétés nombreuses et diverses du français parlé ainsi qu'à de riches informations culturelles de partout à travers la francophonie et enfin, une connaissance approfondie des différentes dimensions de variation sociolinguistique.

Mots-clés : Corpus de langue parlée, compétence communicative orale, enseignement de la prononciation


1 Introduction

Given the impressive rise of corpus linguistics over the past few decades, the dearth of research on the use of corpora for the development of second language (L2) pronunciation, a crucial aspect of L2 oral communicative competence (OCC), is somewhat surprising. Works treating the use of corpora in the teaching of language and linguistics began to appear in the 1990s (Knowles 1990, Sinclair 2004, Wichmann et al. 1997). Within this body of work, however, the focus is largely on written corpora. Notable exceptions are chapters in Wichmann et al. (1997) on using a spoken German corpus to determine properties such as vocabulary frequency, and on teaching intonation to students of English phonetics and phonology (Jones 1997, Wichmann 1997); and chapters in Sinclair (2004) on authenticity, communicative utility, and formulaic expressions in English, and on the use of concordancing in the teaching of Portuguese (Mauranen 2004, Santos Pereira 2004). None of these works focuses on pronunciation, although they are relevant to OCC more generally, to varying degrees.

The first research evidence on the use of corpora for teaching pronunciation is, to my knowledge, the very short article by Gut (2005). Besides a PFC-related publication that will be discussed shortly, the only other available research product focused on using a corpus for teaching pronunciation is the website at the Hong Kong Institute of Education, A Corpus-Based Pronunciation Learning Website (Chen et al. 2014). It is interesting that the corpus in focus in both Gut and Chen et al. is a learner corpus: German-speaking learners of English and English-speaking learners of German in the case of Gut, and Chinese-speaking learners of English in Chen et al. The work of Detey et al. (2010), based entirely on the Phonologie du Français Contemporain (PFC) project (or rather on its offshoot project, PFC-Enseignement du Français, or PFC-EF), therefore represents an important landmark development. It is important to note, however, that the focus of the PFC-EF is much broader than the teaching of pronunciation: it explores the use of the PFC corpus for the development of OCC generally, as well as for focusing on grammatical form, and even for the development of writing, the latter principally by way of explicit stylistic comparison between written and spoken language.

In this article, data from one PFC survey point will be provided, illustrating a single aspect of French phonology. The survey point is a community called Maillardville, in Coquitlam, British Columbia, and the aspect of French phonology is the reduction of word-final consonant clusters. Before turning to the relevant data, a brief overview of the PFC project will be given, followed by a basic description of word-final cluster simplification in French. After going over the data from the PFC survey point in question, it will be shown how they can be exploited in the classroom, and for what purposes. Finally, I will outline what I see as the many benefits of using corpus data for teaching pronunciation and other aspects of OCC.


2 The PFC Project

The goal of the PFC project (http://www.projet-pfc.net) is to describe the pronunciation of French in all its geographic, social, and stylistic diversity. To this end, the project seeks to build a vast corpus of French as it is spoken around the world, based on surveys conducted by an international team of researchers and their students, using a common protocol as well as common methods and tools for analysis. The reasons for doing so are not purely descriptive. The envisaged corpus can also serve to test current models of phonetics and phonology, to encourage the sharing of research, to provide for the renewal of data informing the teaching of French, and simply to preserve a crucial part of the patrimoine linguistique of the francophone world (Durand, Laks & Lyche 2002, 2009).

The ideal PFC survey point involves 12 speakers representing both genders, a minimum of two age groups, and some differences in level of education and / or professional profile. Speakers complete four tasks:

- a guided conversation designed to gather basic information about the speaker and her or his linguistic background;
- a free conversation of approximately 30 minutes with a fellow member of the speech community;
- the reading of a common word list of 94 words (plus an additional, tailored list of 115 words for Canadian speakers); and
- the reading of a common text (an invented three-paragraph news article).

At least one interviewer per survey point should belong to (or be well known to) the community of speakers; this is usually the partner for the free conversation task. The four tasks are designed to elicit different registers of the language, from very careful and monitored (reading words in isolation) to casual and unmonitored (free conversation with someone familiar to the speaker). It should be noted that these are ideals that are not always achieved. For example, for the survey point to be discussed here, there was no interviewer known to the participants and, although one did live in the same town, she was not perceived as a member of the community.

Among the targets for analysis are the vowel inventory (contrasts, allophonic realizations), the consonant inventory (e.g. h-aspiré, the rhotic, the palatal nasal, allophonic realizations), the realization of schwa (at prosodic boundaries, in different positions within the word, in monosyllables (clitics), in schwa sequences), the realization of liaison (so-called "obligatory" and "optional"), and prosody. Of course, any other aspect of phonology may be analyzed, as is the case here, where we will look at the reduction of word-final consonant clusters.
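The survey-point protocol described above can be captured as a small data model. The following Python sketch is purely illustrative: the class and field names are hypothetical and are not part of the actual PFC tooling; it simply encodes the stated ideals (12 speakers, both genders, at least two age groups, four tasks per speaker) as checks a researcher might run over a metadata file.

```python
from dataclasses import dataclass, field

# Task names follow the four PFC tasks described in the text.
TASKS = ("guided_conversation", "free_conversation", "word_list", "read_text")

@dataclass
class Speaker:
    """Hypothetical record for one PFC speaker (field names are illustrative)."""
    speaker_id: str
    gender: str       # e.g. "f" or "m"
    age_group: str    # e.g. "younger" / "older"
    education: str
    recordings: dict = field(default_factory=dict)  # task name -> audio file path

def validate_survey_point(speakers):
    """Compare a list of Speaker records against the ideal PFC protocol.

    Returns a list of human-readable problems; an empty list means the
    survey point meets the stated ideals.
    """
    problems = []
    if len(speakers) < 12:
        problems.append("fewer than 12 speakers")
    if len({s.gender for s in speakers}) < 2:
        problems.append("only one gender represented")
    if len({s.age_group for s in speakers}) < 2:
        problems.append("fewer than two age groups")
    for s in speakers:
        missing = [t for t in TASKS if t not in s.recordings]
        if missing:
            problems.append(f"{s.speaker_id}: missing tasks {missing}")
    return problems
```

As the article notes, these are ideals rather than guarantees; a validator like this would flag, rather than forbid, deviations such as the Maillardville interviewer situation.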


The PFC-EF, an offshoot of the larger PFC project, is designed principally for those teaching French and / or developing French pedagogical materials. PFC data are used to develop materials for the teaching / learning and the diffusion of French, representative of the variation found across the francophone world. The goal of the PFC-EF is to provide rich and diverse classroom materials for listening and speaking, for comparison with written language and with le français de référence (Morin 2000), as well as for the analysis of variation across the francophone world. These resources should be useful for teachers of French as a first, second, or foreign language. The audience assumed here consists of university-level learners of French as a second language in a predominantly English-speaking part of Canada. The context envisioned is a third-year course on oral expression, although any pedagogical suggestions made are easily transferable to other contexts, with minor or major adaptations as required.

3 Word-Final Consonant Cluster Reduction in French

The simplification of word-final consonant clusters is a widespread phenomenon in French, and more so in Canadian French than in European varieties (see Milne 2016 for a close examination of the treatment of final clusters in the two dialects). Most clusters in question, and all of those that will be discussed here, are in final position following a historical deletion process targeting word-final schwa. Current scholarship therefore assumes the absence of an underlying final schwa in the relevant forms (Côté 2004), although its former presence continues to be reflected in orthography with a final e. When a schwa is pronounced following the clusters in question, it is considered to be epenthetic. While the assumption of underlying representations without a schwa is not absolutely critical, it does have implications for pedagogical approach and learning outcomes that will be discussed later.

For simplicity, the focus here is only on clusters of the type obstruent+liquid (OL) (for an extensive treatment of all types of word-final consonant clusters, the reader is referred to Côté (2004)). OL consonant cluster reduction is illustrated in the following examples taken from Ostiguy & Tousignant (2008), which are represented based on orthography (abbreviated in the case of the reduced forms). (Recall that the final orthographic e in the non-reduced forms is not pronounced):

possib (< possible)
sob (< sobre)
peup (< peuple)
prop (< propre)
souf (< souffle)
poud (< poudre)
règ (< règle)
let (< lettre)
spectac (< spectacle)

(Ostiguy & Tousignant 2008: 173)

Ostiguy & Tousignant describe word-final cluster simplification as being very common in spoken Quebec French, so much so that even cultivated speakers, or speakers attending to their speech, will not notice it and will assign no negative judgement to it. Indeed, the phenomenon is so pervasive that the authors actually


pose the question of whether the non-reduced variants should be brought to students' attention at all. They come to an affirmative conclusion, saying that the existence of such variants in formal spoken Quebec French fully justifies doing so (2008: 177). Debating whether students should be guided to notice non-reduced variants presupposes that the reduced variants are the unmarked target form. Should the latter therefore serve as the pedagogical norm? The data discussed in the following section, limited as they are, shed interesting light on this question, to which we return afterwards.

4 Data

The data in this section come from a single, male speaker from the PFC survey point at Maillardville (Coquitlam, British Columbia). At the time of the recording (July 2006 [1]), the speaker was 62 years old. He was born in Winnipeg, Manitoba, to francophone parents, and the family moved to Maillardville when he was four years old. He is a highly educated speaker, holding an advanced degree in education; in his career he not only taught French, but also played an important role in the development of French immersion programs in the public school system. It is the professional background of this speaker which motivated the use of his data for the present study, so as to show that the nonstandard feature described and analyzed here is not at all representative of "unsophisticated" speech.

The data come from the four task conditions: the reading lists, the text, the guided conversation (a six-minute extract), and the free conversation (a six-minute extract). Recall that for Canadian survey points, there are in fact two reading lists. The first contains 94 items and targets a variety of sounds and contrasts considered important points of potential variation across varieties. The second list consists of 115 items targeting phenomena more common in Canadian varieties of French, including word-final consonant cluster reduction.

The guided conversation was conducted by the author of this article, who was a visitor to the community, unknown to any of its members. The second interviewer was the one described previously, living in the same town but not perceived as a member of the Maillardville community, and unknown to any participants. It was this interviewer who engaged in the free conversation task with the speakers. Because she was unknown to the participants, little or no real difference is expected between the guided conversation and the free conversation.

[1] The relatively long gap between the retrieval of our data and their methodological analysis here is partly due to the priority of phonological data analysis in the PFC project, with any further applications, such as methodological ones, being of secondary importance only.


4.1 The Word Lists

In the first word list, there are only three items with the target sequences: peuple, meurtre, and feutre. None of these three is reduced by the speaker. Indeed, one of them, peuple, is produced with an epenthetic final schwa. However, in reading the list of words, participants are also asked to say aloud the number for each item. The 94 items give seven instances of the word quatre (presented to the reader in number form (4), not orthographically): quatre (4), vingt-quatre (24), trente-quatre (34), quarante-quatre (44), cinquante-quatre (54), soixante-quatre (64), and quatre-vingt-quatre (84). Of these seven items, there is only one token without a reduced form, which happens to be the shortest and the first of the series, i.e. quatre.

The astute reader will have noticed that there is a second token of quatre in quatre-vingt-quatre. In this case, the schwa is lexicalized in the collocation quatre-vingts, and its realization is exceptionless across speakers and varieties. Given that the form is lexicalized, we can consider the OL sequence as word-internal, not word-final.

The second list has many more target sequences, having been designed specifically for Canadian French, where final cluster reduction is noted to be more widespread. In fact, there are 19 such items, as follows:

mettre, maître, sable, libre, couple, ministre, neutre, jungle, tabernacle, le prêtre, aveugle, convaincre, vinaigre, orchestre, ombre, épingle, arbre, cent piastres sur la table

Of these 19 items, none is pronounced with a reduced cluster. Five are pronounced with a final audible schwa. With this longer list, there are eight instances of quatre (4): quatre (4), vingt-quatre (24), trente-quatre (34), quarante-quatre (44), cinquante-quatre (54), soixante-quatre (64), quatre-vingt-quatre (84), and cent quatre (104). All of these are pronounced with a reduced cluster. The results of the word lists are summarized in Table 1:

Forms                     Word-List Items    quatre
reduced                   0 (0%)             14 (93%)
unreduced (no schwa)      16 (73%)           1 (7%)
unreduced (with schwa)    6 (27%)            0 (0%)

Table 1: Reduction of Final Clusters in Word Lists

It is clear from Table 1 that the list-reading condition disfavours reduction, since no reduced forms are produced at all. Indeed, to the extent that OL sequences are not tolerated in word-final position (27%), they are resolved via schwa epenthesis rather than by reduction. For the number quatre, quite the


opposite holds, with reduction being the near-absolute rule (93%).
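The percentages in Table 1 follow directly from the token counts. As a sketch of how such a tally might be computed, the following Python snippet assumes each token has been hand-coded with one of the outcome labels used above; the token lists mirror the counts reported in the article, not the actual PFC recordings.

```python
from collections import Counter

def reduction_rates(codings):
    """Return each outcome's share of the coded tokens, as a rounded percentage."""
    counts = Counter(codings)
    total = sum(counts.values())
    return {outcome: round(100 * n / total) for outcome, n in counts.items()}

# 22 word-list tokens: 3 from the first list plus 19 from the second.
word_list_items = (["unreduced (no schwa)"] * 16
                   + ["unreduced (with schwa)"] * 6)

# 15 tokens of quatre: 7 from the first list plus 8 from the second.
quatre_tokens = ["reduced"] * 14 + ["unreduced (no schwa)"] * 1

print(reduction_rates(word_list_items))
# {'unreduced (no schwa)': 73, 'unreduced (with schwa)': 27}
print(reduction_rates(quatre_tokens))
# {'reduced': 93, 'unreduced (no schwa)': 7}
```

The same function applies unchanged to the reading-text and conversation tallies discussed below.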

4.2 The Reading Text

The reading text contains 12 tokens with a final cluster, seven of which are instances of the same word, Ministre (from Premier Ministre). The other five items are titres, autre, membre, articles, and centre. Of the 12 tokens, ten are pronounced with a reduced cluster (indeed, in six of them, the entire cluster is deleted). Only autre and centre are produced without reduction, both with an epenthetic schwa. These items occur in the sequences un autre côté and au centre d'une bataille. All three of the reduced forms similarly occur before a consonant, so the phonological environment appears not to be relevant.

On close inspection, one is tempted to say that Ministre has been lexicalized with the entire OL sequence deleted (i.e. as [minis]). The word is pronounced with a [t] only once, in the second token, where a vowel follows. However, a vowel also follows in the first token, and no [t] is pronounced there. In the lone token with the [t], the target sequence is Le Premier Ministre a en effet décidé…, and the speaker stumbles quite noticeably: immediately following the sequence [minis], he pronounces a glottalized [t] followed by the incorrect vowel [ɛ], then another, clearer [t] followed by [a], then a longish glottal stop followed by the correct formulation of a en effet. Otherwise, the form [minis] occurs in all environments: before a vowel, before a consonant, at the end of a phonological phrase, and at the end of a phonological utterance. The results of the reading text are summarized in Table 2:

Forms                     Ministre [2]    Other Tokens
deleted                   7 (100%)        0 (0%)
reduced                   0 (0%)          3 (60%)
unreduced (no schwa)      0 (0%)          0 (0%)
unreduced (with schwa)    0 (0%)          2 (40%)

Table 2: Deletion and Reduction of Final Clusters in the Reading Text

Again, we see that in the form Ministre, deletion is categorical, and perhaps for the entire OL sequence rather than just for the liquid member of the cluster. Otherwise, it appears that the reading text condition favours reduction, 60% to 40%, although the number of tokens is rather limited. Interestingly, no OL clusters surface unaltered – they are either reduced or followed by an epenthetic schwa.

² Actual forms of Ministre produced were as follows: [minis(ʔ#)lɑse], [minisʔnə], [minis(ʔ#)lə], [minis##], [minispuɹ], [minisʔiɹa], [minist ʔɛ].

4.3 The Guided Conversation

Seven tokens with final consonant clusters occur in the six-minute extract from the guided conversation. The words that appear are eux-autres, couvre, autre, favorable, prendre, peut-être, and kilomètres.³ Of the seven tokens, five are produced with a reduced cluster. The two non-reduced tokens are in couvre (“… ça couvre la la …”) and kilomètres (“… une vingtaine de kilomètres d’ici …”). All of the other forms occur before a consonant, with the exception of peut-être, which occurs before the hesitation form euh (and is reduced), so, as for the reading text, phonological environment appears not to be relevant to reduction. The results of the guided conversation are presented in Table 3:

Forms                     Tokens
reduced                   5 (71%)
unreduced (no schwa)      0 (0%)
unreduced (with schwa)    2 (29%)

Table 3: Reduction of Final Clusters in the Guided Conversation

In the guided conversation, reduction is clearly favoured, at 71%. It is also noteworthy that, as was the case for the reading text, OL clusters never surface unaltered – when not reduced, they are pronounced with a following epenthetic schwa.

4.4 The Free Conversation

There are 16 tokens with final consonant clusters in the six-minute extract from the free conversation. The relevant words are nous-autres (x4), apprendre, autre (adj.) (x3), autres (n.) (x2), répondre, notre, exemple, incroyable, peut-être, and favorable. Of the 16 tokens, 11 are reduced. The results of the free conversation are presented in Table 4:

Forms                     Tokens
reduced                   11 (68.75%)
unreduced (no schwa)      3 (18.75%)
unreduced (with schwa)    2 (12.5%)

Table 4: Reduction of Final Clusters in the Free Conversation

The pattern of reduction in the free conversation quite closely mirrors the pattern in the guided conversation (as we expected, given that we did not have a real community member to participate in the free conversation).

³ Two instances of Notre-Dame-de-Lourdes occur, in which the OL sequence of Notre is unreduced and followed by schwa. One can assume this form to be lexicalized with schwa like quatre-vingts above, and so the OL sequence is excluded from our data as word-internal rather than word-final.

4.5 Discussion

The question we asked before presenting the data was whether reduced forms should serve as the pedagogical norm. To facilitate consideration of this question, Table 5 provides a synthesis of the information from Tables 1 through 4, leaving out the tokens of quatre (4) from the word-list condition and those of Ministre from the reading text condition, for the same reasons they were treated separately in Tables 1 and 2:

Forms                     Word lists    Text    Conversations
reduced                   0%            60%     70%
unreduced (no schwa)      73%           0%      13%
unreduced (with schwa)    27%           40%     17%

Table 5: Reduction of Final Clusters across Conditions (non-lexicalized tokens only)
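As a quick sanity check, the pooled conversation percentages reported in Table 5 can be recomputed from the raw token counts in Tables 3 and 4 (a minimal sketch; the counts below are taken directly from those tables):

```python
# Pool the raw token counts from Table 3 (guided conversation) and
# Table 4 (free conversation), then recompute the Table 5 percentages.
guided = {"reduced": 5, "unreduced (no schwa)": 0, "unreduced (with schwa)": 2}
free = {"reduced": 11, "unreduced (no schwa)": 3, "unreduced (with schwa)": 2}

pooled = {form: guided[form] + free[form] for form in guided}
total = sum(pooled.values())  # 23 conversation tokens in all

for form, count in pooled.items():
    print(f"{form}: {count}/{total} = {round(100 * count / total)}%")
# reduced: 16/23 = 70%
# unreduced (no schwa): 3/23 = 13%
# unreduced (with schwa): 4/23 = 17%
```

The rounded figures (70% / 13% / 17%) match the Conversations column of Table 5.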

From the limited data from this single speaker, it would seem that a strong argument can be made for the reduced form serving as the pedagogical norm, at least for spoken French in Canada. A clear majority of word-final clusters are reduced both in natural (i.e., non-list) reading and in conversation (more or less formal, to the extent that formality varied in the conversations for this survey point). This is a highly conservative count, given that the number quatre (4) is reduced in the word-list reading task at a rate of 93%, and Ministre is reduced without exception in the reading text condition – virtually to the point of having no relevant OL sequence! Only in the reading of the word lists were final clusters unreduced. Clearly, we do not want learners to sound stilted in natural speaking conditions, and this seems a real risk if we use the unreduced form as a pedagogical norm.

On the other hand, in order to guide learners to the correct lexical representations (unreduced, without schwa), exposure to the word-list data appears crucial. It is only from this data that learners can be reasonably expected to arrive at the underlying form without schwa, since it is a minority variant in conversation and does not occur in the reading text at all, whereas the unreduced form with schwa occurs as a minority pronunciation in all conditions, and a fairly robust one in the reading conditions (the word lists and especially the text). Learners could be led to the correct form by way of formal explanation – i.e., that variant pronunciations can be derived from the unreduced form without schwa via a simple, one-step change in one of two possible directions (consonant deletion or schwa epenthesis) – but our goal is to focus on pronunciation, not to train phonologists! Presenting the forms from the word lists is a far less frustrating way to guide learners to the correct lexical representation.


It is legitimate to ask whether it is even necessary to attend to the development of underlying lexical forms, but if a learner’s base form contains a final schwa, there is a risk of highly unnatural, hyper-articulated speech, the avoidance of which is an important goal of pronunciation training. On the other hand, if a learner posits the reduced form as the lexical representation, the risk is of inappropriately informal speech across contexts, as well as of outright pronunciation errors if an incorrect second consonant of a given cluster is realized in some instances (if the second consonant is not in the lexical representation, its accurate recovery in production is uncertain). Another point to consider is that students will almost certainly come with some relevant lexical representations already formed. Part of the goal may therefore be to correct those that are wrong and give rise to unnatural pronunciations.

The upshot of the preceding discussion is that it is important to present to learners a full range of appropriate input across speech styles – precisely what the corpus affords (rich input is discussed in some detail below). The arguments presented here with respect to word-final clusters are easily transferable to other aspects of pronunciation.

5 Exploiting the Data for Teaching Pronunciation

We now turn to the question of how the data we have seen in the previous section can be put to use for teaching pronunciation. In fact, corpus data can be useful to at least three of four crucial aspects of pronunciation learning: creating lexical representations, understanding pronunciation rules (i.e., developing declarative knowledge), and developing automaticity. Its usefulness to a fourth aspect of learning – developing procedural knowledge – is not obvious, although some researchers consider the step from declarative knowledge to procedural knowledge somewhat trivial in the case of at least some aspects of pronunciation. For example, Dalton & Seidlhofer assert that, “once learners know that simplifications are normal, they are often able to convert this knowledge into active, procedural knowledge with astounding ease” (Dalton & Seidlhofer 1994: 116). Reed (2016) provides two simple classroom strategies for bridging “the declarative to procedural knowledge gap” (Reed 2016: 237) for more problematic aspects of pronunciation. Of course, in our case, the procedural knowledge involved seems quite unproblematic – it simply entails the inhibition of articulators in a certain specific phonological structure.

The following sections, in turn, focus on creating lexical representations through the provision of rich input, building declarative knowledge, and developing automaticity.


5.1 Providing Rich Input

A key aspect of teaching pronunciation is modeling – the provision of input targeting a specific form or structure. The most obvious benefit of using a large corpus is that it affords a quantitative and qualitative richness in terms of modeling that is unparalleled. In the case of word-final clusters, and our single speaker from our single survey point, we have 75 relevant tokens. There are 16 other speakers from our survey point, although the norm is 12, and there are 38 survey points with data made available online on the PFC project website referred to earlier. For each survey point, one can access data (sound files and transcription) by speaker for the word list(s), the reading text, six minutes of the guided conversation, and six minutes of the free conversation. This adds up to a large quantity of valuable data.

In terms of quality, we have seen that the PFC protocol results in data from up to four speech styles, depending on how distinct the guided and free conversations are (which, in turn, depends on whether the conversation partner for the free conversation is previously known to the speaker – a weak point of our own survey point, as discussed earlier). The data available through the PFC project also span a number of dialect areas across the francophone world, thereby providing ready access to data that was previously extremely difficult to come by, if not impossible in some cases. The naturalistic nature of the input also contributes to its high quality. While it is certainly true that reading a word list and reading a prepared text are not generally considered 'authentic language', these are instances of native speakers reading for a non-pedagogical purpose. And, of course, the data from these contexts are complemented by the data from conversations.
The unquestionably naturalistic nature of the latter ensures (or at least militates in favour of) variety in terms of the phonological environments in which target phonological forms occur. So the form in question will be heard not only in phrase-final position, but also (as we have seen) phrase-internally before both consonants and vowels, at the end of a phonological phrase, and at the end of a phonological utterance.

The most important role of the quantitatively and qualitatively rich input in the acquisition process is to facilitate the building of lexical representations. If we follow Bybee (2000, 2001) in assuming that lexical representations are basically clusters of exemplars that can be continuously updated, then exposure to multiple tokens (exemplars) serves to enrich representations, providing phonetic detail and information regarding permissible surface variations. Exposure over time and across different contexts will aid learners in associating certain exemplar types (i.e., full or reduced forms) with appropriate speech contexts (more or less formal).

Students can be exposed to speech samples directly from the PFC site, with simultaneous audio and transcription. One potential drawback to direct access, depending on the input-providing activity envisioned, is that the speech samples are not tailored to individual phonological structures or processes but rather, by design, to a wide variety of them. For example, in our case of word-final clusters, these are found amongst a multitude of non-target items. It may not be a beneficial use of time to have students listen to entire extracts for only a few tokens. This is most obvious in the case of the word lists, in which there are only 22 relevant tokens amongst 209 items (the ratio is much worse for the sole word list used outside of Canada, at 3 / 94). More important than the ratio of target items to non-target ones, there is not much benefit to listening to a list of words being read, compared to listening to a reading text, or especially to natural conversation about a topic that may well be of inherent interest. So, even if the conversations do not have a high number of target items, important side benefits can be had from listening to them.

If using speech samples directly from the PFC site is not ideal for a given purpose, tokens from any or all of the four conditions can also be easily extracted (recorded) from the PFC audio files through freely available software such as Audacity® or Praat (Boersma & Weenink 2016). Tokens can then be presented to students for the purpose of building lexical representations in a variety of ways. They can be presented with or without reference to their written forms, they can be presented in isolation or in context (of course, those tokens from the word list(s) are inherently in isolation), and different surface variants can be presented when and as desired.
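For instructors comfortable with a little scripting, the same clipping step can also be done programmatically. The sketch below uses only Python's standard-library wave module; the file names and time span are hypothetical, and for anything beyond plain WAV clipping, Audacity® or Praat remain the more practical tools:

```python
import wave

def extract_token(src_path, dst_path, start_s, end_s):
    """Copy the span [start_s, end_s) of a WAV file into its own file,
    e.g. to isolate a single token such as a word with a final OL cluster."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        src.setpos(int(start_s * rate))  # seek to the start of the token
        frames = src.readframes(int((end_s - start_s) * rate))
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)  # frame count is corrected automatically on close
        dst.writeframes(frames)

# Hypothetical usage: clip one token from a speaker's word-list recording
# extract_token("speaker_wordlist.wav", "quatre_token.wav", 12.4, 13.1)
```

Clipped tokens can then be collected into sets per condition (word list, text, conversations) and played back with or without the transcript, as described above.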

5.2 Building Declarative Knowledge

Students’ exposure to multiple variants of tokens in context allows them, over time, to ascertain the phonological and stylistic factors conditioning variation. The beauty of the corpus in this regard is that it allows for the development of this declarative knowledge through exploration on the part of the learner, which can be guided to a greater or lesser extent by the instructor. This explorative use of the corpus simultaneously provides input for building and enriching lexical representations (Section 5.1) and for constructing declarative knowledge, the latter relying absolutely on the context in which variants occur. That is, while lexical representations can be developed through exposure to different variants in isolation, learning the rules that produce them depends on hearing them in their conditioning environments, whether these be speech style (on the formal-to-informal continuum, corresponding to word list(s), reading text, guided and free conversations) or phonological context (pre-vocalic, pre-consonantal, phrase-internal, phrase-final, etc.), and preferably both. Exposure to a range of speech styles is, of course, a built-in feature of the PFC corpus, and exposure to a range of phonological contexts is heavily favoured – by design in the reading text, at least for some phenomena (realization of schwa, liaison), and in relation to natural frequency of occurrence in the six minutes each of guided and free conversation.

With respect to word-final consonant clusters, one can begin by exposing students to a speech sample from conversation, in which variants reflecting the pedagogical norm are in preponderance. At first, the focus on form can be passive, couched in a listening comprehension activity with a primary focus on the content of the conversation. Following this can be a more explicit focus on form, with students actively listening for reduced OL clusters. They can first be asked to identify relevant words in the written transcript, underlining them, and then to listen for the pronunciations on subsequent replays. Then students and instructor can discuss the variants and their quantitative distribution, and come up with hypotheses to explain these (i.e., possible pronunciation rules). The instructor is obviously free to tailor any such activity as he or she deems appropriate, and the focus on form can be broader – on any reductions involving consonants, or indeed on reductions more generally. This will depend on the type of course and its overall goals, time allotted to pronunciation training versus other aspects of OCC, etc.

Keeping to our focus here on OL clusters, one can then move to the reading text and then to the word lists. Before moving to each new condition, students can be asked to make predictions regarding what they will hear, taking into account the nature of the task the speaker is engaged in. After working with the reading text, comparison can be made with the conversation, and hypotheses made earlier can be revisited and adjusted as necessary. For the word lists, playing isolated realizations of the number quatre (4) would be a good way to begin, to reinforce what was found in the naturalistic speech of the conversations, before moving to the relevant words extracted from the word lists. Overall comparisons can follow the word-list activity, and final pronunciation rules can be arrived at.

Another possible way to approach the students’ discovery process would be to have them explore the online corpus themselves (with appropriate instruction in how to do so, of course). They can be directed to find relevant tokens in each condition, identify variants and quantify their distributions, and come up with hypotheses to explain them.
This kind of activity can be done outside of class and, depending on the structure of the course and its constraints (including its size), students can be assigned to work with different target structures and to report back their findings to the class. The corpus provides unique potential for precisely this kind of autonomous learning.
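When students report back, the coding-and-counting step can be tabulated in a few lines; a minimal sketch (the codings below are hypothetical student labels, chosen to match the guided-conversation distribution in Table 3):

```python
from collections import Counter

# Hypothetical student codings of OL-cluster tokens found in one extract:
# each token is labelled with the variant the student heard.
codings = ["reduced", "reduced", "unreduced (with schwa)", "reduced",
           "reduced", "unreduced (with schwa)", "reduced"]

counts = Counter(codings)
for variant, n in counts.most_common():
    print(f"{variant}: {n}/{len(codings)} = {100 * n / len(codings):.0f}%")
# reduced: 5/7 = 71%
# unreduced (with schwa): 2/7 = 29%
```

Groups assigned different target structures can pool such tallies across conditions and compare the resulting distributions in class.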

5.3 Developing Automaticity

Repetition is clearly a key ingredient for developing automaticity, and the word lists are of obvious utility for this purpose, but single words and / or short collocations can also be extracted from any of the conditions. Beyond simple repetition, practice reading is a perfect next step, and the reading text provides great modeling for this purpose. After listening closely to the text as read by one or more PFC speakers, learners can practice reading it themselves. Learners can do so in pairs or groups, recording their own and others’ readings of the text, and comparing the recordings to those of the PFC speaker model(s) they have used.

For use with PFC conversations, two useful, more advanced activities for developing automaticity are shadowing (Dauer 2004, Grant 2000, Quarterman & Boatwright 2003) and mirroring (Dauer 2004, Monk, Lindgren & Meyers 2003). In a shadowing activity, a learner repeats word for word what a speaker says, trailing by just a word or two. This is a holistic activity, in which learners must pay close attention to imitating as exactly as possible not only the words spoken (including aspects like OL cluster reduction), but every aspect of pronunciation, including rate and rhythm, pauses and hesitations, prominence patterns, and phrasal intonation. With the right equipment, learners can access the input for this activity through headphones and simultaneously record themselves, analyzing the results afterwards in comparison with the model recording.

Mirroring is similar to shadowing, but with more explicit attention to linguistic features ahead of time. Learners transcribe around a minute’s worth of speech (or take a transcript of speech available through the PFC site), and carefully annotate intonation contours, prominences, hesitations, and pauses. PFC transcriptions are based on normal orthography and so do not indicate segmental phenomena like assimilation, deletion, or lengthening, any or all of which can also be annotated, as desired. Learners then practice mirroring the speech as precisely as possible, eventually recording themselves and evaluating the product (with or without the assistance of peers).

A great extension of these types of activities using the PFC corpus is to have students act as if they themselves were subjects of the study, participating in the guided and free conversation components. Students can interview each other, using questions of the type asked in the guided conversations, and / or they can simply engage in free conversation. Depending on the amount of time that can be allotted to this type of activity, these conversations can be recorded, transcribed and analyzed for the feature(s) treated in class.

6 Exploiting the Corpus for Other Aspects of OCC

As mentioned earlier, the PFC-EF project is designed specifically to explore the use of the PFC corpus for pedagogical purposes. These purposes are broad, and they are not all related to teaching pronunciation. In fact, of the nine pedagogical sheets (fiches pédagogiques) available for download (http://www.projetpfc.net/ressources-didactiques/fichespedago.html), very few have activities with an explicit focus on pronunciation per se (i.e., in the procedural sense). Certainly, all provide rich input to learners, and several have activities focusing on building declarative knowledge (for example, of obligatory / optional liaisons, and the realization / deletion of the schwa), but there is little related to basic production (one activity calls for repetition of verb forms in casual versus more formal speech) or to developing automaticity as described in Section 5.3 (one imitation exercise contrasting two regional varieties). Nevertheless, most activities found in these documents are in some way more generally related to the development of OCC.

A common stated objective of the sheets is to sensitize learners to phonological variation, including to prejudices that exist with respect to certain geographical varieties, or features thereof. Another common objective is to draw learners’ attention to markers of spontaneous oral discourse (and in so doing, to stylistic variation itself). Noticing these markers may happen implicitly (learners hear them, and they see them in the transcripts when provided), but it is sometimes done explicitly as well, either by pointing them out, or by asking students to identify them in a transcript. One of the pedagogical sheets has as an explicit goal to develop awareness of register, in this case as manifested at the level of lexis. All of these types of awareness – of geographic and stylistic variation, and of features of oral as opposed to written discourse – are important aspects of OCC.

Probably the most common activity in the pedagogical sheets, and one of obvious relevance to the development of OCC, is the listening comprehension activity. This is often the first activity in a sequence, sometimes preceded by some type of predicting exercise to activate schemata. The conversations are ideally suited to this purpose, as topics vary considerably and are often full of cultural information one may not easily find elsewhere. (In our case, the speaker discussed the history of bilingual education in the province of British Columbia.)

Other activities in the pedagogical sheets focus explicitly on phonological, grammatical, or even orthographic form (the latter to build declarative knowledge with respect to spelling-to-pronunciation regularities). Possibilities with respect to focus on phonological form are manifold (reduction of OL clusters is one example), and the corpus offers an unparalleled richness in this regard, since its raison d’être is phonological analysis. It is perhaps less obvious that the corpus is equally rich when it comes to focus on grammatical form, but it is – the guided and free conversations particularly so.
The beauty here is that learners see grammar in completely natural contexts and witness how grammatical structures function in real communicative situations. Yet another activity found in the pedagogical sheets is one in which students take content from a segment of conversation and use it to write a text representative of some form of written language. The form of written language students are asked to produce can range from what one might find on a postcard to something far more formal, with expected features of the language changing accordingly, and all being quite distinct from the spoken form.

If we use the pedagogical sheets as a model, whether specific pedagogical applications of the corpus focus on advanced listening skills, attending to phonological or grammatical form, or even on the development of writing, they will be tailored to a specific intended audience (defined with respect to competence according to the Common European Framework of Reference for Languages), and will have clear objectives with respect to phonological, grammatical, sociolinguistic and / or discourse features. Objectives will be met through a variety of tasks, using specified PFC material. The content from the guided and free conversations can be used to organize lessons around themes, or the varieties of language to be explored can drive the organization. At all levels, there is much freedom on the part of instructors and, to the extent that they wish to use the pedagogical sheets as models, most will have their own expertise to draw on in adapting them. What the PFC corpus brings to the table is a wealth of raw material to draw on.


7 Benefits of Using Corpus Data in the Development of OCC

People may have differing views with respect to the principal benefits of using corpus data for teaching pronunciation. For example, where the instructor sees a tremendous benefit in exposing students to a wide range of varieties of French, and certainly in exposing students in Canada to Canadian varieties, others with more prescriptive proclivities may disagree. We should take seriously the problem identified in Auger (2002), whereby students leaving French immersion programs in Montreal are unable to interact with speakers in the communities in which they live and work, because, as the author demonstrates, the pedagogical norm they are exposed to is too distant from the language used in the community. Learners of French in any part of Canada should be equipped (i.e., have the OCC) to interact with French-speaking Canadians. The PFC corpus offers authentic language to familiarize learners with this variety of the language amongst others, and it can be the focus of attention to a lesser or greater extent depending on the specific goals of the instructor or, more importantly, the learners in question.

Another huge benefit of working with a corpus is that it makes students aware of language in a general sense. If we look at Tables 1 through 4, corresponding to the four conditions of the PFC project, we see a number of discoverable aspects of language from just this limited data. Already from the data summarized in Table 1, students may sense a frequency effect – i.e., that a very commonly used word, or one that occurs with a high frequency in a given context, like quatre, will be subject to reduction processes to a much higher degree than less frequent words. They may also notice where quatre does not reduce – in its first occurrence in the reading of the first word list – and hypothesize the relevance of repetition to reduction.
Another phenomenon for which learners may notice evidence in the word-list condition is lexicalization, as in the form quatre-vingts, where reduction does not occur in Canadian varieties. In all of these instances, an instructor may wish to point these things out to learners rather than relying on them to notice. From the data summarized in Table 2, there is another example of apparent lexicalization, with the form Ministre, which is pronounced [minis] with only one exception. The exception provides another valuable lesson, demonstrating the messiness of naturalistic data. Yet another case of lexicalization is apparent from the data summarized in Table 3, that of Notre-Dame-de-Lourdes. The combined data from Tables 1-4, of course, provide information about variation across styles of speech: reduction is more likely in less formal conditions than in more formal ones like reading from a text, or especially reading words from a list. This type of metalinguistic knowledge is useful to learners as they work towards advanced OCC, and it is also inherently interesting, and so likely to keep them engaged in the learning process.

There is unlikely to be disagreement with respect to the potential of the corpus to provide quantitatively and qualitatively rich input, and we have extensively described the benefits this brings to the teaching of pronunciation as well as to the development of other aspects of OCC. The guided and free conversations provide a wealth of authentic language, much of which is also of cultural interest. Another indisputable fact is that the corpus data are ideal for raising awareness of different dimensions of sociolinguistic variation, since the corpus was designed precisely to capture such variation.

Another benefit to mention is that the corpus promotes learner autonomy and learning through a discovery process. Learners can explore the corpus on their own, and they can discover interesting aspects of the language (and of language in general) on many levels. Learners can, to a greater or lesser extent, be guided in what they do with the corpus by specially designed pedagogical materials, but what they learn is almost certain to extend beyond the specified goals of a given activity. Further, the inherent interest of the corpus is likely to encourage learners to engage with it beyond what might be specifically assigned. Finally, it is worth highlighting the perhaps unintuitive usefulness of the spoken corpus for developing writing skills. This potential can be exploited by explicitly comparing spoken and written language, and by having students convert informational content from conversations into written text.

References

Auger, Julie (2002). French Immersion in Montréal: Pedagogical Norm and Functional Competence. In: Gass, Susan M. et al. (Eds.) (2002). Pedagogical Norms for Second and Foreign Language Learning and Teaching. Amsterdam: John Benjamins, 81-101.

Boersma, Paul & David Weenink (2016). Praat: Doing Phonetics by Computer [Computer Program]. Version 6.0.15. (http://www.praat.org/; 23-05-2016).

Bybee, Joan (2000). The Phonology of the Lexicon: Evidence from Lexical Diffusion. In: Barlow, Michael & Suzanne Kemmer (Eds.) (2000). Usage-based Models of Language. Stanford: CSLI, 65-85.

Bybee, Joan (2001). Phonology and Language Use. Cambridge: Cambridge University Press.

Chen, Hsueh Chu et al. (2014). The Spoken Corpus of the English of Hong Kong and Mainland Chinese Learners. In: The Hong Kong Institute of Education. (http://corpus.ied.edu.hk/phonetics/; 23-05-2016).

Côté, Marie-Hélène (2004). Consonant Cluster Simplification in Québec French. In: Probus 16 (2004) 2, 151-201.

Dalton, Christiane & Barbara Seidlhofer (1994). Pronunciation. Oxford: Oxford University Press.

Dauer, Rebecca M. (2004). Ways of Using Video: A Report from TESOL’s 2003 Convention. In: SPLIS Newsletter. As We Speak 1 (2004) 1, 9-10.

Detey, Sylvain et al. (Eds.) (2010). Les variétés du français parlé dans l’espace francophone. Ressources pour l’enseignement. Paris: Éditions Ophrys.

Durand, Jacques, Bernard Laks & Chantal Lyche (2002). La phonologie du français contemporain: usages, variétés et structure. In: Pusch, Claus D., Wolfgang Raible & Johannes Kabatek (Eds.) (2002). Romanistische Korpuslinguistik – Korpora und gesprochene Sprache / Romance Corpus Linguistics – Corpora and Spoken Language. Tübingen: Gunter Narr Verlag, 93-106.

Durand, Jacques, Bernard Laks & Chantal Lyche (2009). Le projet PFC: Une source de données primaires structurées. In: Durand, Jacques, Bernard Laks & Chantal Lyche (Eds.) (2009). Phonologie, variation et accents du français. Paris: Hermès, 19-61.

Grant, Linda (2000). Well Said: Pronunciation for Clear Communication. Boston: Heinle & Heinle.

Gut, Ulrike (2005). Corpus-based Pronunciation Training. In: Proceedings of the Phonetics Teaching and Learning Conference. London: University College London. (https://www.ucl.ac.uk/pals/study/cpd/cpd-courses/ptlc/proceedings_2005).

Jones, Randall (1997). Creating and Using a Corpus of Spoken German. In: Wichmann, Anne et al. (Eds.) (1997). Teaching and Language Corpora. Harlow: Addison Wesley Longman, 146-156.

Knowles, Gerald (1990). The Use of Spoken and Written Corpora in the Teaching of Language and Linguistics. In: Literary & Linguistic Computing 5 (1990) 1, 45-48.

Mauranen, Anna (2004). Spoken Corpus for an Ordinary Learner. In: Sinclair, John (Ed.) (2004). How to Use Corpora in Language Teaching. Amsterdam: John Benjamins, 89-105.

Milne, Peter (2016). The Variable Pronunciations of Word-final Consonant Clusters in a Force Aligned Corpus of Spoken French. Paper Presented at the Montréal-Ottawa-Laval-Toronto Phonology Workshop, Carleton University, Canada.

Monk, J., C. Lindgren & M. Meyers (2003). The Mirroring Technique in Prosodic Acquisition. Paper Presented at the 37th Annual TESOL Convention, Baltimore, USA.

Morin, Yves-Charles (2000). Le français de référence et les normes de prononciation. In: Le Cahier de l’Institut de linguistique de Louvain 26 (2000) 1, 91-135.

Ostiguy, Luc & Claude Tousignant (²2008). Le français québécois: normes et usages. Montreal: Guérin.

Quarterman, Carolyn & C. Boatwright (2003). Helping Pronunciation Students Become Independent Learners. Paper Presented at the 37th Annual TESOL Convention, Baltimore, USA.

Reed, Marnie (2016). Teaching Talk and Tell-backs: The Declarative to Procedural Knowledge Interface. Paper Presented at the Seventh Annual Pronunciation in Second Language Learning and Teaching Conference, Dallas, USA.

Santos Pereira, Luísa Alice (2004). How to Use Corpora in Language Teaching. In: Sinclair, John (Ed.) (2004). How to Use Corpora in Language Teaching. Amsterdam: John Benjamins, 109-122.

30

JLLT Volume 8 (2017) Issue 1 Sinclair, John (2004). How to Use Corpora in Language Teaching. Amsterdam: John Benjamins. Wichmann, Anne (1997). The Use of Annotated Speech Corpora in the Teaching of Prosody. In: Wichmann, Anne et al. (Eds.) (1997). Teaching and Language Corpora. Harlow: Addison Wesley Longman, 211-223. Wichmann, Anne et al. (Eds.) (1997). Teaching and Language Corpora. Harlow: Addison Wesley Longman.

Author:
Randall Gess, Ph.D.
Professor / Professeur titulaire
School of Linguistics and Language Studies / Département de français
1618 Dunton Tower, Carleton University
1125 Colonel By Drive
Ottawa ON K1S 5B6, Canada
Email: [email protected]


JLLT Volume 8 (2017) Issue 1


A Corpus-Based Approach to Distinguishing the Near-Synonyms Listen and Hear

Siaw-Fong Chung (Taipei, Taiwan (R.O.C.))

Abstract

The present study aimed to compare the verbs listen (to) and hear based on lexical resources and corpora, including (a) WordNet, which contains sense frequency information taken from the Brown Corpus and The Red Badge of Courage; (b) the British National Corpus (BNC); and (c) a writing task for English learners that focused on the uses of listen (to) and hear. The two verbs were compared in terms of sense frequency distributions as well as collocational information. Similarities and differences between the uses of the verbs listen (to) and hear were also analyzed using Sketch Engine, a lexical resource that enables collocational patterns from the BNC to be displayed according to grammatical relations. In the writing task, it was found that, for both verbs, students focused on a certain meaning. In addition, the BNC showed different sense distributions compared with WordNet and the learner data, as well as more figurative meanings. In both the learner data and WordNet, literal meanings predominated for both verbs. This study contributes to the practice of connecting corpus data to teaching and learning.

Keywords: near-synonyms, listen, hear, corpus, collocation, learner

1 Introduction

It has commonly been found that learners are often confused by the close meanings of near-synonymous words. Taylor (2003), for example, defines synonymy as “a single meaning [which] is symbolized by two or more distinct phonological forms” (Taylor 2003: 246), indicating that synonyms are words that share a similar meaning. Nevertheless, many (e.g. Lyons 1968, Taylor 2003) have noted that perfect synonyms are very infrequent and that most synonyms are near-synonyms:

[P]erfect synonymy is vanishingly rare, methodologically proscribed, or a logical impossibility; what we frequently do encounter are pairs of words that are “near” synonyms. (Taylor 2003: 265)


In other words, two closely related words are seldom used in exactly the same way. Moreover, near-synonyms usually differ in a subtle way, as Bolinger (1977) noted: “if two ways of saying something differ in their words or their arrangement they will also differ in meaning.” (Bolinger 1977: 1)

With regard to near-synonyms, some studies have focused on the semantic distinctions between the compared words. With the help of computer-aided technology, studies on near-synonyms can now be carried out based on abundant computer-generated data. For example, in a corpus-based study on the English verbs start and begin, Biber et al. (1998) discovered differences between these two verbs through their semantic behaviors and complement types. Liu (2010) identified the semantic meanings and pattern differences of five synonymous adjectives - chief, main, major, primary, and principal - using a corpus. Chief et al. (2000), on the other hand, used corpus data to examine the distributional patterns of the Chinese near-synonyms 方便 fang1bian4 and 便利 bian4li4 (both meaning “to be convenient”). Most studies on near-synonyms, regardless of the part of speech analyzed, have so far arrived at the consensus that near-synonyms can be distinguished in terms of semantic and/or syntactic patterns. It has been stated that the lack of knowledge of the distinctions among near-synonyms could be one reason why students use them incorrectly (Partington 1998, Hoey 2000, McEnery & Xiao 2006).

In the present paper, the English near-synonym pair listen (to) and hear was examined using different types of corpora.1 The overall purpose of this study was to first conduct a corpus-based analysis of listen (to) and hear and, later, to apply the information gathered from the corpora to a writing task. The research questions are as follows:

(a) How can a corpus-based study elicit similarities and differences between the verbs listen (to) and hear?

(b) How do the verbs listen (to) and hear behave linguistically when observed in a general corpus and in a writing task?

These research questions will be answered based on three sets of data:

- WordNet 2.1, which contains the sense information of each word taken from two corpora (the Brown Corpus and Stephen Crane’s novella The Red Badge of Courage) (Landes et al. 1998);

- a randomly selected sample from a native-speaker corpus (the British National Corpus); and

- a collection of data based on a writing task.

Among these data, the first two are native-speaker corpora. As Ellis and Barkhuizen (2005) pointed out, one way to understand how language learners acquire a second language is to examine their language production. Our study examined the similarities and differences between the verbs listen (to) and hear produced in a guided writing task versus the corpus data. In addition, we also made use of WordNet (Miller et al. 1990, Fellbaum 1998) to examine the senses of the two verbs.2 The comparisons between the lexical resource and the corpus data should achieve the following results: 1. show how the uses of these two verbs may (or may not) differ in different corpora; and 2. provide an opportunity to compare the uses of these two verbs in a task in which the elicitation of these two verbs was intentional. In the following section, the methodology employed will be discussed.

1 The preposition to is sometimes needed to establish listen as a near-synonym of hear. As such, we can already predict that these two verbs are not perfect synonyms because they differ syntactically. A more in-depth analysis will be provided in later sections of this paper.

2 Methodology

Two main types of data were used, namely native-speaker data and learner data. The native-speaker data were constituted by the analyses of senses in WordNet and the British National Corpus (BNC), while the learner data comprised students’ writings, which were collected through a designed activity. Our methodology can be divided into three main steps:

1. The meanings of the verbs listen and hear as well as their respective sense frequencies were compared based on the senses provided by WordNet 2.1. This comparison of senses also ensured that these two verbs share at least one meaning so as to qualify as near-synonyms.

2. The analyses of the two verbs were then carried out based on 500 random examples taken from the BNC.3 The analyses were based on collocations, verb forms, and the meanings of the two verbs, and each instance was compared to the senses provided by WordNet 2.1. Statistical data on collocations and verb forms were extracted from Sketch Engine (Kilgarriff & Tugwell 2001), a lexical resource that enables collocational patterns to be displayed according to grammatical relations, such as SUBJECT, OBJECT, and MODIFIER.

3. The data from the BNC were further compared to the elicited data collected from a writing task.

2 WordNet is a lexical database that provides the semantic relations (such as synonyms, antonyms, hypernyms, and hyponyms) of lexical items. It is available at http://wordnet.princeton.edu/. The term sense refers to a meaning entry listed in WordNet. Since our analysis of the meanings of listen (to) and hear is based on their senses, the terms sense and meaning are used interchangeably in this article.

3 The British National Corpus is a 100-million-word collection of written and spoken texts. For more information, see http://www.natcorp.ox.ac.uk/ (25-01-2017).
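The sense-frequency comparison in Step 1 amounts to turning raw WordNet sense counts into percentage distributions. The short sketch below illustrates this computation; the helper function and dictionary layout are our own, and the raw counts are those reported in Table 1.

```python
# Sketch of Step 1: convert raw WordNet sense counts into percentage
# distributions (as in Table 1). Counts are taken from the article's Table 1;
# the function name is illustrative only.

def sense_distribution(counts):
    """Map each sense to (raw frequency, percentage of the total)."""
    total = sum(counts.values())
    return {sense: (n, round(100 * n / total, 1)) for sense, n in counts.items()}

listen_counts = {
    "hear with intention": 60,
    "listen and pay attention": 34,
    "pay close attention to": 4,
}
hear_counts = {
    "perceive sound via the auditory sense": 275,
    "get to know or become aware of": 60,
    "hear evidence by judicial process": 12,
    "receive a communication from someone": 8,
}

listen_dist = sense_distribution(listen_counts)
hear_dist = sense_distribution(hear_counts)
print(listen_dist["hear with intention"])                  # (60, 61.2)
print(hear_dist["perceive sound via the auditory sense"])  # (275, 77.5)
```

The percentages produced this way match the ones reported in Table 1 (e.g., 61.2% for hear with intention and 77.5% for perceive sound via the auditory sense).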

3 Analysis

3.1 Step One: WordNet

The search in WordNet 2.1 was conducted so as to investigate the senses of the verbs listen and hear and to further decide whether the two are near-synonyms. The senses and their respective sense frequencies are presented in Table 1 below. As mentioned above, the frequencies in Table 1 are those provided by WordNet based on the occurrences of the two verbs in two corpora - the Brown Corpus and Stephen Crane’s novella The Red Badge of Courage.

Table 1 shows that the verb listen (to) has three senses, of which the first - hear with intention - is the most frequently used, constituting 61.2% of the total instances.4 The second sense of the verb listen - listen and pay attention (34.7%) - is also found among the senses of the verb hear (Sense 5), indicating that these two verbs have an overlapping meaning. The third sense - pay close attention to - is the least frequently used, with only 4.1% of the total instances. On the other hand, the most frequent sense of the verb hear is perceive sound via the auditory sense, covering 77.5% of the total instances displayed in WordNet. This is followed by the sense get to know or become aware of, with 16.9% of the total instances. The other senses - each below 5% - are hear evidence by judicial process and receive a communication from someone. However, since the frequency of the sense listen and pay attention (Sense 5) is not available for the verb hear, comparisons between the two verbs cannot be made for this sense.

This analysis shows that these two verbs are near-synonyms, with only one overlapping meaning. Most of the other meanings are different, which confirms the observations made by Chung and Ahrens (2008) and Taylor (2003). Cruse (1986: 267) also states that near-synonyms usually share some “central semantic traits” but differ in “peripheral traits”.5 With the use of quantitative data, we showed what these traits are. The next step of our analysis was based on the BNC.

4 The total numbers in Table 1 were computed by the authors, who assumed that the respective frequency for each sense also reflected the total number of tokens found for listen and hear in the two corpora examined.

5 However, as shown in Table 1, the “central” trait may not be the most frequent trait.

Listen
  Sense  Senses (WordNet 2.1)                                                    Frequency (in percent)
  1      Hear with intention. (e.g., Listen to the sound of this cello.)         60 (61.2%)
  2      Listen and pay attention. (e.g., Listen to your father.)                34 (34.7%)
  3      Pay close attention to. (e.g., Listen to the advice of the old man.)    4 (4.1%)
  Total                                                                          98 (100.0%)

Hear
  Sense  Senses (WordNet 2.1)                                                    Frequency (in percent)
  1      Perceive sound via the auditory sense.                                  275 (77.5%)
  2      Get to know or become aware of, usually accidentally.
         (e.g., I heard that you have been promoted.)                            60 (16.9%)
  3      Hear evidence by judicial process.
         (e.g., The jury had heard all the evidence.)                            12 (3.4%)
  4      Receive a communication from someone.
         (e.g., We heard nothing from our son for five years.)                   8 (2.3%)
  5      Listen and pay attention.
         (e.g., We must hear the expert before we make a decision.)              N/A6
  Total                                                                          355 (100.0%)

Table 1: The Distributions of Senses for the Verbs Listen and Hear in WordNet 2.1

6 When “N/A” is shown, it is either because this sense never occurred in the corpora or because this sense was added after the sense analysis had been carried out.


3.2 Step Two: The British National Corpus and the English Sketch Engine

The search using the English Sketch Engine resulted in 11,096 instances of listen appearing in verb forms (present tense, past tense, etc.).7 Hear, by contrast, had 34,609 instances in different verb forms. The corpus data from the BNC were analyzed in terms of collocations and verb forms, as reported in the following.

3.2.1 Collocations of Listen and Hear

Based on corpus data from the BNC, it was found that listen almost never appeared in the transitive form, as in *I listen words. In fact, both Dictionary.com and the Merriam-Webster online dictionary label the transitive use of listen as archaic:

Dictionary.com: verb (used with object). Archaic. to give ear to; hear. (http://www.dictionary.com/browse/listen?s=t; 18-07-2017)

Merriam-Webster: archaic: to give ear to: hear. (https://www.merriam-webster.com/dictionary/listen; 18-07-2017)

When searching for the prepositions of the verb listen in the BNC, we found the results shown in Table 2:

Word       Observed Collocate Frequency   T-score Value
to         6026                           74.35
for        242                            10.11
with       162                            7.66
without    18                             3.22
outside    10                             2.79
at         69                             2.15
through    14                             1.86

Table 2: Top Prepositions Following Listen in the British National Corpus
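The t-scores in Table 2 come from the BNC. A commonly used t-score approximation for collocations is t = (O - E) / sqrt(O), where O is the observed co-occurrence frequency and E = f(node) * f(collocate) / N is the expected frequency under independence. The sketch below implements this formula; the collocate and corpus figures passed to it are invented for illustration and do not reproduce the Table 2 values.

```python
import math

# A standard t-score approximation for collocation strength:
#   t = (O - E) / sqrt(O),  E = f(node) * f(collocate) / N
# O: observed co-occurrences, f(.): corpus frequencies, N: corpus size.

def t_score(cooccurrence, node_freq, collocate_freq, corpus_size):
    expected = node_freq * collocate_freq / corpus_size
    return (cooccurrence - expected) / math.sqrt(cooccurrence)

# Hypothetical figures for "listen" + "to" (the frequency of "to" and the
# corpus size are invented; only the co-occurrence count is from Table 2):
print(t_score(6026, 11096, 2_600_000, 100_000_000))
```

High-frequency function words have large expected frequencies, which is why the t-score, rather than raw frequency, is used to rank collocates.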

The collocation of listen with the top-listed preposition to constituted most of the instances (6,026; 54.3%) of the 11,096 instances of listen as a verb.8 This means that most of the instances of the verb listen were intransitive. Comparatively, hear appeared in both transitive (I hear voices.) and intransitive (I heard from him lately.) forms. Based on these findings, we compared the objects of listen (to) and hear (instead of listen and hear) in the analysis below. Table 3 shows the top 20 collocates under the grammatical relation of OBJECT for listen (to) and hear based on the results from Sketch Engine. Note that some noun phrases were not collected in Sketch Engine, as it provides collocates only at the word level. An example is given in (1) below:

(1) (...) but the number of people that don’t listen to what customers are saying to them

In Table 3 below, saliency is a measurement of significance in terms of the collocates used with a certain word in a certain grammatical relation.9 The overlapping objects in the top 20 collocates of listen (to) and hear are story, sound, voice, noise, and footstep, which appear in each list.10

Listen to                              Hear
Collocate      Frequency  Saliency    Collocate   Frequency  Saliency
music          199        38.13       voice       718        43.38
radio          117        34.99       sound       497        43.31
tape           62         28.99       footstep    137        42.45
report         155        28.14       noise       229        38.4
conversation   54         26.64       scream      82         36.39
story          88         25.8        rumour      94         34.42
sound          55         22.66       news        253        32.53
voice          72         21.92       cry         94         31.62
lecture        24         20.42       shout       50         30.94
word           77         19.65       story       226        28.62
wind           33         19.62       click       35         28.42
noise          27         19.35       word        331        27.77
advice         37         18.74       murmur      30         27.28
song           28         18.46       bang        39         26.56
footstep       12         17.78       thud        23         26.28
commentary     12         17.65       bell        63         25.88
speech         29         17.06       whisper     33         25.77
recording      18         16.91       siren       26         24.93
breathing      10         16.24       knock       33         24.89
talk           26         15.8        rustle      17         24.86

Table 3: Top 20 Collocates for the OBJECT Relation of Listen to and Hear

7 Listen and listen to are both near-synonyms of the verb hear, but in some corpora, the search for listen to is not allowed (such as the search for collocations in Sketch Engine).

8 An asterisk (*) serves as a wildcard to extract any verb forms beginning with listen (to) and hear (including -ing, -ed, etc.). However, when an asterisk is used in an example sentence, such as *I listen music, this means that the sentence is grammatically incorrect. A question mark appearing before a sentence means that the sentence is unnatural but not ungrammatical.

9 However, saliencies are list-independent and, thus, cannot be compared across lists (such as between listen (to) and hear), but they do indicate the rank of importance among the collocates in the same list.

10 The total number of collocates for the objects of hear does not amount to 34,609, which is the total number of instances for hear in the whole British National Corpus, because some instances of hear were intransitive.
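The overlap between the two top-20 OBJECT lists can be checked directly from the Table 3 data (word forms as listed there). Note that, besides the five nouns story, sound, voice, noise, and footstep, the noun word also appears in both top-20 lists:

```python
# Intersect the two top-20 OBJECT collocate lists from Table 3.

listen_to_objects = [
    "music", "radio", "tape", "report", "conversation", "story", "sound",
    "voice", "lecture", "word", "wind", "noise", "advice", "song",
    "footstep", "commentary", "speech", "recording", "breathing", "talk",
]
hear_objects = [
    "voice", "sound", "footstep", "noise", "scream", "rumour", "news",
    "cry", "shout", "story", "click", "word", "murmur", "bang", "thud",
    "bell", "whisper", "siren", "knock", "rustle",
]

shared = sorted(set(listen_to_objects) & set(hear_objects))
print(shared)  # ['footstep', 'noise', 'sound', 'story', 'voice', 'word']
```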

As also found in the sense analysis in Table 1, perceive sound via the auditory sense was the most frequent sense of hear. Table 3 shows that voice, sound, footstep, and noise appear at the top of the list for hear, but at different positions in the listen-to list. Similarly, the most frequent sense of listen (to) in Table 1 was hear with intention, and this is reflected in its top five objects in Table 3 - music, radio, tape, report, and conversation - all information that listeners perceive with intention.11 Furthermore, we also found that many of the objects of hear had a negative meaning (scream, rumour, cry, shout, and siren, as well as gunshots, explosion, roar, and gunfire, which were not among the top 20 collocates), representing events that occur unexpectedly to the agent. When used with the verb listen to, the unintended meaning is lost, i.e., someone listens to gunshots, an explosion, a roar, and gunfire on purpose, which is possible but not usual. In fact, as near-synonyms, the verbs listen to and hear are sometimes used interchangeably in limited contexts where their meanings overlap, such as the substitutable meanings shown in Table 3. In examples such as listen to the radio and hear on the radio, the noun radio denotes the event of broadcasting through the radio as a medium, not the physical radio itself. More examples of this are shown in (2) below; note the object (opinion) and the noun phrase (what people are saying):

(2) (a) I would love to listen to the opinion of other readers on this subject.
    (b) I would love to hear the opinion of other readers on this subject.
    (c) Before reaching your final decision, you must listen to what people are saying.
    (d) Before reaching your final decision, you must hear what people are saying.

The examples in (2) show the same sense for the two verbs, and this shared meaning is also the feature that defines these two verbs as a near-synonymous set. Unlike the sentences in (2) above, not all uses of listen to and hear in (3) below are interchangeable. Even though the same object (e.g., story) appears with both verbs, their meanings may not be identical because they can be interpreted differently, as shown in (3). In (3a), listen to stories means listen to narrated stories (with the intention of listening). By contrast, heard stories in example (3b) means receive messages (probably from unknown sources) unintentionally:

(3) (a) We used to love to listen to stories about the past of the family.
    (b) He had heard many stories about Yanto and knew he was a rough handful.

11 Therefore, it can also be postulated that sense frequency can be obtained by observing the ranking of the collocates, which is arranged according to saliency. Predicting sense frequency is an interesting area of research that can be carried out through collocational analyses.

These two sentences clearly show a distinctive feature of these two verbs: intendedness in listen (to) is unmarked, while intendedness in hear is marked (Battistella 1990). The noun stories is therefore used in two different senses with listen to and hear, respectively, with (3a) referring to an account of incidents or events and (3b) referring to the metaphorical extension of stories to mean a widely circulated rumor (meanings taken from Merriam-Webster Online, 18-07-2017; https://www.merriam-webster.com/dictionary/story).

In addition to the examples discussed above, listen to and hear may denote different senses in some other contexts, where they become non-interchangeable. In other words, if one verb is replaced by the other, this may result in a change of meaning, as shown in (4) below:

(4) (a) You can listen to the radio while you’re working.
    (b) ?You can hear the radio while you’re working.
    (c) He listened to the screams and bangs coming from Beatrice’s cottage.
    (d) He heard screams and bangs coming from Beatrice’s cottage.

With regard to examples (4a) and (4b), listen to the radio does not mean the same as hear the radio. These two uses differ in the intendedness of listen to, which is absent in hear (i.e., hear the radio means that someone hears some information by accident or without the intention of listening). Example (4c) is acceptable because the definite article is added before screams and bangs. In this case, listened to has an undertone of "intendedness", meaning that the objects are definite, as the listeners know what they are listening to. By contrast, the definite article may or may not be present in (4d) because hear does not have this restriction. In (5) below, it is more natural to say (5a) than (5b) because the adverb attentively contradicts the unintended meaning of heard:

(5) (a) Helen listened attentively as Sophie revealed her new plan.
    (b) *Helen heard attentively as Sophie revealed her new plan.

To further explain the similarities and differences of the two verbs as displayed in example (5), Table 4 provides the collocational data for the grammatical relation of MODIFIER for the verbs listen (to) and hear:

Listen (to)                              Hear
Collocate        Frequency  Saliency    Collocate   Frequency  Saliency
attentively      63         61.8        yesterday   289        51.1
intently         79         60.8        before      86         39.7
carefully        201        56.9        almost      79         27.9
sympathetically  12         29.3        distinctly  20         26.3
politely         15         28.7        clearly     53         25.9
hard             29         27.8        today       41         25.1
patiently        13         27.4        all         44         25
closely          20         22.8        once        46         24.6
please           22         22.6        aright      6          24.4
impassively      5          22          scarcely    18         23.4

Table 4: Top 10 Collocates for the MODIFIER Relation of Listen (to) and Hear12

The data in Table 4 show that the verb listen (to) is followed by adverbials such as attentively, intently, carefully, sympathetically, impassively, and closely, most of which show that the listener is in control of how the listening process is conducted, depending on the intention of the listener. Comparatively, the verb hear is usually followed by adverbials of time (yesterday, before, today, and once) and manner (distinctly, clearly, aright, and scarcely). Through Sketch Engine, terms coordinating with listen (to) and hear were also found. Listen (to) was found to often collocate with physical bodily actions such as sit, speak, talk, respond, stand, and stay, and only one perception verb (look) is shown in Table 5 (though the list is not exhaustive for listen (to)).13 By contrast, hear collocates with only four verbs, two of which are perception verbs (see and smell), while the other two are mental verbs (determine and know):

12 The collocates in Tables 3 and 4 can apply to both listen and listen to. Thus, the notation listen (to) is used in the titles and respective columns of both tables.

13 Both respond and stay can also be mental verbs.

Listen (to)
Collocate   Frequency  Saliency    Collocate   Frequency  Saliency
sit         49         29.4        stop        17         17.0
speak       29         24.4        look        29         16.8
talk        24         22.0        wait        12         15.7
learn       19         19.6        stand       12         13.3
respond     12         18.4        stay        8          12.6

Hear
Collocate   Frequency  Saliency    Collocate   Frequency  Saliency
see         331        40.7        determine   21         19.8
smell       15         24.8        know        9          5.6

Table 5: Collocates for the Relation of AND / OR for Listen (to) and Hear

Table 5 shows that listen (to) and hear collocate with certain verb types more often than with others. The examples in (6) show some coordinating verbs with listen (to) and hear, respectively:

(6) (a) Children learn to listen and speak through participation in projects.
    (b) ?Children learn to hear and speak through participation in projects.
    (c) As a very small child, I’d sit and listen while he read me the comic.
    (d) ?As a very small child, I’d sit and hear while he read me the comic.
    (e) ?Jason saw and listened nothing during the night.
    (f) Jason saw and heard nothing during the night.
    (g) ?Many elderly people can’t see or listen very well.
    (h) Many elderly people can’t see or hear very well.

In (6), some of the examples are unnatural when listen (to) and hear are substituted for one another. This, again, can be related to the unintendedness of hear because in examples such as (6b) and (6d), it is clear that when the agents speak or sit, their actions are conscious and intended. Therefore, the combinations hear and speak and sit and hear, in which an unintended action (hear) is coordinated with an intended action (speak / sit), are grammatical but semantically awkward. Similarly, when someone saw and heard nothing (6f), the hearing process occurred without the intention of the agent. Nevertheless, the combination saw and listened in (6e) violates this meaning because it hints at a purposive listening process which has no goal (nothing) and is therefore contradictory. A similar interpretation can be applied to (6g), because both see and hear can be unintentional, but the combination see or listen represents a conflict of meaning. In the following section, the verb forms of listen (to) and hear will be examined.


3.2.2 Verb Forms of Listen (to) and Hear

Figure 1 shows how listen (to) and hear may also differ in their verb forms and how these verb forms may affect the meanings of these two verbs:

Figure 1: Distributions of Different Verb Forms for Listen (to) and Hear14

Figure 1 shows that, compared to hear, listen (to) appeared more frequently in the -ing form (29.3% vs. 6.4%) and in the finite base form (24.3% vs. 8.7%). Examples of these two verb forms are given in (7) below:

(7) (a) The old man and the girl are listening attentively.
    (b) I kept it up until I was certain you were not hearing a word.
    (c) Listen to this, listen to this.
    (d) Now hear this and hear it good.

The progressive form is usually employed for events which happen throughout a certain period of time; thus, the verb hear, which usually denotes an event that happens unexpectedly and unintendedly, is not commonly found in the -ing form. In contrast, hear was encountered more frequently in the infinitive, the past tense, and the past participle than listen, as exemplified in (8):

(8) (a) Everyone always wanted to listen to what she had to say.
    (b) We want to hear your views about any issue affecting the countryside.
    (c) They stood in the darkness behind the door and listened to the footsteps.
    (d) I heard someone coming towards the door.
    (e) It is important that the advice of experts should be listened to.
    (f) The voice of the novelist is heard continually in the speech of his characters.

14 The results in Figure 1 only provide the forms of the verbs (e.g., listening, listened). We still do not know how many of these verb forms are used with to (e.g., listening to vs. listening; listened to vs. listened).

The verb hear (29.1%) was slightly more frequent in the infinitive form than listen (24.6%). The verb hear in the past tense constituted 27.6%, as compared to the verb listen in the past tense (15.2%). As a past participle, heard was far more frequent (26.9%) than listened (4.3%). In fact, we found that hear was usually followed by nominals (objects) which were negative in meaning (e.g., (9a) and (9c)). These nominals were often fronted (e.g., (9b) and (9d)) to emphasize what was being heard:

(9) (a) He heard single gunshots from the house.
    (b) Occasional gunshots can still be heard.
    (c) It was like a nightmare. I heard a loud explosion just to my left.
    (d) Explosions and mortar bombs were heard intermittently throughout the day.
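The form preferences described here can be summarized programmatically. The percentages below are the Figure 1 values quoted in the text; the dictionary layout and comparison are our own illustration.

```python
# Verb-form distributions (in percent) for listen (to) and hear,
# as quoted from Figure 1 in the text.

listen_forms = {"-ing": 29.3, "base": 24.3, "infinitive": 24.6,
                "past": 15.2, "past participle": 4.3}
hear_forms = {"-ing": 6.4, "base": 8.7, "infinitive": 29.1,
              "past": 27.6, "past participle": 26.9}

# Forms in which listen (to) outranks hear:
listen_favoured = [f for f in listen_forms if listen_forms[f] > hear_forms[f]]
print(listen_favoured)  # ['-ing', 'base']
```

This reproduces the asymmetry discussed above: listen (to) favours the progressive and finite base forms, while hear favours the infinitive, past tense, and past participle.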

This is one of the reasons why the verb hear is more frequently used in the passive form than the verb listen (to). Some examples of the verb listen (to) in the past participle form are given in (10) below:

(10) (a) It [music] can be played, listened to, read and written throughout the world…
     (b) Children also need to be listened to and their point of view understood.

From the analyses of the corpus data from the British National Corpus through Sketch Engine, at least six important differences between the verbs listen (to) and hear were identified:

1. The verb listen (to) was marked with intendedness, while the verb hear was marked with unintendedness.

2. The verb listen to was most frequently used to indicate that someone hears something with intention (e.g., listen to music / a report / the radio), and the verb hear was frequently used to refer to someone who perceives sound through the auditory sense (e.g., hear a voice / sound / noise).

3. Due to the undertone of unintendedness in the verb hear, it often collocated with words that had a negative connotation.

4. Since the objects of the verb hear were often negative, they were often fronted, and as a result, more uses of the past participle were found for the verb hear than for the verb listen (to). Comparatively, more progressive and finite base forms (especially in the imperative) were found for the verb listen (to) because both these forms allow purposive actions to take place with intention.

5. In terms of similarities, both verbs were used with objects which denote the production of physical sounds (e.g., music, song, footsteps, breathing, words, etc.).

6. In addition, both verbs were used with an event as in the phrases listen to the radio and hear on the radio, in which radio denotes an event rather than the device itself.
3.3 Step Three: Teaching the Verbs Listen to and Hear

3.3.1 Background

On the basis of our linguistic analysis of listen (to) and hear, we explained the two verbs to a group of students in an English class. Our purpose was to teach the specific meanings of the two verbs and to explain to students how to distinguish them. The explanation took about forty minutes, and the students were tested in the following week. The following provides the background of the participants.

Thirty-nine undergraduate students from a national university in Taiwan participated in this task. Among these students, 12 were male and 27 were female. Their average age was 20. The students were either freshmen or juniors, and they were paid for participating in this activity. The activity consisted of a writing task and a subsequent vocabulary test. Students were asked to write a 100- to 150-word paragraph based on a series of four pictures adapted from the picture book Frog, Where Are You? (Mayer et al. 1969). This story is about a boy searching for his pet frog; the third picture (Figure 2) illustrates the boy putting his hand near his ear to search for the sound of the frog. Based on this illustration, students were expected to use either listen (to) or hear to describe this particular picture, and their language patterns were collected and examined.

The task began with a questionnaire about students’ personal information and language background. The main writing task, which included the instructions on the first page, the pictures on the second page, and a blank sheet of paper on the third page, was then distributed to students. The instructions were also explained by the instructor in Mandarin so as to ensure that students were clear about the task. Before the writing task began, students were allowed to ask questions for clarification regarding the task. After making sure that all the instructions were clear to students, the instructor asked them to turn to the second page. Students were told to focus on the actions of the boy, without being informed about the real purpose of the task. They then turned to the third page and started the writing task.


Picture 1 (Mayer et al. 1969: 1)

Picture 2 (Mayer et al. 1969: 3)

Picture 3 (Mayer et al. 1969: 23)

Picture 4 (Mayer et al. 1969: 27)

Figure 2: Pictures Taken from Frog, Where Are You? (Mayer et al. 1969)

The whole process took about 50 minutes, and the writing task was completed within 30 minutes. After finishing the writing task, all the students received a vocabulary test, which was adapted from the vocabulary test by Redman (2002). There were ten multiple-choice questions on the test, and each question was accorded a score of ten points. Students were asked to hand in the test after five minutes. The vocabulary test was used as a reference in addition to the students’ self-reported English proficiency scores collected at the beginning of the activity (students had rated their own ability and provided their previous official examination scores or the English scores recognized by the university before their admission).



3.2.3.2 Results

The mean of students’ self-rated Mandarin proficiency was 6.8 on a scale ranging from 1 to 7, with 1 being the least proficient and 7 the most proficient. The mean of students’ self-rated English proficiency was 4.6. The initial questionnaire also collected students’ self-reported scores on the English Basic Competency Test (BCT) for Junior High School Students and the Joint College Entrance Examination (JCEE). These scores were used as references to determine whether students should be dropped due to low English proficiency; no student was dropped on these grounds.

For the writing task, the instructor transcribed all the writing samples into text files. The files were then searched with the AntConc concordancer (Anthony 2005) for [listen*] and [hear*].15 Six of the 39 students did not use either verb, mainly because they skipped the boy’s action in the third picture. They may have thought that this picture carried less important information than the other pictures. Ten students used both listen (to) and hear, while the others used only one of the two verbs. Two examples are given in (11):16

(11) (a) Listen! I think I heard something. (S006)
(b) Therefore, he decided to search for it by listening to the frog’s sounds…. Suddenly, he heard some sounds from the timber on his left side. (S026)
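The wildcard search and the manual cleanup described above can be sketched in a few lines. AntConc itself is a GUI concordancer, so the following regex-based approximation (and its sample sentence) is only an illustrative assumption about the workflow, not the study's actual tool chain:

```python
import re

def find_verb_tokens(text):
    """Return tokens matching the wildcard patterns [listen*] and [hear*],
    minus the irrelevant forms that were removed by hand."""
    hits = re.findall(r"\b(?:listen|hear)\w*\b", text.lower())
    # Cleanup reported in footnote 15: drop 'heart' (wildcard noise)
    # and noun forms such as 'listener', keeping only verb forms.
    excluded = {"heart", "hearts", "listener", "listeners"}
    return [h for h in hits if h not in excluded]

sample = "Listen! I think I heard something near his heart. A listener heard it too."
print(find_verb_tokens(sample))  # ['listen', 'heard', 'heard']
```

The set of excluded forms would, in practice, be built by inspecting the concordance lines rather than fixed in advance.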

In total, 15 instances (28.8%) of the verb listen (to) and 37 instances (71.2%) of the verb hear were found. Therefore, given the picture of the boy putting his hand near his ear, students produced more instances of hear than of listen (to). Uses of both verbs are documented in (12):

(12) (a) The little boy puts his hand near his ear to hope to listen more clearly. (S008)
(b) Tom was listening to the sounds the frog produced. (S026)
(c) They heard a voice of the frog. (S009)
(d) Suddenly, he heard some voices. (S015)

The examples in (12) match those from the above analysis regarding the unintended meaning of the verb hear, which indicates that students were familiar with these two verbs. The higher percentage of the verb hear was, however, predictable because, as

15 See footnote 8 for the use of asterisks. As the asterisk is a wildcard, the search might return irrelevant results such as the form heart. Irrelevant results were removed manually. Noun forms such as listener were also excluded from our results because only the verbs were included for further analysis.

16 Each text was assigned a number (e.g., “S006”) as all texts were saved anonymously. As the texts were written by students, they may contain some language errors.



shown in the previous analyses, the verb hear is more suitable for contexts expressing unintendedness, and the picture elicited a higher number of unintended uses. Still, 28.8% of the students preferred the intended use of listen (to) over hear, as in (12a) and (12b), both of which describe purposive actions. Based on this activity, we thus identified the uses of listen (to) and hear that students preferred when prompted by a picture. Most students chose hear to heighten the suddenness of events in the story.

For the verb listen (to), all of the instances (except one imperative) were intransitive. Hear, on the other hand, collocated with only three types of objects in the 37 instances, namely noise (3; 8.10%), sound (22; 59.46%), and voice (12; 32.43%). Among these, only eight instances (21.60%) were used with adjectives (slight, noisy, strange, delightful, and familiar). We had hoped that, at the undergraduate level, students would provide more sophisticated combinations when using these two verbs than they actually did. To compare the results from the three resources, the distributions of the various meanings are presented in Table 6.

To summarize the analysis of the writing task, most of the students were able to distinguish the near-synonym pair listen (to) and hear and to use each of them in appropriate contexts when guided. They used the verb listen (to) to emphasize that something was paid close attention to and the verb hear to describe someone perceiving a sound, although the addition of adverbs may have made the unintentionality more subtle.
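The object-collocate percentages reported above can be reproduced from the raw counts. The counts are the study's reported figures for hear (n = 37); the code below merely redoes the percentage arithmetic:

```python
from collections import Counter

# Reported object collocates of 'hear' in the 37 student instances.
objects = Counter({"sound": 22, "voice": 12, "noise": 3})
total = sum(objects.values())  # 37

for obj, n in objects.most_common():
    print(f"{obj}: {n} ({n / total:.2%})")
# sound: 22 (59.46%)
# voice: 12 (32.43%)
# noise: 3 (8.11%)
```

Note that 3/37 rounds to 8.11% here, versus the 8.10% printed in the text; the difference is only in the rounding convention.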

4 Discussion and Conclusion

In this paper, corpus data were used to examine the verbs listen (to) and hear in terms of their similarities and differences, as well as their semantic and syntactic distributional patterns. Similar to listen (to), the verb hear can also be used to express that something is being paid attention to. However, such use was not frequent in the native-speaker data, as native English speakers generally use the verb hear to refer to the act of perceiving a given sound via the auditory sense.

The linguistic analysis carried out in this study has the potential to raise the awareness of English teachers, whose job generally comprises the explanation of different word meanings to students. As Hunston & Feng (2002: 3) noted, teachers tend to teach language based on their own intuition, without providing a better explanation of why a certain phrase is more appropriate in one context than in another. Alternatively, by using a corpus, teachers can help learners gain access to a great amount of data, which can be processed and presented by “showing


Listen (to)

| WordNet 2.1 Senses | WordNet Freq. (%) | BNC Freq. (%) | Student Writing Freq. (%) |
| 1 Hear with intention. (e.g., Listen to the sound of this cello.) | 60 (61.2%) | 115 (23.0%) | 14 (93.3%) |
| 2 Listen and pay attention. (e.g., Listen to your father.) | 34 (34.7%) | 321 (64.2%) | 1 (6.7%) |
| 3 Pay close attention to. (e.g., Listen to the advice of the old man.) | 4 (4.08%) | 48 (9.6%) | N/A |
| Others | N/A | 16 (3.2%) | N/A |
| Total | 98 (100.0%) | 500 (100.0%) | 15 (100.0%) |

Hear

| WordNet 2.1 Senses | WordNet Freq. (%) | BNC Freq. (%) | Student Writing Freq. (%) |
| 1 Perceive sound via the auditory sense. | 275 (77.5%) | 156 (31.2%) | 37 (100.0%) |
| 2 Get to know or become aware of, usually accidentally. (e.g., I heard that you have been promoted.) | 60 (16.9%) | 278 (55.6%) | N/A |
| 3 Hear evidence by judicial process. (e.g., The jury had heard all the evidence.) | 12 (3.4%) | 18 (3.6%) | N/A |
| 4 Receive a communication from someone. (e.g., We heard nothing from our son for five years.) | 8 (2.3%) | 14 (2.8%) | N/A |
| 5 Listen and pay attention. (e.g., We must hear the expert before we make a decision.) | N/A | 28 (5.6%) | N/A |
| Total | 355 (100%) | 500 (100%) | 37 (100%) |

Table 6: Distribution of Senses in Data from WordNet, BNC, and the Writing Task


frequency, phraseology, and collocation” (Hunston & Feng 2002: 3). Our results can provide students with information about the distribution of lexical items (i.e., which senses of a word - like the verbs listen (to) and hear - are most frequently used, or which verb forms are highest in percentage). This information will be valuable to language teachers because it describes the linguistic behavior of the words compared.

Acknowledgements

This study was conducted under research grants from the Ministry of Science and Technology, Taiwan: 104-2420-H-004-034-MY2 and 106-2410-H-004-109-MY2. The author would like to thank the anonymous reviewers and the editor of JLLT for their comments on a previous version of this article. Tzu-Yun Tseng's help with a previous version is also appreciated.

References

Ahrens, K., H. Huang & Y. H. Chuang (2003). Sense and Meaning Facets in Verbal Semantics: A MARVS Perspective. In: Language and Linguistics 4 (2003), 468.
Anthony, L. (2005). AntConc: Design and Development of a Freeware Corpus Analysis Toolkit for the Technical Writing Classroom. In: Professional Communication Conference, 2005. IPCC 2005. Proceedings. International, IEEE, 729-737.
Baker, C. F., C. J. Fillmore & J. B. Lowe (1998). The Berkeley FrameNet Project. In: Proceedings of the 17th International Conference on Computational Linguistics - Vol. 1. Association for Computational Linguistics, 86-90.
Battistella, E. L. (1990). Markedness: The Evaluative Superstructure of Language. Albany, NY: SUNY Press.
Biber, D., S. Conrad & R. Reppen (1998). Corpus Linguistics. Cambridge: Cambridge University Press.
Bolinger, D. L. M. (1977). Meaning and Form (Vol. 1). London: Longman.
Chief, L. C. (2000). What Can Near Synonyms Tell Us? In: Computational Linguistics and Chinese Language Processing 5 (2000) 1, 47-60.
Chung, S.-F. & K. Ahrens (2008). MARVS Revisited: Incorporating Sense Distribution and Mutual Information into Near-Synonym Analyses. In: Language and Linguistics: Lexicon, Grammar and Natural Language Processing 9 (2008) 2, 415-434.
Cruse, D. A. (1986). Lexical Semantics. Cambridge: Cambridge University Press.
Ellis, R. & G. P. Barkhuizen (2005). Analysing Learner Language. Oxford: Oxford University Press.
Fellbaum, C. (Ed.) (1998). WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press.
Hoey, M. (2000). A World Beyond Collocation: New Perspectives on Vocabulary Teaching. In: Lewis, M. (Ed.) (2000). Teaching Collocations: Further Developments in the Lexical Approach. Boston: Heinle, 224-245.
Housen, A. (2002). A Corpus-based Study of the L2-acquisition of the English Verb System. In: Computer Learner Corpora, Second Language Acquisition and Foreign Language Teaching 6 (2002), 2002-2077.
Hunston, S. & Z. Feng (2002). Corpora in Applied Linguistics. Cambridge: Cambridge University Press.
Kennedy, G. (1998). An Introduction to Corpus Linguistics. London: Longman.
Kilgarriff, A. & D. Tugwell (2001). Word Sketch: Extraction and Display of Significant Collocations for Lexicography. In: Proceedings of the Workshop on Collocation: Computational Extraction, Analysis and Exploitation. 39th Annual Meeting of the Association for Computational Linguistics (ACL). Toulouse, France.
Landes, S., C. Leacock & R. I. Tengi (1998). Building Semantic Concordances. In: WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press, 199-216.
Liu, D. (2010). Is it a Chief, Main, Major, Primary, or Principal Concern? A Corpus-based Behavioral Profile Study of the Near-synonyms. In: International Journal of Corpus Linguistics 15 (2010) 1, 56-87.
Lyons, J. (1968). Introduction to Theoretical Linguistics. London: Cambridge University Press.
Mayer, M. et al. (1969). Frog, Where Are You? New York: Dial Press.
McEnery, A. & Z. Xiao (2006). Collocation, Semantic Prosody and Near Synonymy: A Cross-linguistic Perspective. In: Applied Linguistics 27 (2006) 1, 103-199.
Miller, G. A. et al. (1990). Introduction to WordNet: An On-line Lexical Database. In: International Journal of Lexicography 3 (1990) 4, 235-244.
Partington, A. (1998). Patterns and Meanings: Using Corpora for English Language Research and Teaching. Philadelphia, PA: John Benjamins.
Redman, S. (2002). English Vocabulary in Use: Pre-intermediate and Intermediate. London: Cambridge University Press.
Taylor, J. R. (2003). Near Synonyms as Co-extensive Categories: ‘High’ and ‘Tall’ Revisited. In: Language Sciences 25 (2003) 3, 263-284.



Author:
Siaw-Fong Chung, Ph.D.
Associate Professor
Department of English
National Chengchi University
No. 64, Sec. 2, ZhiNan Road
Taipei City 11605, Taiwan


JLLT Volume 8 (2017) Issue 1


A Frequency Analysis of Vocabulary Words in University Textbooks of French

Jennifer Wagner (Clio (MI), USA)

Abstract (English)
Frequency as a principle for vocabulary selection is now commonly used in the creation of English textbooks; however, it is unclear whether frequency has played a role in the creation of French textbooks. In this study, the vocabulary of twelve first-year and six second-year university textbooks published in the United States was compared to a frequency dictionary of contemporary French. The analysis yielded how many high frequency words were found in the textbooks, in addition to which high frequency words were excluded from the textbooks and which low frequency words were included in the textbooks. The results indicate that the textbooks did not provide enough high frequency words needed for basic communication in French.

Keywords: Language materials design, vocabulary acquisition, textbooks, language pedagogy

Abstract (français)
La fréquence comme principe pour la sélection du vocabulaire est utilisée dans la création des manuels scolaires de langue anglaise. Cependant, il reste à savoir si la fréquence a joué un rôle dans la création des manuels scolaires de langue française. Dans cette étude, le vocabulaire de douze manuels scolaires de première année et de six manuels scolaires de deuxième année publiés aux États-Unis a été comparé à un dictionnaire de fréquence de français contemporain. L’analyse donne le nombre de mots de haute fréquence qui se trouvent dans les manuels scolaires, en plus des mots de haute fréquence qui ne s’y trouvent pas ainsi que les mots de basse fréquence qui s’y trouvent. Les résultats démontrent que les manuels scolaires n’offrent pas assez de mots de haute fréquence nécessaires pour la communication de base en français.

Mots-clés: Conception des manuels de langue, acquisition du vocabulaire, manuels scolaires, didactique des langues

1 Introduction

1.1 Frequency in Language Pedagogy

Research on vocabulary in language learning and teaching often supports the notion that frequency is a useful guide for vocabulary selection and sequencing in materials design (Nation & Macalister 2010). Schmitt (2008) and Horst (2013) also stress the significance of the frequency status of words in vocabulary learning, arguing for restructuring language pedagogy to include the most frequent


words at the beginning stages of study. Frequency is not the only principle to be considered for vocabulary selection, but it is a crucial aspect with which to begin the selection process.

With the availability of large representative corpora and tools to analyze the texts contained within them, obtaining frequency data on individual words has become increasingly easy and fast. This use of corpora to determine frequency has the additional benefit of being more objective than a native speaker’s or teacher’s intuition, which can be unreliable (Biber et al. 1998, Hunston 2002) and inadequate for determining frequency when used alone (McCrostie 2007).

The use of corpora has also facilitated research on the number of frequent words needed to comprehend the majority of a written or spoken text. It has been determined that the most frequent 2,000 words account for 80% of language use (Meara 1995, Nation 2001), and many researchers advocate the explicit teaching of these high frequency words at the beginning level of language study (Nation 2001, Nation & Meara 2002, Schmitt 2000). Additionally, Cobb & Horst (2004) concluded that the 2,000 most frequent word families of French cover about 85% of an average text, while Cobb (2014), more recently, adapted corpus-based tools developed for analysing English to French and found that the top 2,000 words provide 92% lexical coverage. These higher percentages for French (as compared to those for English) indicate that learning the 2,000 most frequent words is even more useful for learners of French than for learners of English.

In environments in which there is minimal exposure to the target language outside of the classroom, as is the case for most students learning French in the United States, the textbook often serves as the sole source of input for the vocabulary that students learn. The textbook also creates the syllabus for the entire language course and determines what is deliberately taught in the classroom (Byrnes 1988, Richards 1993). Therefore, it is useful to establish how many and which high frequency words actually appear in textbooks, given the necessity of this vocabulary for the beginning learners whom first-year or introductory textbooks are intended to serve.

1.2 Frequency Lists of French Vocabulary

In order to determine the concrete number of high frequency words included in French textbooks, a frequency list of French words is necessary for the comparison. Using corpora rather than intuition to create frequency lists of English words has become rather common over the past two decades, yet corpus-based approaches to frequency in other languages, including French, remained scarce until relatively recently. Although a few textbooks which claim to provide the most useful or essential words in French are currently available (6000+ Essential French Words (2004), Kurgebov (2006), McCoy (2011)), none of the

JLLT Volume 8 (2017) Issue 1

authors provide any justification of how the usefulness of these words was determined. In contrast, A Frequency Dictionary of French: Core Vocabulary for Learners (Lonsdale & Le Bras 2009), which features the top 5,000 most frequently used words of French, was created from a corpus of 23 million words of spoken and written French covering various genres and registers. The corpus contains 11.5 million words from interviews, conversations, theatre dialogues, parliamentary debates, and film subtitles in the spoken portion, as well as 11.5 million words from newspaper and magazine articles, fiction and non-fiction literature, newsletters, technical reports, and user manuals in the written portion. In addition, none of the texts date from before 1950, so as to provide a modern representation of the French language. Although other corpora of French may be larger, such as ARTFL-FRANTEXT (150 million words of prose and poetry from the 17th to 20th centuries) or EUROPARL (54 million words of the proceedings of the European Parliament), these corpora are restricted to either spoken or written language and consist of fewer genres and registers. The corpus from which A Frequency Dictionary of French was created is the largest corpus to include both spoken and written language in equal amounts across a variety of genres and registers.1 For these reasons, it is a useful tool with which to examine the vocabulary coverage of French textbooks in a more objective manner by comparing word lists.

1.3 Rationale for the Current Study

The usefulness of the language provided in French textbooks has been the subject of several studies, but this research has largely focused on certain grammatical features present in the textbooks rather than the overall vocabulary (Etienne & Sax 2009, Fonseca-Greber & Waugh 2003, Herschensohn 1988, O'Connor Di Vito 1991, 1992). While there have been recent studies on high frequency vocabulary in English textbooks (Eldridge & Neufeld 2009, Matsuoka & Hirsh 2010, O’Loughlin 2012), as well as in Spanish (Davies & Face 2006, Godev 2009) and German textbooks (Lipinski 2010), no analysis has been carried out on the vocabulary coverage of French textbooks. Furthermore, no study has examined the combined vocabulary coverage of first-year textbooks used in conjunction with second-year textbooks in order to determine the amount of vocabulary offered over a two-year course of French at the tertiary level.2 Therefore, the present study aims to address this gap by providing an analysis of the vocabulary found in first- and second-year French textbooks designed for university study.

1 A larger corpus, the Corpus de référence du français contemporain, will be available in 2018, and it will include 310 million words of French from France, including both spoken and written sources from 1945 to 2014 (Siepmann et al. 2017).

2 Two years, or four semesters, of French language courses are generally required before students may enrol in upper-level courses at American universities.


2 Material and Methods

The present study is a quantitative analysis of the vocabulary of university textbooks of French, compared to the top 2,000 entries in A Frequency Dictionary of French: Core Vocabulary for Learners (Lonsdale & Le Bras 2009), since high frequency vocabulary comprises roughly 2,000 words (Meara 1995, Nation 2001) and can account for 85 to 92% of the lexical coverage of a French text (Cobb 2014, Cobb & Horst 2004). The methodology is similar to that of Davies & Face (2006) and Godev (2009), who used A Frequency Dictionary of Spanish: Core Vocabulary for Learners (Davies 2006) in their comparisons to vocabulary lists of first-year Spanish textbooks. Additionally, the current study includes an analysis of the vocabulary offered by first-year and second-year textbooks used together over a two-year course rather than only individually for each year level.

For the purposes of this study, high frequency vocabulary refers to entries 1 to 2,000 of A Frequency Dictionary of French, and low frequency vocabulary refers to words that are not included in the top 2,000 entries of the frequency dictionary.

2.1 Textbook Selection

The textbooks selected for this study include twelve first-year and six second-year textbooks published in the United States between 2009 and 2013. The first-year textbooks are introductory ones, designed for the beginning level, and assume no prior knowledge of French. The second-year textbooks analysed are designed for the intermediate level and assume some prior knowledge of French, i.e. the vocabulary and grammar found in first-year textbooks. Second-year textbooks review some of the materials from the first-year textbooks while also providing new materials. These first- and second-year textbooks generally form the curriculum for the first two years of French studies at American universities, with the intention of preparing students for more advanced conversation, composition and literature courses in the third and fourth years of a Bachelor’s degree.

2.2 Vocabulary

The active vocabulary of each textbook was included for analysis. According to Davies & Face, “active vocabulary is the vocabulary that students are expected to learn and be able to use, and is generally the vocabulary included in the end of chapter vocabulary lists” (Davies & Face 2006: 135), as well as the words that are the focus of the vocabulary activities within the respective chapter. Passive vocabulary, on the other hand, most often appears in reading passages, and students are not required to learn these words for productive purposes.

The end-of-chapter vocabulary lists were lemmatised according to the same specifications used by Lonsdale & Le Bras in the creation of A Frequency Dictionary of

JLLT Volume 8 (2017) Issue 1

French. Base forms of each word were found by reducing plural nouns to the singular, verb conjugations to the infinitive, and inflected adjectives to the masculine singular form. Any repetitions in the vocabulary lists for each year level were deleted in order to find the overall number of types rather than tokens in each textbook. Repetitions between first- and second-year textbooks were not deleted, in order to determine how much of the vocabulary found in first-year textbooks was repeated in the second-year textbooks analysed.

2.3 Analysis

To determine the number of frequency dictionary entries found in the vocabulary lists of each textbook, each vocabulary list was compared to entries 1 to 2,000 of the frequency dictionary, using the text comparison tool Text Lex Compare, which is part of Cobb’s Compleat Lexical Tutor website (n.d.). These comparisons answered the question of how many high frequency words were included in the first-year textbooks, the second-year textbooks, and first-year plus second-year textbooks used together as a two-year course of French. Additionally, the comparisons yielded which high frequency words were excluded from the textbooks and which low frequency words were included in them. High frequency words that were excluded from the textbooks were sorted according to their frequency rank as indicated in the dictionary, for both first- and second-year textbooks. Low frequency words included in the textbooks were sorted according to which words appeared in all twelve of the first-year textbooks and all six of the second-year textbooks.
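A minimal sketch of this comparison pipeline follows. The tiny lemma table and the word lists are invented for demonstration only; the actual study lemmatised full end-of-chapter lists and ran the comparison with Text Lex Compare:

```python
# Hypothetical lemma table: plural -> singular, conjugation -> infinitive,
# inflected adjective -> masculine singular.
LEMMAS = {"maisons": "maison", "parlons": "parler", "grandes": "grand"}

def lemmatise(words):
    return [LEMMAS.get(w, w) for w in words]

def coverage(textbook_words, top_2000):
    types = set(lemmatise(textbook_words))  # count types, not tokens
    high_freq = types & top_2000            # high frequency words included
    excluded = top_2000 - types             # high frequency words excluded
    low_freq = types - top_2000             # low frequency words included
    return high_freq, excluded, low_freq

# Invented stand-ins for the top-2,000 dictionary entries and a textbook list.
top_2000 = {"maison", "parler", "grand", "aller", "an"}
textbook = ["maisons", "parler", "parlons", "grandes", "fromage"]

hi, missing, lo = coverage(textbook, top_2000)
print(sorted(hi))       # ['grand', 'maison', 'parler']
print(sorted(missing))  # ['aller', 'an']
print(sorted(lo))       # ['fromage']
```

Set intersection and difference give exactly the three quantities reported in the study: high frequency words included, high frequency words excluded, and low frequency words included.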

3 Results

3.1 Number of Total Words and High Frequency Words in Textbooks

Table 1 shows the total number of words included in each first- and second-year textbook, as well as the number and percentage of high frequency words in each. The total number of words ranged from 938 to 2,363 for first-year textbooks and from 633 to 1,818 for second-year textbooks. On average, first-year textbooks included 1,486 words, while second-year textbooks included 1,077 words. The number of high frequency words found in first-year textbooks ranged from 515 to 947, with an average of 693; for second-year textbooks, it ranged from 267 to 646, with an average of 426. Therefore, only 48% of first-year textbook vocabulary and 41% of second-year textbook vocabulary consist of high frequency words. For both year levels, the rate of inclusion of high frequency words is less than half of what researchers recommend. This also means that more than half of the words included in the textbooks are low frequency words, which Nation & Macalister (2010: 41) argue “should be dealt with only when the high-frequency words have been sufficiently learned”:

First-Year Textbooks

| Textbook | Total Words | High Frequency Words | High Frequency Words (in %) |
| Entre Amis | 1173 | 614 | 52.3% |
| Voilà | 938 | 553 | 59.0% |
| Mais Oui | 1352 | 705 | 52.1% |
| Liaisons | 1892 | 756 | 40.0% |
| Chez Nous | 1690 | 735 | 43.5% |
| Deux Mondes | 2363 | 947 | 40.1% |
| Vis-à-Vis | 1379 | 717 | 52.0% |
| En Avant | 1443 | 662 | 45.9% |
| Français-Monde | 1089 | 515 | 47.3% |
| Contacts | 1405 | 733 | 52.2% |
| Espaces | 1762 | 832 | 47.2% |
| À Vous | 1347 | 555 | 41.2% |
| Averages | 1486 | 693 | 47.7% |

Second-Year Textbooks

| Textbook | Total Words | High Frequency Words | High Frequency Words (in %) |
| Quant à moi | 1818 | 646 | 35.5% |
| Séquences | 1204 | 462 | 38.4% |
| Intrigue | 904 | 267 | 29.5% |
| Interaction | 733 | 303 | 41.3% |
| Bravo | 1167 | 533 | 45.7% |
| Personnages | 633 | 342 | 54.0% |
| Averages | 1077 | 426 | 40.7% |

Table 1: Total and High Frequency Vocabulary Coverage in First-Year and Second-Year Textbooks



The figures in Table 1 indicate that, with the exception of Deux Mondes, no first- or second-year textbook alone offers 2,000 words. Therefore, it is important to consider the vocabulary coverage of first- and second-year textbooks used together over a two-year course rather than in isolation. Comparing the vocabulary in first-year textbooks to the vocabulary in second-year textbooks reveals how many words are shared between the two levels, i.e. how many words are repeated in the second-year textbooks. However, first- and second-year textbooks are created independently of each other, meaning that none of the second-year textbooks is designed as a companion to a specific first-year textbook. Therefore, first-year and second-year textbooks were paired randomly for the comparison. Table 2 shows the total number of words shared between the two textbooks of each pair, as well as the high frequency words shared between them. Second-year textbooks repeated an average of 414 words from the total vocabulary of first-year textbooks, and 229 words from the high frequency vocabulary of first-year textbooks.

| First- and Second-Year Textbooks | Total Words Shared | High-Frequency Words Shared |
| Entre Amis + Bravo | 375 | 256 |
| Mais Oui + Interaction | 256 | 156 |
| Liaisons + Quant à moi | 730 | 384 |
| Chez Nous + Intrigue | 369 | 128 |
| En Avant + Séquences | 343 | 222 |
| Average | 414 | 229 |

Table 2: Number of Total Words and High-Frequency Words Shared between First- and Second-Year Textbooks

If the average number of shared high frequency words between first- and secondyear textbooks (229) is subtracted from the average number of high frequency words offered by the second-year textbooks (426; see Table 1), the result is 197 high frequency words. This result reveals that second-year textbooks repeat slightly more high frequency words from first-year textbooks than they introduce new high frequency words that were not offered in the first-year textbooks. Adding the number of words offered by each textbook and subtracting the number of shared words between the two levels determines the number of total words as well as high frequency words offered by a two-year course of French. For example, supposing that the first-year textbook Entre Amis and the second-year textbook Bravo are used for a two-year course of French, the total number of unique words offered is 1,965, which was found by subtracting the number of shared words from the sum of the total number of words of each textbook (1,173 + 1,167 – 375):
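The count above follows simple inclusion-exclusion: the unique vocabulary of a two-year course is the first-year total plus the second-year total, minus the words shared between the two textbooks. A small sketch with the reported Entre Amis + Bravo figures:

```python
def unique_words(year1_total, year2_total, shared):
    """Unique words over a two-year course, by inclusion-exclusion."""
    return year1_total + year2_total - shared

# Reported figures for Entre Amis (1,173) + Bravo (1,167), sharing 375 words.
print(unique_words(1173, 1167, 375))  # 1965

# With actual word sets, the union computes the same quantity
# (toy sets for illustration only):
year1 = {"maison", "parler", "grand"}
year2 = {"parler", "aller"}
assert len(year1 | year2) == len(year1) + len(year2) - len(year1 & year2)
```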


| First- and Second-Year Textbooks | Total Words | High Frequency Words | High Frequency Words (in %) |
| Entre Amis + Bravo | 1965 | 891 | 45.3% |
| Mais Oui + Interaction | 1829 | 852 | 46.6% |
| Liaisons + Quant à moi | 2980 | 1018 | 34.2% |
| Chez Nous + Intrigue | 2225 | 874 | 39.3% |
| En Avant + Séquences | 2305 | 903 | 39.2% |
| Averages | 2261 | 908 | 41.0% |

Table 3: Total and High Frequency Vocabulary Coverage over a Two-Year Course of French

As shown in Table 3, the average number of total words offered by a two-year course of French is 2,261. This shows that a two-year course of French can indeed provide students with 2,000 words. Nevertheless, the average number of high frequency words offered by a two-year course of French is only 908, with about 700 high frequency words offered in the first year and an additional 200 high frequency words offered in the second year. About 200 high frequency words found in the first-year textbooks are also repeated in the second-year textbooks. The second-year textbooks neither compensate for the lack of high frequency words in first-year textbooks nor provide enough of the high frequency words missing from the first-year textbooks for students to advance to the next frequency band, as they offer even fewer high frequency words than the first-year textbooks. After two years of study intended to prepare students for the extensive reading required in upper-level courses, students have been exposed to less than half of the high frequency words. This is at odds with Nation & Macalister (2010), who argue that high frequency words should be given the most attention in language courses before low frequency words are dealt with.

The comparisons between textbook vocabulary and the frequency dictionary yielded similar results for both year levels. Neither first- nor second-year textbooks used individually, nor a combination of first- and second-year textbooks used over a two-year course, offer students enough of the high frequency vocabulary needed for basic communication. These results also reveal that frequency was not used as the guiding principle for vocabulary selection in these textbooks. In order to determine how the vocabulary was selected, the next section focuses on the high frequency words which were excluded from the textbooks as well as the low frequency words which were included in them.

JLLT Volume 8 (2017) Issue 1

3.2 High Frequency Words Excluded from Textbooks

A large number of high frequency words were excluded from all eighteen textbooks: 516 high frequency words were absent from all of the first-year textbooks, and 755 from all of the second-year textbooks. The top 25 of these high frequency words for each year level are presented in Table 4, with their frequency rank from the dictionary:

First-Year Textbooks

Abs. Freq.   Entry         Glossary
98           ainsi         thus
142          tel           such
166          soit          either…or
223          situation     situation
246          également     also
262          mesure        measure
275          quant         as for
315          intérêt       interest
316          mener         to lead
318          détail        detail
319          appartenir    to belong to
324          concerner     to concern
346          atteindre     to reach
365          présence      presence
378          peuple        people
383          position      position
388          effort        effort
391          tirer         to pull
395          juger         to judge
404          afin          in order to
405          peine         effort, trouble
406          malgré        in spite of
411          lors          at the time of
414          voix          voice
416          base          base

Second-Year Textbooks

Abs. Freq.   Entry          Glossary
30           mais           but
31           nous           we
33           ou             or
34           si             if, yes
38           elle           she, her
44           aussi          also
54           cela           that, it
73           notre          our
74           dont           whose/which
76           an             year
79           monsieur       mister
89           pendant        during
96           depuis         since
116          penser         to think
130          seulement      only
139          commencer      to begin
145          donc           therefore
147          général        general
148          moment         moment
160          gouvernement   government
161          eux            them
162          devenir        to become
166          soit           either…or
171          nom            name
175          possible       possible

Table 4: Top 25 High-Frequency Words Excluded from First- and Second-Year Textbooks



The words in Table 4 indicate that the textbooks tend to exclude high frequency vocabulary that represents abstract concepts, such as situation, intérêt, and peine. Many of these words are also complete or partial cognates of their English translations, which could explain why the authors did not include them. In addition, second-year textbooks exclude some high frequency words that the authors presumably assumed had already been covered in first-year textbooks and did not need to be reviewed. This explains why extremely frequent words such as subject pronouns (nous, elle) and coordinating conjunctions (mais, ou) are not found in second-year textbooks: it is doubtful that a student could pass a second-year French class without already knowing these frequent function words.

3.3 Low Frequency Words Included in Textbooks

In addition to the high frequency words excluded from the textbooks, it is important to look at the low frequency words that were included in them in order to determine how vocabulary was selected by the authors. Fifty-seven low frequency words were found in all twelve first-year textbooks. These words are presented in alphabetical order with their English translations in Table 5:

French          English Glossary
adorer          to adore
s’amuser        to have fun
bain            bath
beurre          butter
chaise          chair
chaussure       shoe
chemise         shirt
cheveu          hair
costume         suit
se coucher      to go to bed
cousin          cousin
cravate         tie
cuisine         kitchen
dent            tooth
se dépêcher     to hurry up
détester        to hate
dîner           dinner
étage           floor
glace           ice, ice cream
gorge           throat
grand-mère      grandmother
grand-père      grandfather
ingénieur       engineer
s’habiller      to get dressed
intelligent     intelligent
jambe           leg
jambon          ham
jean            jeans
jupe            skirt
se laver        to get washed
mademoiselle    miss
manteau         coat
marron          chestnut, brown
musée           museum
neiger          to snow
nez             nose
oncle           uncle
orange          orange
ordinateur      computer
pantalon        pants
pleuvoir        to rain
portable        laptop, mobile
poulet          chicken
se réveiller    to wake up
rose            rose, pink
salut           hi, bye
séjour          stay
ski             ski
soif            thirst
tante           aunt
tarte           pie
tasse           cup
tennis          tennis
vélo            bike
verre           glass
vêtement        clothing
voyager         to travel

Table 5: Low Frequency Words Included in All Twelve First-Year Textbooks

The fact that these low frequency words were included in all of the first-year textbooks indicates that vocabulary selection was not actually based on frequency. Nor was it based on whether a word has an exact or close cognate in English, as there are many cognates both among the high frequency words excluded from the textbooks (as shown in Table 4) and among the low frequency words included in them, such as cousin (cousin), dîner (dinner), jean (jeans), tennis (tennis), and ski (ski). In contrast to the excluded high frequency words, which represented abstract concepts, these low frequency words are examples of concrete concepts for which an image or picture can be included in the textbooks to illustrate their meanings. Yet, it was not the quality of being concrete or abstract that determined whether a word was included in or excluded from the textbooks. Similar to what Davies & Face (2006: 142) found in their vocabulary analysis of Spanish textbooks, the topics chosen for each chapter largely determined the vocabulary of first-year textbooks. Low frequency words representing concrete concepts tended to be included because they fit very well into the chapter topics, while high frequency words representing abstract concepts were excluded because they do not. The low frequency words can be sorted into distinct topics that are present in all of the first-year textbooks. The table of contents of En Avant, for example, shows the topics which are common to all of the first-year textbooks, such as family, food, travel, clothing, house, and parts of the body. The textbook authors, therefore, provide words which can be categorized into these topics, regardless of the frequency status of these words:



Chapter   Topic
1         Alphabet, numbers, days and months
2         Personality and appearance
3         Daily activities
4         Family members and pets
5         Food stores and food items
6         Clothing and accessories
7         Entertainment and cultural events
8         Parts of the body
9         One's residence
10        Holidays and other celebrations
11        Life's major milestones
12        City living
13        Vacations and travel
14        A country's history and language(s)
15        France's social and environmental issues
16        The arts

Table 6: Table of Contents of the Textbook En Avant

Concerning second-year textbooks, only two low frequency words were found in all six textbooks: gratuit and loyer. In the majority (four out of six) of second-year textbooks, there were 15 low frequency words: animé, ça, canne, chèvre, colon, documentaire, épouvante, fac, intrigue, poire, poivron, salade, sous-titre, western, yaourt. Six of these words refer to food, such as canne and chèvre, which were often found in expressions such as canne à sucre and fromage de chèvre. Six of these words refer to films or genres of film, e.g. film d’épouvante. These categories are not surprising, considering that many chapters of the second-year textbooks are organised around different regions of the French-speaking world instead of the chapter topics found in first-year textbooks. The second-year themes tend to focus on the history of French colonisation in these Francophone areas as well as the food that is important or unique to each of them, such as sugar cane in Guadeloupe and Martinique. Using films to discover French and Francophone culture is also a common strategy of second-year textbooks, one that is not found as often in first-year textbooks. The low frequency words found in second-year textbooks provide further support for the notion that frequency was not used to determine which words to include in the textbooks. Once again, the overall theme of the chapters - rather than frequency - determined which words were included.

4 Discussion

The frequency analysis showed that 48% of the vocabulary in first-year textbooks consisted of high frequency words, compared to 41% in second-year textbooks. However, once the vocabulary of first- and second-year textbooks was compared, it was revealed that roughly half of the high frequency words in second-year textbooks had already been introduced in the first-year textbooks. Therefore, the number of new high frequency words offered by the second-year textbooks is considerably lower than for first-year textbooks. These results show that neither year level, used alone or together over a two-year course, offered enough high frequency words to attain the goal of 2,000 words. The low number of high frequency words in first- and second-year textbooks indicates that textbook authors did not take frequency into account when choosing vocabulary words, even though Nation & Macalister (2010) stress frequency as the most important criterion for vocabulary selection. Instead, the selection of vocabulary was driven by the respective chapter topics, even though several studies indicate that presenting related words within semantic sets actually hinders rather than facilitates vocabulary acquisition. Tinkham (1993) found that semantically related words were more difficult to remember than unrelated words for English speakers. Tinkham’s study was replicated for Japanese speakers by Waring (1997), whose findings indicated “that the related words took significantly more time to learn than did the unrelated words” (Waring 1997: 267). Further studies by Finkbeiner & Nicol (2003) and Erten & Tekin (2008) showed similar results with the use of pictures with and without first-language translations of the words. A more recent study by Bolger & Zapata (2011) took a different approach by presenting words in stories rather than in lists.
Even with the addition of context, their subjects “showed more difficulty in rejecting semantically related distractors” (Bolger & Zapata 2011: 633). The results of these studies suggest that presenting “words in semantic sets creates competition between items, which in turn increases difficulty during learning and during memory retrieval in language production” (Finkbeiner & Nicol 2003: 379). According to Nation (2000), the most frequent words within a semantic set should be presented first, and once those words have been learned by students, the rest of the words can be introduced. Moreover, he maintains that “even if frequency is used only as a very rough guide to the sequencing of vocabulary in a course, it would lead to the separation of many members” (Nation 2000: 8) of those semantic sets. Although using a given topic to guide vocabulary selection is not recommended, textbook authors do so because it seems logical to offer the complete lexical coverage of a certain topic at once (Nation 2000) and also because it is easier to design materials in this manner (Folse 2004). Walz takes a more extreme approach and insists that authors include certain words “simply because they exist and not because of any usefulness or frequency” (Walz 1986: 17). The goal of vocabulary selection in the textbooks analysed appears to be the inclusion of as many words as possible so as to complete the respective chapter topic, rather than taking the most frequent, and therefore most useful, French words into account – which does not match the goals of the textbooks as stated in their introductions. Many of the textbooks indicate that they offer a “communicative approach” or an “emphasis on communicative interactions” with a “functional / task-based syllabus.” The chapter topics integrate the functions, structures and vocabulary that seem directly related to each other, such as ordering at a café or restaurant, the partitive article, and types of food and drink (Chez Nous 2009: 305; Français-Monde 2010: 236) or describing daily routines, pronominal verbs, and parts of the body (Espaces 2011: 326; Vis-à-Vis 2011: 352). An example from the textbook Entre Amis illustrates the use of functional phrases. In a chapter based on clothing, the question Qu’est-ce que vous portez en cours ? and the response Je porte... are introduced with several examples of types of clothing that are to be placed into the response. Therefore, the vocabulary was selected according to the overall topic of the chapter in addition to how well the words “fill in the slots” of the functional phrases (Folse 2004: 24). Rather than high frequency vocabulary being the focus of the chapter, it was a combination of the thematic orientation of the textbook chapter and the communicative or functional phrases that actually determined the organization of the textbook.
Two textbooks explicitly addressed the issue of frequency by claiming that “students are exposed to high-frequency expressions” (En Avant 2012: xviii) and “high-frequency vocabulary is introduced” (Espaces 2011: xiii), though no explanation was given for how these were determined to be high frequency. Several textbooks claim that the language provided within the different chapters reflects the way French is actually used today (Voilà 2010: xii, Vis-à-Vis 2011: xvii) and that students will be exposed to “real-world language” (Mais Oui 2013: xvii), “authentic language” (Chez Nous 2009: xi, Deux Mondes 2009: xiii, Français-Monde 2010: xiv), and “natural language use” (Français-Monde 2010: xv). If the textbooks are indeed offering authentic French, it would be expected that they include the words that students are most likely to encounter in the real world. However, the results of this study suggest that frequency was not the main factor in vocabulary selection. The fact that similar results were found for all of the textbooks also attests to the intuitive nature of materials design (Tomlinson 2013) as well as the publishing companies’ encouragement to produce textbooks which are very similar to those already available on the market (Heilenman 1993). Furthermore, the results of this study are similar to what Davies & Face (2006), Godev (2009), and Lipinski (2010) found in their comparisons of Spanish and German textbook vocabulary with frequency dictionaries. Overall, the rationale for vocabulary selection among language textbooks appears to be classroom communication based on topics rather than real-world communication based on frequency.

5 Conclusions

The present vocabulary analysis investigated the extent to which textbooks excluded the most frequently used words of French, and also identified which low frequency words were included in the textbooks. As illustrated by the results, neither first- nor second-year textbooks, used alone or combined into a two-year course, offered adequate coverage of high frequency words. It was found that the textbook vocabulary focused on words belonging to the chapter topic instead of those that occur most frequently in the French language, which runs counter to current research on the teaching of vocabulary. More precisely, the demands of textbook design determined a choice of vocabulary that focuses on topics and functional phrases rather than frequency. Consequently, students lose time learning the low frequency words needed for the pedagogical tasks in the textbook rather than being guided to the high frequency words that they are more likely to encounter in the real world. This finding has several implications for materials designers, who should strive to use corpora instead of intuition when selecting vocabulary in order to meet the goals of the textbooks as expressed in their introductions. The results of this study also point to the need for teachers and students to supplement current textbooks with more high frequency vocabulary as determined by a representative corpus of French.

References

Primary Sources

First-Year Textbooks

Amon, E., J. A. Muyskens & A. C. Omaggio-Hadley (⁵2011). Vis-à-vis: Beginning French. New York: McGraw-Hill.

Anderson, B., P. Golato & S. A. Blatty (2012). En avant: Beginning French. New York: McGraw-Hill.

Anover, V. & T. A. Antes (²2012). À Vous!: The Global French Experience. Boston: Cengage-Heinle.

Ariew, R. & B. Dupuy (2010). Français-Monde: Connectez-vous à la francophonie. Upper Saddle River, NJ: Pearson Education.


Heilenman, K., I. Kaplan & C. Toussaint Tournier (⁶2010). Voilà! An Introduction to French. Boston: Thomson-Heinle.

Mitschke, C. & S. Tano (²2011). Espaces: Rendez-vous avec le monde francophone. Boston: Vista Higher Learning.

Oates, M. & L. Oukada (⁶2013). Entre Amis: An Interactive Approach. Boston: Houghton Mifflin.

Terrell, T. D. et al. (⁶2009). Deux mondes: A communicative approach. New York: McGraw-Hill.

Thompson, C. P. & E. M. Phillips (⁵2013). Mais Oui!: Introductory French and Francophone Culture. Boston: Houghton Mifflin.

Valdman, A., C. Pons & M. E. Scullen (⁴2009). Chez Nous: Branché sur le monde francophone. Upper Saddle River, NJ: Prentice Hall.

Valette, J.-P. & R. M. Valette (⁸2009). Contacts: Langue et culture françaises. Boston: Houghton Mifflin.

Wong, W. et al. (2013). Liaisons: An Introduction to French. Boston: Cengage-Heinle.

Second-Year Textbooks

Bissière, M. (²2013). Séquences: Intermediate French through Film. Boston: Cengage-Heinle.

Blood, E. & Y. Mobarek (³2011). Intrigue: langue, culture et mystère dans le monde francophone. Upper Saddle River, NJ: Pearson Education/Prentice Hall.

Bragger, J. & D. Rice (⁵2012). Quant à moi. Boston: Heinle & Heinle.

Muyskens, J. et al. (⁷2012). Bravo!. Boston: Thomson-Heinle.

Oates, M. & J. Dubois (⁴2010). Personnages: An Intermediate Course in French Language and Francophone Culture. Hoboken, NJ: John Wiley & Sons.

St. Onge, S. & R. St. Onge (⁸2012). Interaction: Langue et culture. Boston: Thomson-Heinle.

Secondary Sources

6000+ Essential French Words (2004). New York: Living Language.

Biber, D., S. Conrad & R. Reppen (1998). Corpus Linguistics: Investigating Language Structure and Use. Cambridge: Cambridge University Press.

Bolger, P. & G. Zapata (2011). Semantic Categories and Context in L2 Vocabulary Learning. In: Language Learning 61 (2011) 2, 614-646.

Byrnes, H. (1988). Whither Foreign Language Pedagogy: Reflections in Textbooks, Reflections on Textbooks. In: Die Unterrichtspraxis/Teaching German 21 (1988) 1, 29-36.

Cobb, T. (n.d.). Text Lex Compare v.3 [Computer Program]. (http://www.lextutor.ca/cgi-bin/tl_compare/; 10-01-2017).

Cobb, T. (2014). Adopting Corpus-based Tools from English to other Languages. Paper Presented at the AILA World Congress, Brisbane, Australia.

Cobb, T. & M. Horst (2004). Is there Room for an Academic Word List in French? In: Bogaards, P. & B. Laufer (Eds.) (2004). Vocabulary in a Second Language: Selection, Acquisition, and Testing. Amsterdam: John Benjamins, 15-38.

Davies, M. (2006). A Frequency Dictionary of Spanish: Core Vocabulary for Learners. New York: Routledge.

Davies, M. & T. L. Face (2006). Vocabulary Coverage in Spanish Textbooks: How Representative Is It? In: Sagarra, N. & A. J. Toribio (Eds.) (2006). Selected Proceedings of the 9th Hispanic Linguistics Symposium. Somerville, MA: Cascadilla Proceedings Project.

Eldridge, J. & S. Neufeld (2009). The Graded Reader Is Dead, Long Live the Electronic Reader. In: The Reading Matrix 9 (2009) 2, 224-244.

Erten, I. H. & M. Tekin (2008). Effects on Vocabulary Acquisition of Presenting New Words in Semantic Sets Versus Semantically Unrelated Sets. In: System 36 (2008), 407-422.

Etienne, C. & K. Sax (2009). Stylistic Variation in French: Bridging the Gap Between Research and Textbooks. In: Modern Language Journal 93 (2009) 4, 584-606.

Finkbeiner, M. & J. Nicol (2003). Semantic Category Effects in Second Language Word Learning. In: Applied Psycholinguistics 24 (2003) 3, 369-383.

Folse, K. S. (2004). Vocabulary Myths: Applying Second Language Research to Classroom Teaching. Ann Arbor: The University of Michigan Press.

Fonseca-Greber, B. & L. Waugh (2003). On the Radical Difference between the Subject Personal Pronouns in Written and Spoken European French. In: Language and Computers 46 (2003) 1, 225-240.

Godev, C. B. (2009). Word-frequency and Vocabulary Acquisition: An Analysis of Elementary Spanish College Textbooks in the USA. In: Revista de linguística teórica y aplicada 47 (2009) 2, 51-68.

Heilenman, L. K. (1993). Of Cultures and Compromises: Publishers, Textbooks, and the Academy. In: Publishing Research Quarterly 9 (1993) 2, 55-67.

Herschensohn, J. (1988). Linguistic Accuracy of Textbook Grammar. In: The Modern Language Journal 72 (1988) 4, 409-414.

Horst, M. (2013). Mainstreaming Second Language Vocabulary Acquisition. In: The Canadian Journal of Applied Linguistics 16 (2013) 1, 171-188.

Hunston, S. (2002). Corpora in Applied Linguistics. Cambridge: Cambridge University Press.

Kurbegov, E. (2006). Must-Know French: Essential Words for a Successful Vocabulary. New York: McGraw-Hill.

Lipinski, S. (2010). A Frequency Analysis of Vocabulary in Three First-Year Textbooks of German. In: Die Unterrichtspraxis/Teaching German 43 (2010) 2, 167-174.

Lonsdale, D. & Y. Le Bras (2009). A Frequency Dictionary of French: Core Vocabulary for Learners. New York: Routledge.

Matsuoka, W. & D. Hirsh (2010). Vocabulary Learning through Reading: Does an ELT Course Book Provide Good Opportunities? In: Reading in a Foreign Language 22 (2010) 1, 56-70.

McCoy, H. (2011). 2,001 Most Useful French Words. Mineola, NY: Dover Publications.

McCrostie, J. (2007). Investigating the Accuracy of Teachers' Word Frequency Intuitions. In: RELC Journal 38 (2007) 1, 53-66.

Meara, P. (1995). The Importance of an Early Emphasis on L2 Vocabulary. In: The Language Teacher 19 (1995) 2, 8-10.

Nation, P. (2000). Learning Vocabulary in Lexical Sets: Dangers and Guidelines. In: TESOL Journal 9 (2000) 2, 6-10.

Nation, P. (2001). Learning Vocabulary in Another Language. Cambridge: Cambridge University Press.

Nation, P. & J. Macalister (2010). Language Curriculum Design. New York: Routledge.

Nation, P. & P. Meara (2002). Vocabulary. In: Schmitt, N. (Ed.) (2002). An Introduction to Applied Linguistics. London: Arnold, 35-54.

O'Connor Di Vito, N. (1991). Incorporating Native Speaker Norms in Second Language Materials. In: Applied Linguistics 12 (1991) 4, 383-396.

O'Connor Di Vito, N. (1992). "Present" Concerns about French Language Teaching. In: The Modern Language Journal 76 (1992) 1, 50-57.

O’Loughlin, R. (2012). Tuning in to Vocabulary Frequency in Coursebooks. In: RELC Journal 43 (2012) 2, 255-269.

Richards, J. C. (1993). Beyond the Text Book: The Role of Commercial Materials in Language Teaching. In: RELC Journal 24 (1993) 1, 1-14.

Schmitt, N. (2000). Vocabulary in Language Teaching. Cambridge: Cambridge University Press.

Schmitt, N. (2008). Instructed Second Language Vocabulary Learning. In: Language Teaching Research 12 (2008) 3, 329-363.

Siepmann, D., C. Bürgel & S. Diwersy (2017). The Corpus de référence du français contemporain (CRFC) as the First Genre-Diverse Mega-Corpus of French. In: International Journal of Lexicography 30 (2017) 1, 63-84.

Tinkham, T. (1993). The Effects of Semantic Clustering on the Learning of Second Language Vocabulary. In: System 21 (1993) 3, 371-380.


Tomlinson, B. (Ed.) (2013). Applied Linguistics and Materials Development. London: Bloomsbury.

Walz, J. (1986). Is Oral Proficiency Possible with Today's French Textbooks? In: The Modern Language Journal 70 (1986) 1, 13-20.

Waring, R. (1997). The Negative Effects of Learning Words in Semantic Sets. In: System 25 (1997) 2, 261-274.

Author:

Dr. Jennifer Wagner
PhD in Applied Linguistics (University of South Australia)

Private address:
1301 East Lake Road
Clio, MI 48420
USA

Email: [email protected]



Transforming Can-do Frameworks in the L2 Classroom for Communication and Feedback¹

Norman Fewell & George MacLean (both Okinawa, Japan)

Abstract

Can-do statements have become increasingly popular among language teachers in recent years, providing a descriptive list of communicative tasks that may pinpoint areas needing attention. Students may also benefit from such can-do lists, as they are often asked to check off statements of what they believe they can or can’t do. This allows them to reflect on their individual needs in language learning. The idea seems simple enough. Nevertheless, claiming an ability to achieve a task is one thing, and actually being able to accomplish the task is another. In an attempt to create a more practical way of utilizing can-do statements for an EFL communication class, we have essentially flipped the framework and reframed the NCSSFL-ACTFL Can-Do Statements into a “show me you can do” list of commands. The list of can-do statements was modified into communicative group activities. A description of the effectiveness of these activities along with aspects of self and peer feedback will be discussed.

Keywords: Can-do benchmark statements, communication activities, feedback

1 Introduction

The NCSSFL-ACTFL Can-Do Statements (2013) were created through a collaborative effort between two educational organizations, the National Council of State Supervisors for Languages (NCSSFL) and the American Council on the Teaching of Foreign Languages (ACTFL). The purpose of this and other can-do lists is to establish a framework of reference for successive levels of language acquisition. In addition, can-do statements may provide a reference point for curriculum designers and course planners. The can-do descriptors attempt to cover all language skill sets, including interpersonal communication, presentational speaking, presentational writing, interpretive listening, and interpretive reading. A total of eleven proficiency levels have been classified in the NCSSFL-ACTFL Can-Do Statements, ranging from novice-low to the distinguished level. In essence, statements indicating what one can or cannot do in a communicative task provide an indication of an individual’s level within the framework. The statements have brought needed structure and standardization to an area of language teaching that has always lacked clarity. It is now possible, for instance, to classify a student as having upper-novice interpretive reading skills based on their ability to perform certain predefined tasks. Likewise, estimations of the level of educational materials are more accurately established with the addition of standardized ratings. Teachers can now more easily find materials that closely correspond to the proficiency levels of their students. A detailed list of can-do descriptors may also allow students and teachers to more easily see if there are any communicative areas that need additional attention. Furthermore, students may attempt to evaluate their own range of communicative abilities from the can-do descriptor list. Self-assessment of this nature is commonplace in language classes and typically undertaken as a means of determining the needs and proficiency levels of students. Nevertheless, the reliability of second language (L2) self-assessment has been questioned by some scholars (e.g., Janulevičienė & Kavaliauskienė 2010, Todd 2002). In response, one may consider teacher-rated assessments for students. However, the sheer number of items in the NCSSFL-ACTFL Can-Do Statements would make this impractical for most classroom situations; in practice, teachers would likely be unable to check students on an individual basis. We faced such a dilemma: there was a need to assess students’ proficiency levels, and the most practical option required utilizing student self- and peer assessment to some degree.

¹ This research was supported by Grants in Aid for Scientific Research from the Japan Society for the Promotion of Science, under project 26370665 "Immediate Feedback and the Use of Polling Systems for EFL Instruction".
It should be mentioned that several notable studies examining the reliability and validity of student self-assessment in the L2 classroom have found positive results when guidance was provided to students (Brown 2005, Coombe 2002). In addition, other studies have found that peer assessment yielded outcomes that were equal to or exceeded teacher assessment in accuracy (e.g., Topping 1998). Accordingly, all participants were provided with assessment training prior to beginning the can-do communicative project.

2 Literature Review

Conversation-based university EFL classes are popular among students, and large class enrollment numbers often reflect this fact. As such, the option of teacher-rated assessments is often impractical due to the sheer number of students. The EFL conversation-based classes affiliated with this study were in such a predicament. The can-do descriptors of the NCSSFL-ACTFL were therefore modified into conversational tasks rather than relying on the original construct of can-do tasks with preset check-off boxes. For instance, among the can-do descriptors listed in Figure 1 of the NCSSFL-ACTFL Can-Do Statements (2013), one item states, “I can tell someone how to access information online” (p. 8). The typical procedure in using the NCSSFL-ACTFL Can-Do Statements would require a student to first guess whether he or she could achieve the communicative task. If the task was considered achievable, the student would simply check off the box. An alternative to this routine was created in the form of a communication activity. Based on the example above from Figure 1, the can-do descriptor was modified into the following communicative task: “Explain to someone who has never used a computer how to find information online.” Once a student actually attempts to complete the communicative task, there is less doubt as to whether the task should be rated as achievable or not:

Figure 1: Sample of Specific Language Tasks from the NCSSFL-ACTFL Can-Do Statements (American Council for the Teaching of Foreign Languages 2013: 8)

The NCSSFL-ACTFL Can-Do Statements provide a formidable list of various communicative tasks. It is unclear if students have ever attempted to converse in all of the situations listed in the can-do descriptors. In any given statement, it may be quite challenging to determine if one can or cannot successfully achieve a communicative task - especially if it has never been attempted before. As a result, the authors felt that there was a need to reduce this ambiguity and modify the descriptors in such a way that students could attempt to communicate in each of the tasks before trying to assess their own abilities. In addition to these concerns, the sheer extent of the NCSSFL-ACTFL Can-Do Statements provides teachers with a generous amount of content and material for communication instruction. In short, these aspects have led to the creation of the modified can-do communicative tasks. We have adapted the descriptors at each level of the NCSSFLACTFL Can-Do Statements into tasks that were to be completed by individual students during group communication activities. Students would continue to engage in the communicative tasks at each successive level of the NCSSFL-ACTFL CanDo Statements for the next two months. A step-by-step description of the procedure for conducting the can-do communicative task activity consists of the following: Step 1: Formation of groups of three Step 2: Distribution of evaluation sheets (Figure 2) Step 3: Announcement of preparation and task time by the instructor. Step 4: Listing of member names on the evaluation sheets by students


JLLT Volume 8 (2017) Issue 1

Step 5: (Oral) presentation of a written description of the communicative task by the instructor
Step 6: Announcement of the start / stop time of each task by the instructor
Step 7: (After task completion) Rating of the speaker’s performance on the evaluation sheet by students
Step 8: Repetition of Steps 5-7 in sequential order until all members of the group have completed the task

The conventional procedure of having students directly assess items from the can-do checklist without first attempting them was omitted for the following reasons:



• The checklist could be easily modified into communication activities to determine directly whether tasks were attainable or not.



• Aspects of self and peer feedback could be monitored in a quasi-experimental classroom setting.

The resulting checklist had four task components, two of which were scored in increments of ten (100, 90, 80, 70, 60), and the other two in increments of twenty (100, 80, 60, 40, 20), as can be seen in Figure 2:

Can-Do Peer / Self-Assessment Task Components  |  Scoring
Amount of English Spoken                       |  100% 90% 80% 70% 60%
Quality of Speaking                            |  100% 90% 80% 70% 60%
Amount of Speaking                             |  100% 80% 60% 40% 20%
Completion of Task                             |  100% 80% 60% 40% 20%

Figure 2: Sample of the Evaluation Categories and Scoring for the Can-Do Communicative Tasks
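The scoring scheme in Figure 2 can also be expressed as a small data structure. The following Python sketch is purely illustrative (the study used evaluation sheets and, later, Google Forms; the `RUBRIC` mapping and the `validate_rating` helper are our own names, not part of the original study): it encodes the four task components with their permitted increments and rejects any rating outside the rubric.

```python
# Illustrative sketch of the Figure 2 rubric; the names used here are
# hypothetical and not part of the original study's materials.

# Permitted score increments for each task component (in percent)
RUBRIC = {
    "Amount of English Spoken": [100, 90, 80, 70, 60],
    "Quality of Speaking":      [100, 90, 80, 70, 60],
    "Amount of Speaking":       [100, 80, 60, 40, 20],
    "Completion of Task":       [100, 80, 60, 40, 20],
}

def validate_rating(component: str, score: int) -> bool:
    """Return True only if the score is a permitted increment for the component."""
    return score in RUBRIC.get(component, [])

# Example: 90 is a valid rating for Quality of Speaking,
# but not for Completion of Task, which moves in steps of 20.
assert validate_rating("Quality of Speaking", 90)
assert not validate_rating("Completion of Task", 90)
```

Representing the rubric this way makes the two different step sizes explicit: the two harder-to-judge components use finer, ten-point steps, while the two more easily quantifiable components use coarser, twenty-point steps.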

Since students were enrolled in conversation-based language classes, the NCSSFL-ACTFL Can-Do Statements used for this study were limited to those in the category of interpersonal communication (Figure 3). The communication activities were carefully constructed to be simple and well structured. Students partook in these group activities in each successive class for a period of two months:


Novice Low: I can communicate on some very familiar topics, using single words and phrases that I have practiced and memorized.

Novice Mid: I can communicate on very familiar topics, using a variety of words and phrases that I have practiced and memorized.

Novice High: I can communicate and exchange information about familiar topics, using phrases and simple sentences, sometimes supported by memorized language. I can usually handle short social interactions in everyday situations by asking and answering simple questions.

Intermediate Low: I can participate in conversations on a number of familiar topics, using simple sentences. I can handle short social interactions in everyday situations by asking and answering simple questions.

Intermediate Mid: I can participate in conversations on familiar topics, using sentences and series of sentences. I can handle short social interactions in everyday situations by asking and answering a variety of questions. I can usually say what I want to say about myself and my everyday life.

Figure 3: Sample of NCSSFL-ACTFL Interpersonal Communication Can-Do Benchmarks (American Council for the Teaching of Foreign Languages 2013: 4).

The activities required students to work together in groups of three with a turn-based rotation and to designate a main speaker to complete a communicative task. After the task was completed, students rated their own performance on the task alongside peer ratings from the other two students in the group. This dual-assessment system provided an additional layer of score validation. Peer ratings have been found to be reliable and consistent when compared to teacher ratings (Saito & Fujita 2004). The inclusion of peer ratings in the feedback rubric provided additional insight into a subject matter rarely studied in EFL (Saito 2000). Despite variability in communicative proficiency among the students who participated in the activity, they were instructed to begin at the novice-low level of the NCSSFL-ACTFL Can-Do Statements. At each level, several different communicative tasks were undertaken by each of the students. Since there were multiple tasks at each level, students did not have to contend with identical tasks. Although this presented a more challenging activity, the measure provided students with a closer degree of equality for mutually assessing each other’s performance. A preset time for preparation was given to all students, along with a set time designated for completing each communicative task. The preparation time was gradually shortened at each successive level, while the time to complete each communicative task was extended. These adjustments were made to condition students to gradually develop the skills necessary for spontaneous speech. In every class, each student would typically complete three tasks, and the total time allotted per task for this activity averaged fifteen minutes for the group as a whole. Immediately following each task, peer / self-evaluation forms were completed by students in each group. Ratings were given by each student to measure aspects of speaking performance.

3 Method

3.1 Research Questions

In this study, the following research questions were examined:



• Did the activity promote communication in the target language (TL)?



• How was the activity perceived by participants?



• What impact did the feedback format have in terms of the activity and student perspectives?

3.2 Data Collection and Participants

The present study investigated in-group communication activities that were modified from a list of communicative tasks in the NCSSFL-ACTFL Can-Do Statements. Communication activities were performed by EFL learners at successive levels of difficulty. The participants comprised 98 students from two public Japanese universities: Meio University and the University of the Ryukyus. Instructors at both institutions followed identical procedures in directing students in groups to perform each communicative task, followed by an evaluation of peer and self-performance on the tasks. Students assessed the following:

• amount of English;
• amount of speaking;
• quality of speaking, and
• completion of task.

Peer and self-assessment scores were input immediately after each activity using Google Forms and were based on a percentage scale. The two task components Amount of English Spoken and Quality of Speaking were rated in increments of ten (100, 90, 80, 70, 60), whereas the more easily quantifiable components Amount of Speaking and Completion of Task were rated in increments of twenty (100, 80, 60, 40, 20) (Figure 2).
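As a sketch of how ratings collected this way can be aggregated, the following Python fragment separates self-ratings (where rater and ratee coincide) from peer ratings and averages each group per task component. The field names and sample rows are hypothetical; the study's actual spreadsheet layout is not documented here.

```python
# Hypothetical aggregation of peer / self ratings collected via Google Forms.
# The column layout and sample rows below are illustrative only.
from collections import defaultdict

rows = [
    # (rater, ratee, component, score in percent)
    ("A", "A", "Completion of Task", 80),   # self-rating
    ("B", "A", "Completion of Task", 100),  # peer rating
    ("C", "A", "Completion of Task", 80),   # peer rating
    ("A", "A", "Amount of Speaking", 60),
    ("B", "A", "Amount of Speaking", 80),
    ("C", "A", "Amount of Speaking", 80),
]

def mean_scores(rows):
    """Average self vs. peer scores separately for each task component."""
    groups = defaultdict(list)
    for rater, ratee, component, score in rows:
        kind = "self" if rater == ratee else "peer"
        groups[(component, kind)].append(score)
    return {key: sum(scores) / len(scores) for key, scores in groups.items()}

means = mean_scores(rows)
# Peer mean for Completion of Task: (100 + 80) / 2 = 90.0
```

Keeping the self / peer split in the keys makes it straightforward to compare the two rating sources per component and per level, which is the comparison reported in the Findings section below.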

4 Findings

4.1 Self and Peer Ratings

The items assessed on the rating scale consisted of the following: the amount of English used as opposed to Japanese, the quantity of speaking versus silence, the quality of speaking, and the degree of task completion. Students were asked to rate their own performance on tasks for each category and to rate the performance of the others in the group as well. This provided a means not merely to measure potential differences between self- and peer ratings, but also to promote critical self-awareness in areas of speech delivery. For instance, the scale of 'English use and Japanese use' was explicitly included to encourage students to maintain an English-speaking environment during the course of the group communicative activities. Reluctance to communicate in the L2, even in instances involving simple communication, is a noted obstacle in conversational tasks in Japanese university EFL classes (Eguchi & Eguchi 2006). The inclusion of the English use and Japanese use item was an attempt to encourage students to strive towards using the L2 rather than the L1. After providing students with a thorough explanation of the procedures and goals of the can-do activity, it was encouraging to find that they were quick to adjust. Compliance in attempting to maintain an ‘English-speaking’ atmosphere was immediate (Table 1). Likewise, at the beginning stages, each of the task components measured in the self / peer assessment forms was completed with a high degree of success. This may be due to the level of simplicity at the novice-low level. The inclusion of the most basic levels of the can-do list was based on the premise of raising students' self-confidence to communicate in the target language and of ensuring that they gained a firm grasp of the routine of the activity:

Table 1: Amount of English Used at Each Level: Peer & Self-Assessment

Peer and self-evaluation scores for the amount of English used at the novice-low level were 9.43 and 9.19, respectively. These scores decreased slightly (9.05 and 8.90 at the intermediate-low level) as students progressed to more sophisticated tasks. By the culminating intermediate-high level, however, scores had risen slightly above the initial ones. The difference between peer and self-evaluation scores was minimal, and the standard deviation was normal. It is interesting to note that, as students progressed to more advanced levels of the can-do framework and faced increasingly difficult tasks, the amount of speaking mirrored the scoring pattern noted above for the amount of English used:

Table 2: Amount of Speaking at Each Level: Peer & Self-Assessment

Peer and self-evaluation scores for the amount of speaking at the novice-low level were 8.22 and 8.38, respectively. Scores decreased slightly until shortly after the intermediate-low level (9.05 and 8.90), likely due to the increasing complexity of the tasks. However, scores began to rise thereafter and, by the end of the intermediate-high level tasks, were again slightly higher than the initial scores. The difference between peer and self-evaluation scores remained minimal throughout. The standard deviation was slightly elevated, but still within a normal range. Students reported a fairly high level of task completion, although this was the task component on which they rated themselves most severely. Once again, ratings dropped initially before recovering at the uppermost level and ultimately surpassing scores from the initial level:


Table 3: Completion of Task at Each Level: Peer & Self-Assessment

Peer and self-evaluation scores for the completion of task at the novice-low level were 8.19 and 7.90, respectively. At the intermediate-low level, scores dropped to 7.81 and 7.71, subsequently rebounding to 8.35 and 8.76 by the end of the intermediate-high level. The difference between peer and self-evaluation scores remained minimal throughout. The standard deviation was again slightly elevated, but still within a normal range. Peer and self-assessment ratings for the quality of speaking were relatively high and gradually increased after each rendition of the tasks at each level:

Table 4: Quality of Speaking at Each Level: Peer & Self-Assessment

At the novice-low level, peer and self-evaluation ratings were 8.22 and 8.05, respectively. Scores increased (8.56 and 8.43) by the intermediate-low level and continued to rise steadily through the intermediate-high level tasks, where the final scores were 8.97 and 9.05, respectively. The difference between peer and self-evaluation scores was very small throughout the tasks, and the standard deviation was within a normal range.
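The small peer-self gap reported above can be illustrated numerically. Using the quality-of-speaking means reported in this section as input (a back-of-the-envelope check of the reported figures, not a reanalysis of the raw data), a short Python snippet computes the absolute peer-self difference at each level:

```python
# Reported mean ratings for Quality of Speaking (peer, self) at three levels,
# taken from the text above; the computation itself is merely illustrative.
quality = {
    "novice-low":        (8.22, 8.05),
    "intermediate-low":  (8.56, 8.43),
    "intermediate-high": (8.97, 9.05),
}

# Absolute peer-self difference per level, rounded to two decimals
gaps = {level: round(abs(peer - self_), 2)
        for level, (peer, self_) in quality.items()}
# {'novice-low': 0.17, 'intermediate-low': 0.13, 'intermediate-high': 0.08}

largest_gap = max(gaps.values())  # under 0.2 points on a ten-point scale
```

The gap shrinks across levels, which is consistent with the text's observation that peer and self-evaluations converged as students became accustomed to the rating routine.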

4.2 Points of Assessment

Another item highlighting self-awareness, quantity of speaking versus silence, likewise placed emphasis on a key area of concern for language teachers. Many EFL teachers are all too familiar with witnessing students sitting in silence during communication activities. It is often the case that these students have completed the stated goal of the activity, but with minimal language use. Carless (2004) and Lee (2005) observed that students’ output achieved only minimal language use in their investigations of task-based instruction (as cited in Littlewood 2007: 245). In the observed communication activities, students in both studies produced only the minimal level of language needed to complete each task. Therefore, the purpose of including an evaluative measure for speaking versus silence was to encourage students to fill the designated time of the activity with language. Extending responses beyond short utterances became an ongoing communicative goal for students in completing each task. The item degree of completing the task was included to provide numerical values for task completion. Surprisingly, the NCSSFL-ACTFL Can-Do Statements and other frameworks are systematically arranged with an all-or-nothing approach that typically consists of a check-off list with only yes or no responses. The inclusion of numerical values in the communication activities provides learners with a more precise measurement scale to assess the degree of success in completing any given task. As for quality of delivery, it is quite possibly the most controversial item on the scale. Although assessing the ‘quality’ of speech of an L2 learner accurately may be a daunting task, the inclusion of this item may serve best as a motivational tool of support, assuming that feedback is generally positive. Students in our classes were often enthusiastic and less critical of one another. This particular item was not necessarily used as a means of acquiring raw data on the actual speaking performance of students; rather, it was a positive reinforcing mechanism for students to show support to one another.

4.3 Progress and Developments

Throughout the two-month period during which the communication activities were integrated into class, both self- and peer evaluations were recorded and monitored. As previously mentioned, communication activities and assessment occurred within groups of three students. Although details of individual assessment categories have been discussed earlier, an overview of the general findings will be presented here. Three of the areas assessed showed similar response patterns throughout the duration of the activities. The overall results displayed a U-shaped pattern in three of the areas investigated: the amount of speaking, the amount of English, and the completion of tasks. In the first few weeks, students engaged in novice-level communication activities. Since the can-do communication activity included the full range of descriptor levels, the initial weeks consisted of activities that were at the beginner level and easily attainable for students. As a result, ratings tended to be high. After several weeks, scores began to drop in three categories. Here, the U-shaped pattern began to take shape as the content of the communicative tasks reached a higher degree of difficulty. The intermediate-mid level marks the point at which the dip in results was at its lowest. In essence, a significant portion of students seemed to have difficulty completing the tasks. A drop was expected as the degree of difficulty gradually increased in each ensuing week. However, it was unexpected to find that the scores began to increase gradually in the last few weeks as the study neared completion. The reason for this increase is unclear. Students may have begun to adapt to the routine of the activity, to anticipate the expected difficulties, and to handle the areas of assessment more effectively. The general U-shaped patterns found in this study have been observed in several other studies in second language acquisition (Shirai 1990). The range, content, variables, and focus of these studies are too widespread to consider here. Still, in the realm of second language acquisition, it is customary for researchers to investigate aspects of learning, such as facing difficulties and overcoming them. In this sense, the students engaged in the can-do communicative tasks likewise faced difficulties initially and seemed to overcome them. As for the initial high scores, the educational routine of providing students with an attainable entry-level communicative element is commonplace and could offer an explanation for this occurrence. Nevertheless, this is a more or less speculative observation that would demand further investigation. Among the four assessment areas evaluated, the quality of speech was unique in maintaining a relatively high rating throughout each level of the can-do communication activities. Although the factors that contributed to this outcome have not been precisely identified, there are several possibilities to consider. The quality of speech is a subjective area that cannot be easily evaluated. It may be important to consider the cultural element of collectivism as a potentially influential variable, whereby students may consider socialization to be more important than the accuracy of a score evaluating a fellow student’s quality of speech.
Nevertheless, the slight differences between peer and self-evaluations for this task component indicate that a considerable degree of consensus existed among students as to what constitutes the notion of quality of speech, and further inquiries should consider the potential factors influencing these results:

Table 5: Comparison of Student Peer and Self-Assessment Scores at Each Level by Task

5 Discussion

Class observations of students engaged in the activities revealed a degree of excitement in the group dynamics. As students were periodically reminded that tasks would gradually become more challenging at each successive level, they became aware that they were progressing in a direct and systematic direction. This seemed to be a motivating factor for students who are typically exposed to activities with ambiguous goals or purposes. It also seemed that students were interested in finding out specifically what they were able to communicate in English and what level they were able to achieve successfully. Students appeared strongly motivated to attempt each and every task. Since students were able to gain an immediate understanding of the basic directives and soon became well adjusted to the routine of the activity, minimal instructions before each activity saved valuable class time. At the beginning of each activity, students were given preparation time to briefly contemplate the task beforehand. For instance, at the novice-high level, students were given a preparation time of 30 seconds to contemplate the task before speaking. This preparation component was a critical and effective addition to the speaking tasks. It was also important to condition students to achieve the desired goal of spontaneous conversation with little or no preparation time. Therefore, the preparation time was gradually shortened at each successive level. At the beginning of each activity, the teacher would read the task aloud from directions displayed on a large screen. As each student finished a task, followed by peer / self-assessment ratings, another student would immediately prepare for the next task. In order to avoid monotonous redundancy with identical content, each student was given a communicative task different from those of the other members of the group. Tasks were slightly altered to provide variation while maintaining the difficulty level of the targeted communication objective. For instance, at the novice-high level, students were asked to give simple directions.
As such, adjustments to the locations (e.g. cafeteria, gym) were sufficient to provide some variety while maintaining the objectives of the can-do level. The communication activities were effective in several respects. For instance, the establishment of a set routine reduced confusion among students and eliminated the need for time-consuming explanations that can often take up a significant portion of class time (e.g. Sinclair & Brazil 1982). Since these activities were integrated into classes for a two-month period, students became quite familiar with the procedure in the ensuing weeks. They seemed to anticipate each step of the process. For example, they would immediately place their scores on the assessment sheet once time expired in the communication activity. Soon afterwards, the next student in the group to attempt the communication task would attentively wait and prepare for the upcoming activity. In essence, the activity became systematic and predictable for students from one step to the next. As one student mentioned:

This activity is good for me since I am able to make a short conversation, and I can easily understand the task at hand.

A well-structured activity should be logically sequenced in a way that students can clearly understand (Richards 1987). As such, these routine activities minimized potential confusion in the classroom and provided students with more opportunities to communicate in the L2. Practicing the target language along a direct and clear path of gradually increasing intensity was a motivating factor for many students. As advocated in numerous studies (e.g. Long & Porter 1985, Sato 2003, Storch 2002), communication activities in small groups are advantageous for L2 learners. Accordingly, the construct of a small three-member group was helpful in enhancing motivation levels. The group activities were beneficial for most students, as one of them described:

…allowing me to get to know more about my classmates

Another student felt the activities were helpful because he

…became motivated by others in the group

In addition, the evaluation forms were helpful not only in providing students with instantaneous feedback following each task, but also in highlighting critical concerns faced by language teachers in group communicative activities. These included the following:



• raising self-awareness and subsequently self-control in the use of the L1 while participating in target-language activities,



• emphasizing extended speaking rather than silence during the designated time for each task, and



• providing a supportive assessment scale to enhance the motivational levels of group members in measuring the quality of tasks.

The consciousness-raising effect of the feedback forms effectively controlled L1 use during the activities, as one student mentioned:

…while doing the activity, I try to use English only, even when some members had a hard time understanding me, but once the activity is finished, I used Japanese to explain to them what I wanted to say.

The activities were praised by one student for providing …good opportunities to speak English.

Another student stated that the

…group talk is a good time for me to express my opinions and to practice speaking English.


The can-do communication activities helped promote a positive and dynamic atmosphere for students to practice their English language skills.

6 Conclusion

The modification of the NCSSFL-ACTFL Can-Do Statements into a routine and systematic group communication activity may offer teachers a more practical way of utilizing framework descriptors for language classes. Contemplating one’s language proficiency on the basis of a collection of descriptors may offer some benefit to L2 learners, but transforming the framework from a checklist into an interactive communication activity may arguably be a more effective means of enhancing the learning experience. In interviews and questionnaires conducted at the conclusion of this two-month project, students gave overwhelmingly positive responses. Additionally, the inclusion of a feedback scheme incorporating a number of aspects that promoted motivational support and reinforced awareness of potential obstacles during communication (e.g. silence and L1 use) may help fulfill the needs of teachers searching for feasible alternatives to integrate the can-do framework component into language classes. The reduction of L1 use and silence during the activities was achievable through a feedback construct that promoted self-awareness of these communicative objectives. The communication activities were effective in the sense that, despite an extended two-month period with tasks increasing in difficulty for speakers with a shared L1, students were able to complete the tasks with a minimal amount of L1 use and to fill the task time with continual speaking. In regard to the completion of tasks, the scores were fairly high overall. However, as in the scoring pattern of most of the other areas investigated, there was an initial drop in self- and peer ratings, followed by a late recovery of higher ratings in the last stages of the communication activity (Table 3 above). Although these levels varied among participants, this was a recurring pattern.
Among the feedback items, it is important to note that the ratings on the quality of delivery had an underlying purpose. Accuracy in calculating speaker performance was not a primary concern; rather, this item served as a supportive tool. Students were instructed to be less critical and more constructive in rating this item. The purpose of its inclusion was to provide a means of peer encouragement and support. Nevertheless, it is interesting to note that self- and peer feedback assessments were rated similarly across all can-do levels. The transformation of the NCSSFL-ACTFL Can-Do Statements into class activities could have a positive effect in addressing practical communication needs while fulfilling requirements to include a language framework construct.


References

American Council for the Teaching of Foreign Languages (2013). NCSSFL-ACTFL Can-Do Statements: Progress Indicators for Language Learners. Alexandria, VA: ACTFL. (http://www.actfl.org/sites/default/files/pdfs/Can-Do_Statements.pdf; 14.09.2016).

Brown, A. (2005). Self-assessment of Writing in Independent Language Learning Programs: The Value of Annotated Samples. In: Assessing Writing 10 (2005) 3, 174-191.

Carless, D. (2004). Issues in Teachers’ Reinterpretation of a Task-based Innovation in Primary Schools. In: TESOL Quarterly 38 (2004) 4, 639-662.

Eguchi, M. & K. Eguchi (2006). The Limited Effect of PBL on EFL Learners: A Case Study of English Magazine Projects. In: Asian EFL Journal 8 (2006) 3, 207-225.

Janulevičienė, V. & G. Kavaliauskienė (2011). Self-assessment of Vocabulary and Relevant Language Skills for Evaluation Purposes. In: Coactivity: Philology, Educology / Santalka: Filologija, Edukologija 15 (2011) 4, 10-15.

Lee, S.-M. (2005). The Pros and Cons of Task-based Instruction in Elementary English Classes. In: English Teaching 60 (2005) 2, 185-205.

Littlewood, W. (2007). Communicative and Task-based Language Teaching in East Asian Classrooms. In: Language Teaching 40 (2007) 3, 243-249.

Long, M. et al. (1976). Doing Things with Words: Verbal Interaction in Lockstep and Small Group Classroom Situations. In: Fanselow, J. & R. Crymes (Eds.) (1976). On TESOL ’76. Washington, DC: TESOL, 137-153.

Richards, J. C. (1987). The Dilemma of Teacher Education in TESOL. In: TESOL Quarterly 21 (1987) 2, 209-226.

Saito, H. & T. Fujita (2004). Characteristics and User Acceptance of Peer Rating in EFL Writing Classrooms. In: Language Teaching Research 8 (2004) 1, 31-54.

Sato, K. (2003). Improving our Students’ Speaking Skills: Using Selective Error Correction and Group Work to Reduce Anxiety and Encourage Real Communication. Akita, Japan: Prefectural Akita Senior High School. ERIC Document Reproduction Service No. ED 475 518.

Shirai, Y. (1990). U-shaped Behavior in L2 Acquisition. In: Burmeister, H. & P. L. Rounds (Eds.) (1990). Variability in Second Language Acquisition: Proceedings of the Tenth Meeting of the Second Language Research Forum, Vol. 2. Eugene, OR: University of Oregon, Department of Linguistics, 685-700.

Sinclair, J. McH. & D. Brazil (1982). Teacher Talk. London: Oxford University Press.

Storch, N. (2002). Patterns of Interaction in ESL Pair Work. In: Language Learning 52 (2002) 1, 119-158.

Todd, R. W. (2002). Using Self-assessment for Evaluation. In: English Teaching Forum 40 (2002) 1, 16-19. (http://exchanges.state.gov/forum/; 12.05.2016).


Topping, K. (1998). Peer Assessment between Students in Colleges and Universities. In: Review of Educational Research 68 (1998) 3, 249-276.

Authors:

Norman Fewell
Senior Associate Professor
Meio University
College of International Studies
Okinawa, Japan
Email: [email protected]

George MacLean
Professor
University of the Ryukyus
Global Education Center, Foreign Language Unit
Okinawa, Japan
Email: [email protected]


The Development and Validation of a Teacher Assessment Literacy Scale: A Trial Report

Kay Cheng Soh & Limei Zhang (both Singapore)

Abstract

Teachers share a similar responsibility with healthcare professionals in the need to interpret assessment results. Interest in teacher assessment literacy has a short history but has gained momentum in recent years. There are not many instruments for measuring this important professional capability. The present study presents the results of a trialed Teacher Assessment Literacy Scale which covers four essential aspects of educational measurement. Both classical and Rasch analyses were conducted, with encouraging psychometric qualities.

Keywords: Assessment, Assessment literacy, Measurement, Testing

What most of today's educators know about education assessment would fit comfortably inside a kindergartner's half-filled milk carton. This is astonishing in light of the fact that during the last 25 years, educator competence has been increasingly determined by student performance on various large-scale examinations… A profession's adequacy is being judged on the basis of tools that the profession's members don't understand. This situation is analogous to asking doctors and nurses to do their jobs without knowing how to interpret patient charts… (Popham, 2006: para. 1; emphases added)

1 Introduction

There is no better way to emphasize the importance of assessment literacy than to quote Popham (2006). About one decade ago, Popham drew this very apt analogy between the educational and healthcare professions, in both of which the proper use of test information is crucial. Not only do teachers need assessment literacy; so does everyone with an interest in education, including school leaders, policy-makers, and parents.

In the past, patients were passive recipients of medical treatment. Present-day patients are involved in the healing process; they are informed, and they are engaged. Analogously, in the past, student assessment tools were crafted by test specialists while teachers were passive users. This is true at least in the American context, where standardized tests are a regular fixture of the school. Nowadays, with the emphasis on assessment for learning (or formative assessment) in contrast to assessment of learning (or summative assessment), teachers, in America and elsewhere, are expected to use assessment in a more engaged manner to help students learn. Teachers are therefore expected to use test information not only for assessment of learning but also, perhaps more importantly, for assessment for learning. This shift all the more underlines the importance of teachers' assessment literacy if they are to complete this crucial aspect of their job with professionalism. Due to the change in the emphasis on formative assessment and its contribution to learning (Fulcher 2012), the notion of assessment literacy has changed accordingly. Traditionally, assessment emphasized objectivity and accuracy (Spolsky 1978, 1995), due to the influence of the psychometric and positivistic paradigms, and testing activities were normally carried out at the end of learning periods (Gipps 1994, Wolf, Bixby, Glenn & Gardner 1991). In that context, only measurement specialists, but not frontline teachers, were expected to have specialized knowledge of test development, score interpretation, and theoretical concepts of measurement. In contrast, assessment is now perceived as an integral part of teaching and learning, providing timely feedback to guide further instruction and learning.
This development requires teachers to design assessments, make use of test results to promote teaching and learning, and be aware of the inherent technical problems as well as the limitations of educational measurement (Fulcher 2012, Malone 2008). Thus, it is important that teachers have sufficient practical skills as well as theoretical understanding.

2 Assessment Literacy Measures

2.1 The Importance of Assessment Literacy

Over the years, efforts have been made to measure teacher assessment literacy. Gotch & French (2014) systematically reviewed teacher assessment literacy measures within the context of contemporary teacher evaluation policy. The authors collected objective tests of assessment knowledge, teacher self-reports, and rubrics used to evaluate teachers' work in assessment literacy studies from 1991 to 2012. They then evaluated the psychometric work on these measures against a set of claims related to score interpretation and use. Across the 36 measures reviewed, they found weak support for these claims, which highlights the need for further work on assessment literacy measures in the educational measurement field. Later, DeLuca, LaPointe-McEwan & Luhanga (2016) emphasized that assessment


literacy is a core professional requirement across educational systems and that measuring and supporting teachers' assessment literacy have been a primary focus over the past two decades. At present, according to the authors, there is a multitude of assessment standards across the world and numerous assessment literacy measures representing different conceptions of assessment literacy. DeLuca, LaPointe-McEwan & Luhanga analyzed assessment literacy standards from five English-speaking countries (Australia, Canada, New Zealand, the UK, and the USA) and Europe to understand shifts in the assessment standards developed after 1990. Through a thematic analysis of 15 assessment standards and an examination of eight assessment literacy measures, the authors noticed shifts in standards over time, though the majority of the measures continue to be based on early conceptions of assessment literacy. Stiggins (1991) first coined the term assessment literacy to refer to teachers' understanding of the differences between sound and unsound assessment procedures and the use of assessment outcomes. Teachers who are assessment literate should have a clear understanding of the purposes and targets of assessment, competence in choosing appropriate assessment procedures, and the capability of conducting assessment effectively and of avoiding pitfalls in the process of assessment practices and the interpretation of results. This sounds simple but can be a tall order in actuality. For example, the by now classic textbook of educational measurement by Linn & Miller (2005) has altogether 19 chapters in three parts. The five chapters in Part I cover such topics as the role of assessment, instructional goals of assessment, concepts of reliability and validity, and issues and trends. These may not be of immediate relevance to classroom teachers' work but provide the necessary conceptual background for teachers to be informed assessors.
Part II has 10 chapters of a technical or procedural nature, which equip teachers with the necessary practical skills in test design using a wide range of item formats. The concluding Part III has four chapters, dealing with selecting and using published tests as well as the interpretation of scores involving basic statistical concepts. The three parts that make up the essential domains of assessment literacy expected of classroom teachers are typical of many educational measurement texts that support teacher training programs. According to Popham (2009), increasing numbers of professional development programs have dealt with assessment literacy for teachers and administrators. Popham then asked whether assessment literacy is merely a fashionable focus or whether it should be regarded as a significant area of professional development for years to come. Popham first divided educators' measurement-related concerns into either classroom assessments or accountability assessments, and then argued that educators' inadequate knowledge of either of these can cripple the quality of education. He concluded that assessment literacy is a condicio sine qua non for today's competent educator and must be a pivotal content area for current and future staff development. For this, 13 topics were set forth for the design of assessment literacy programs. He proposed a two-pronged approach to solve the problem: until pre-service teacher education programs begin producing assessment-literate teachers, professional


developers must continue to rectify this omission in educators' professional capabilities. In short, Popham sees assessment literacy as a commodity needed by teachers for their own long-term well-being and for the educational well-being of their students. The topics proposed are as follows:

1. The fundamental function of educational assessment, namely the collection of evidence from which inferences can be made about students' skills, knowledge, and affect;

2. Reliability of educational assessments, especially the three forms in which consistency evidence is reported for groups of test-takers (stability, alternate-form, and internal consistency) and how to gauge consistency of assessment for individual test-takers;

3. The prominent role three types of validity evidence should play in the building of arguments to support the accuracy of test-based interpretations about students, namely content-related, criterion-related, and construct-related evidence;

4. How to identify and eliminate assessment bias that offends or unfairly penalizes test-takers because of personal characteristics such as race, gender, or socioeconomic status;

5. Construction and improvement of selected-response and constructed-response test items;

6. Scoring of students' responses to constructed-response test items, especially the distinctive contribution made by well-formed rubrics;

7. Development and scoring of performance assessments, portfolio assessments, exhibitions, peer assessments, and self-assessments;

8. Designing and implementing formative assessment procedures consonant with both research evidence and experience-based insights regarding the probable success of such procedures;

9. How to collect and interpret evidence of students' attitudes, interests, and values;

10. Interpreting students' performances on large-scale, standardized achievement and aptitude assessments;

11. Assessing English Language learners and students with disabilities;

12. How to appropriately (and not inappropriately) prepare students for high-stakes tests;

13. How to determine the appropriateness of an accountability test for use in evaluating the quality of instruction.



The list seems overwhelming and demanding. However, it shows that teachers' assessment literacy is a complex area of professional development which should be taken seriously if a high standard of professionalism is to be maintained. Nonetheless, Popham's expectation is a modest one:

When I refer to assessment literacy, I'm not thinking about a large collection of abstruse notions unrelated to the day-to-day decisions that educators face. On the contrary, assessment-literate educators need to understand a relatively small number of commonsense measurement fundamentals, not a stack of psychometric exotica. (Popham 2006: 5)

While Popham's expectation is more realistic and palatable to busy classroom teachers, there is still a need for specifics. To this end, Plake & Impara (1993) developed the Teacher Assessment Literacy Questionnaire, which was later adapted as the Classroom Assessment Literacy Inventory (Mertler & Campbell 2005). It comprises 35 items measuring teachers' general concepts about testing and assessment as well as teachers' background information. These items were organized under seven scenarios featuring teachers who were facing various assessment-related decisions. The instrument was first trialed on 152 pre-service teachers and obtained an overall reliability of KR20 = 0.75, with an average item difficulty of F = 0.64 and an average item discrimination of D = 0.32. Using another sample of teachers, Mertler & Campbell (2005) found a reliability of KR20 = 0.74, an average item difficulty of F = 0.68, and an average item discrimination of D = 0.31. In short, this instrument shows acceptable reliability, at least for research purposes, and is of moderate difficulty but low discrimination. In the adaptation of the Classroom Assessment Literacy Inventory, Mertler (2005) followed The Standards for Teacher Competence in the Educational Assessment of Students (AFT, NCME & NEA 1990). Seven such standards were included, resulting in items measuring the following seven aspects of assessment literacy:

1. Choosing Appropriate Assessment Methods
2. Developing Appropriate Assessment Methods
3. Administering, Scoring, and Interpreting the Results of Assessments
4. Using Assessment Results to Make Decisions
5. Developing Valid Grading Procedures
6. Communicating Assessment Results
7. Recognizing Unethical or Illegal Practices

It stands to reason that these seven aspects are intimately relevant to teachers' day-to-day assessment responsibilities and that it is reasonable to expect all teachers to be equipped with the attendant understanding and skills.
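The KR20 coefficient reported above is computed directly from dichotomously scored item responses, using the formula KR20 = (k / (k − 1)) · (1 − Σ pᵢqᵢ / σ²). The following sketch illustrates the computation; the response matrix is invented for illustration and is not data from any of the studies cited:

```python
# Kuder-Richardson Formula 20 for dichotomously scored (0/1) items.
# The response matrix below is illustrative, not data from the study.

def kr20(responses):
    """responses: list of per-person lists of 0/1 item scores."""
    n_items = len(responses[0])
    n_persons = len(responses)
    # Sum of p*q over items: p = proportion correct, q = 1 - p.
    pq_sum = 0.0
    for i in range(n_items):
        p = sum(person[i] for person in responses) / n_persons
        pq_sum += p * (1 - p)
    # Population variance of the total scores.
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n_persons
    var = sum((t - mean) ** 2 for t in totals) / n_persons
    return (n_items / (n_items - 1)) * (1 - pq_sum / var)

data = [
    [1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1],
    [1, 1, 1, 0], [0, 0, 0, 1], [1, 1, 1, 1],
]
print(round(kr20(data), 3))
```

With so few items and persons the coefficient is unstable; the point of the sketch is only the structure of the formula, in which low inter-item consistency drives the value toward zero.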



Later, Fulcher (2012), in England, developed a measure of assessment literacy with 23 closed-ended items. The items cover teachers' knowledge of test design and development, large-scale standardized testing, classroom testing and its washback, as well as validity and reliability. In addition, the measure includes constructed-response items eliciting teachers' feedback on their experience in assessment, and background information. Fulcher's study involved 278 teachers, 85% of whom held a Master's degree and 69% of whom were female. Analysis of the quantitative data yielded a Cronbach's α = 0.93 (which is rather high), and a factor analysis with Varimax rotation returned four orthogonal factors accounting for 52.3% of the variance. Although this proportion is rather low, the four factors are interpretable: (1) Test Design and Development (17.9%), (2) Large-scale Standardized Testing (16.5%), (3) Classroom Testing and Washback (12.0%), and (4) Validity and Reliability (6.5%). The four factor scales have respectable Cronbach's coefficients of α = 0.89, α = 0.86, α = 0.79, and α = 0.94, respectively.
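Cronbach's α, reported for Fulcher's scale above, generalizes KR20 to items scored on more than two points: α = (k / (k − 1)) · (1 − Σ σ²ᵢ / σ²_total). A minimal sketch, again with made-up responses rather than Fulcher's data:

```python
# Cronbach's alpha: internal-consistency reliability for scored items.
# The 6-person, 4-item response matrix is illustrative only.

def cronbach_alpha(responses):
    """responses: list of per-person lists of item scores."""
    k = len(responses[0])

    def variance(values):
        # Sample variance (n - 1 denominator).
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([person[i] for person in responses])
                 for i in range(k)]
    total_var = variance([sum(person) for person in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 3, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
    [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 1, 2],
]
print(round(cronbach_alpha(data), 2))
```

When items rank respondents consistently, as in this toy matrix, the item variances are small relative to the total-score variance and α approaches 1.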

2.2 Relevance to the Present Study

The review of the pertinent literature on assessment literacy and its measurement has implications for the present study. First, in recent years, the Singapore Ministry of Education has launched initiatives emphasizing higher-order thinking skills and deep understanding in teaching, such as Teach Less, Learn More (TLLM) and Thinking Schools, Learning Nation (TSLN). Consequently, school teachers are required to make changes to their assessment practice and to equip themselves with sufficient assessment literacy. In spite of the importance of assessment literacy, few studies have examined Singapore teachers' assessment knowledge and skills. Among these few studies, Koh (2011) investigated the effects of professional development on Primary 4 and 5 teachers of English, Science and Mathematics. She found that ongoing professional development in assessment literacy is especially effective in improving teachers' assessment literacy, compared with workshops that merely trained teachers to design assessment rubrics. The findings suggest that, to successfully develop teachers' assessment literacy, the training needs to be broad enough in the topics covered and has to be extended over a reasonable period of time. In a more recent study, Zhang & Chin (under review) examined the learning needs in language assessment among 103 primary school Chinese Language teachers, using the assessment literacy survey developed by Fulcher (2012). The results provide an understanding of teachers' interest and knowledge in test design and development, large-scale testing, classroom testing, and test validity and reliability. With a very limited number of studies in the Singapore context, there is obviously a need for more studies for a better understanding of Singapore school teachers' assessment literacy. For carrying out such studies, it is necessary to develop an assessment literacy scale which is broad enough and


yet concise enough to measure teachers' assessment competence properly and accurately. Secondly, in systems like that of the USA, where standardized tests are designed by test specialists through a long and arduous process of test development applying sophisticated psychometric concepts and principles (with regular intermittent revisions), it is reasonable to assume that the resultant assessment tools made available to teachers are of high psychometric quality. In this case, the most critical aspect of assessment literacy that teachers need is the ability to properly interpret the results they obtain through the tests. Measurement knowledge beyond this level is good to have but not really needed. However, in a system like that of Singapore, where standardized tests are not an omnipresent fixture, teacher-made tests are almost the only assessment tool available. This points to the teachers' need for assessment literacy of a much broader range, going beyond the interpretation of test results. Therefore, the present study aims to develop an instrument to measure teachers' assessment literacy in the Singapore context. Thirdly, previous studies have provided a framework for researchers to follow in designing an assessment literacy scale. As one of the most influential studies in language assessment literacy, Fulcher (2012) expanded the definition of assessment literacy. According to him, assessment literacy comprises knowledge on three levels:



Level 1 concerns the knowledge, skills, and abilities involved in the practice of assessment, especially in terms of test design. Specifically, this type of knowledge includes how to decide what to test, how to write test items and tasks, how to develop test specifications, and how to develop rating scales.



Level 2 refers to the processes, principles and concepts of assessment, which are more relevant to quality standards and research. This type of knowledge includes validity, reliability, fairness, accommodation, washback / consequences as well as ethics and justice of assessment.



Level 3 is about the historical, social, political, philosophical and ethical frameworks of assessment.

Following Fulcher's (2012) framework, we aim to measure two aspects of teachers' assessment knowledge: (1) their knowledge, skills, and abilities in assessment practice, and (2) the fundamental principles and concepts of language assessment. This does not mean that we do not value the third domain (Level 3), but we consider it not so urgently needed by teachers in Singapore and not so critical to their day-to-day use of assessment in the classroom context.



3 Method

3.1 Design

In developing the Teacher Assessment Literacy Scale, two classic texts of educational measurement were consulted, namely Educational and Psychological Measurement and Evaluation (Hopkins 1998) and Measurement and Assessment in Teaching (Linn & Miller 2005). The first decision to be made was to identify and delimit the domains to be covered in the scale. After consulting the two classics, it was decided that four key domains needed to be represented in the new measure. Firstly, teachers need to develop an understanding of the nature and functions of assessment if they are to do their work reflectively and in an informed way. Such understanding enables them to know why they are doing what they do, or are expected to do, and for what purposes. Secondly, teachers need the practical skills to design and use a variety of item formats to meet instructional needs, which vary with the content and the students' abilities. Such skills may not be required when standardized tests are available for summative assessment, but they are critical in the present-day context, in which teachers are expected to craft their own tests to monitor student learning for formative assessment. Thirdly, once teachers have obtained test results, they need to be able to interpret these properly to inform further teaching and guide student learning. Obtaining test scores without being able to interpret them properly is analogous to the situation, depicted by Popham (2006), in which health professionals are unable to interpret patient charts. Finally, teachers need to be able to evaluate the quality of the test results, and this entails basic knowledge of statistics. This, unfortunately, is what many teachers try to avoid, mainly due to a lack of training. Such knowledge enables teachers to see assessment results in a proper light, knowing their functions as well as their limitations in terms of the measurement errors which are an inherent part of educational measurement.
Without an understanding of such concepts as reliability and validity, teachers tend to take assessment results too literally and may make unwarranted decisions (Soh 2011, 2016). Thus, we intended to construct the scale to measure teachers' skills and abilities as well as their understanding of principles and concepts in assessment, as shown in Figure 1. It is expected that this scale will provide a measure of language teachers' assessment literacy in Singapore and similar contexts. In the following part of this article, we describe the process of developing the Teacher Assessment Literacy Scale and seek to provide evidence for its validity.



Figure 1: Modelling Teacher’s Assessment Literacy

Against the background of the above considerations, it was decided that ten items be formulated for each domain as a sample representing the possible items of that domain. Four domains having been delimited, the whole scale thus comprises 40 items. It was further decided that the items should be four-option multiple-choice items, so as to ensure objectivity in scoring and to keep the testing time within a reasonable limit of about 30 minutes.

3.2 Trial Sample

The scale thus crafted was then administered to 323 Chinese Language teachers, 170 from primary schools and 153 from secondary schools and junior colleges. There was a female preponderance of 83%. Of the teachers, 52% had more than ten years of teaching experience. In terms of qualifications, 93% held a university degree and 95% had completed professional training. However, only 48% reported that they had elected to study assessment in their pre-service training, and 78% acknowledged that they felt the need for more training in assessment. The teachers attended various in-service courses at the Singapore Centre for Chinese Language from January to March 2015. Admittedly, they might not form a random sample of Chinese Language teachers in Singapore; hence, in this study, a convenience sample was used. However, at the time of the study, there were an estimated 3000 Chinese Language teachers in Singapore. According to Kenpro (2012), a sample size of 341 is needed to adequately represent a population of 3000, and the 323 teachers of this study come close to that expected sample size. Moreover, the participating teachers can be considered mature in the profession, with more than half of them having ten or more years of teaching experience, and the female preponderance is typical of the teaching profession in Singapore. Thus, bearing in mind some limitations in these regards, the participating Chinese Language teachers can be deemed sufficiently representative of Chinese Language teachers in Singapore.
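Kenpro's (2012) sample-size recommendation is consistent with the widely used Krejcie & Morgan (1970) formula; assuming that formula is the basis (our assumption, not stated in the source), the figure of 341 for a population of 3000 can be reproduced as follows:

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie & Morgan (1970) required sample size for a population N.
    chi2: chi-square value for 1 df at 95% confidence;
    P: assumed population proportion (0.5 maximizes the sample size);
    d: desired margin of error."""
    s = (chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))
    return math.ceil(s)

print(krejcie_morgan(3000))  # population of 3000 teachers
```

Note that the required sample grows only slowly with N: for very large populations the same formula levels off at 384, which is why samples of a few hundred suffice even for sizable teacher populations.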



3.3 Analysis

Three types of analysis were conducted in the present study: factor analysis, classical item analysis, and Rasch scaling. Confirmatory factor analysis was performed to examine whether the collected data support the proposed model (Figure 1) of assessment literacy with the four specified dimensions. Next, classical item analysis was performed to obtain item difficulty (p) and item discrimination (r). Item difficulty indicates the proportion of teachers who chose the keyed answer and thus responded correctly to the respective item. This has also been referred to as facility or the F-index, which, somewhat ironically, indicates the easiness of an item. Item discrimination is an indication of how well an item differentiates between teachers who have chosen the correct answer and those who have not. Statistically, it is the item-total correlation, which indicates the consistency between overall ability (in terms of total score) and the response to an item. Then, Rasch analysis was conducted to estimate item locations, which indicate item difficulty within the context of the whole set of items analyzed, a positive index indicating difficulty in answering correctly and vice versa; this is just the opposite of the classical F-index. The results of the Rasch analysis were correlated with the classical F-index for an indication of the consistency between the two analyses.

4 Results

4.1 Descriptive Statistics

Descriptive statistics were first calculated at the subscale level to show the performance of teachers in the four domains of assessment literacy. As Table 1 shows, the subscale means vary between 2.97 and 5.53, out of 10. The highest mean is for the first subscale (nature & function of assessment) while the lowest is for the fourth subscale (concepts of reliability, validity, etc.). Generally, the means show that teachers were able to answer correctly about half of the 30 questions in subscales 1 to 3, but only about three of the ten questions in subscale 4. If a criterion-referenced approach requiring 90 percent of the teachers to answer 90 percent of the questions correctly is applied, the results obtained are far from satisfactory:

Subscale                                  Number of items   M (SD)
Nature & function of assessment           10                5.53 (1.23)
Design & use of test items                10                4.87 (1.38)
Interpretation of test results            10                4.50 (1.58)
Concepts of reliability, validity etc.    10                2.97 (1.50)

Table 1: Descriptive Statistics and Reliability Estimates at the Subscale Level


4.2 Confirmatory Factor Analysis

Confirmatory factor analysis was run to verify the proposed structure of assessment literacy. The model tested indicated good fit, as shown in Table 2. The incremental fit index CFI (1.00) is greater than .95 while the absolute fit index RMSEA (.001) is less than .06 (Hu & Bentler 1999). The RMSEA 95% confidence interval is narrow, and χ²/df (.57) is less than 3.00 (Kline 2011):

χ²      df    χ²/df    CFI     RMSEA    RMSEA 95% CI
1.13    2     .57      1.00    .001     .000-.093

Table 2: Fit Indices of the CFA Model
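The χ²/df ratio can be recomputed directly from the tabled values, and the RMSEA point estimate follows the standard formula RMSEA = √(max(χ² − df, 0) / (df·(N − 1))). Since χ² is smaller than df here, the point estimate under this formula is zero, consistent with the near-zero value reported (software packages differ slightly in the exact variant used). A quick sketch, with helper names of our own choosing:

```python
import math

def chi2_df_ratio(chi2, df):
    """Normed chi-square: values below 3.00 indicate acceptable fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Standard RMSEA point estimate from the chi-square statistic,
    degrees of freedom, and sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values from Table 2; N = 323 teachers in the trial sample.
print(chi2_df_ratio(1.13, 2))
print(rmsea(1.13, 2, 323))
```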

Figure 2 presents the tested model with the estimated coefficients. The path coefficients of assessment literacy range from .21 to .49 (p < .05) and average .39, indicating that the latent variable is well defined by the four variables. However, of the four path coefficients, those for the first three subscales are sizable, varying from .41 to .49, whereas that for the fourth subscale, which deals with statistical and measurement concepts, is rather low at .21, indicating that the other three subscales are better measures of assessment literacy:

Figure 2: The Tested CFA Model of Assessment Literacy

4.3 Classical Item Analysis

Classical item analysis focuses on the test as a whole and on item indices (facility and discrimination) in a deterministic sense. The item indices thus obtained are sample-specific in that they will take different values when the items are trialed on a different sample. The classical item analysis has three focuses: item difficulty, item discrimination, and score reliability.



Item difficulty (p) is the proportion of teachers who chose the keyed option. In fact, item difficulty has also been referred to as item facility or easiness, since a larger proportion denotes more correct responses; hence, it is also called the F-index (Facility Index). Item discrimination (r) is the correlation between the correct response to an item and the total score for the test as a whole. This is, in fact, the item-total correlation, which indicates the extent to which an item is able to differentiate between high- and low-scoring teachers; hence, it is also called the D-index (Discrimination Index). These two item indices are shown in Tables 3 to 6 for the 40 items and the four subtests, respectively. The question of score reliability is discussed later in Section 5.1.
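Both indices can be computed from a matrix of scored responses. The sketch below, using an invented response matrix rather than the study's data, computes p as the proportion correct and r as the Pearson correlation between the item score and the total score:

```python
# Classical item analysis: facility (p) and item-total discrimination (r).
# The scored responses below are illustrative, not the study's data.

def item_facility(scores, item):
    """Proportion of respondents answering the item correctly (p)."""
    return sum(person[item] for person in scores) / len(scores)

def item_discrimination(scores, item):
    """Pearson correlation between item score and total test score (r)."""
    n = len(scores)
    x = [person[item] for person in scores]
    y = [sum(person) for person in scores]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

scores = [
    [1, 1, 0, 1], [1, 0, 0, 0], [1, 1, 1, 1],
    [0, 0, 0, 1], [1, 1, 0, 0], [1, 1, 1, 1],
]
print(item_facility(scores, 0))
print(round(item_discrimination(scores, 0), 2))
```

Note that this simple version correlates the item with the total score including the item itself, as described in the text; some analysts instead use the corrected item-total correlation, which excludes the item from the total.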

4.3.1 Subtest 1: Nature and Functions of Assessment

Table 3 presents the item indices for Subtest 1, which deals with understanding the functions that assessment has in teaching and learning, and concepts related to the norm- and criterion-referenced interpretation of test scores. The p's vary from a very low 0.07 to a very high 0.94, with a mean of 0.55 (median 0.56). In short, the items vary widely in difficulty, although the mean suggests that this subtest is moderately difficult as a whole. At the same time, the r's vary from 0.13 to 0.33, with a mean of 0.23 (median 0.22). These figures indicate that the items have a low though acceptable discriminatory power:

Items      p      r
Item 1     0.93   0.22
Item 2     0.85   0.32
Item 3     0.10   0.17
Item 4     0.27   0.21
Item 5     0.93   0.30
Item 6     0.94   0.33
Item 7     0.92   0.29
Item 8     0.27   0.19
Item 9     0.26   0.14
Item 10    0.07   0.13
Mean       0.55   0.23
Median     0.56   0.22

Table 3: Item Indices for Subtest 1: Nature and Function of Assessment



4.3.2 Subtest 2: Design and Use of Test Items

The items of Subtest 2 deal with the understanding of the suitability of various item formats and their appropriate uses. The p's vary from a low 0.13 to a very high 0.96, with a mean of 0.49 (median 0.44). In short, these items vary widely in difficulty, although the mean suggests that this subtest is moderately difficult as a whole. At the same time, the r's vary from 0.11 to 0.30, with a mean of 0.21 (median 0.22). These results indicate that the items have a low though acceptable discriminatory power:

Items      p      r
Item 11    0.50   0.24
Item 12    0.76   0.28
Item 13    0.29   0.11
Item 14    0.33   0.20
Item 15    0.50   0.16
Item 16    0.13   0.19
Item 17    0.37   0.11
Item 18    0.87   0.26
Item 19    0.96   0.30
Item 20    0.16   0.23
Mean       0.49   0.21
Median     0.44   0.22

Table 4: Item Indices for Subtest 2: Design and Use of Test Items

4.3.3 Subtest 3: Interpretation of Test Results

The items of Subtest 3 pertain to knowledge of item indices and the meanings of test scores. The p's vary from a very low 0.03 to a high 0.78, with a mean of 0.45 (median 0.51). These figures indicate that the items vary widely in difficulty, although the mean suggests that this subtest is of moderate difficulty. At the same time, the r's vary from 0.05 to 0.47, with a mean of 0.24 (median 0.23). These results indicate that the subtest as a whole has acceptable discrimination:



Items      p      r
Item 21    0.14   0.05
Item 22    0.05   0.05
Item 23    0.39   0.23
Item 24    0.64   0.24
Item 25    0.75   0.18
Item 26    0.69   0.47
Item 27    0.43   0.22
Item 28    0.59   0.39
Item 29    0.03   0.10
Item 30    0.78   0.43
Mean       0.45   0.24
Median     0.51   0.23

Table 5: Item Indices for Subtest 3: Interpretation of Test Results

4.3.4 Subtest 4: Concepts of Reliability, Validity and Basic Statistics

Subtest 4 deals with abstract concepts of test score qualities and with the knowledge of simple statistics essential to understanding test results. The p's vary from a low 0.11 to a high 0.64, with a mean of 0.30 (median 0.25). These figures indicate that the items are difficult when compared with those of the other three subtests. The r's vary from 0.05 to 0.36, with a mean of 0.19 (median 0.17). These results indicate that the subtest as a whole has low discrimination:

Items      p      r
Item 31    0.13   0.11
Item 32    0.21   0.22
Item 33    0.49   0.26
Item 34    0.15   0.13
Item 35    0.11   0.05
Item 36    0.56   0.36
Item 37    0.29   0.13
Item 38    0.28   0.21
Item 39    0.64   0.30
Item 40    0.12   0.10
Mean       0.30   0.19
Median     0.25   0.17

Table 6: Item Indices for Subtest 4: Concepts of Reliability, Validity, and Basic Statistics



On average, the 40 items of the scale have an acceptable level of facility, which means that they are neither too easy nor too difficult for the teachers involved in this study. However, the items tend to have low discriminatory power. These findings could, at least partly, account for the discernible deficits in the assessment literacy of the teachers taking part in this study, whose overall mean was 18 out of the 40 items.

4.4 Rasch Analysis

The Rasch analysis estimates the difficulty of an item in terms of the probability that teachers of given levels of ability will pass (or fail) the item, thus locating the item on a continuum of difficulty (hence the term location). Rasch (1993) defines the estimate for each item as an outcome of the linear probabilistic interaction of a person's ability and the difficulty of a given item. The goodness of fit of an item to the measurement model is evaluated with reference to its Outfit MSQ and Infit MSQ. However, there are no hard-and-fast rules for evaluating the fit statistics. A range between 0.7 and 1.3 is recommended as the optimal goodness of fit for MCQ items (Wright & Linacre 1994). By this criterion, fit statistics lower than 0.7 suggest that items are over-fitting (too predictable), while those greater than 1.3 suggest that items are under-fitting (too unpredictable). Table 7 shows the Outfit and Infit MSQs of the 40 items in descending order of item difficulty. As can be seen there, item estimates vary from -3.664 to 3.245, with a mean of 0.000 (median 0.260). These values show that the items cover a wide range of difficulty, and the median indicates that the items as a set lie somewhat on the difficult side of the scale. Table 7 also shows that the Infit MSQs vary between 0.690 and 1.146 (with a mean of 0.965 and a median of 0.999) and that the Outfit MSQs vary between 0.859 and 1.068 (with a mean of 0.970 and a median of 0.978). These show that the item fit statistics all fall within the recommended 0.7-1.3 range; therefore, all 40 items of the scale fit the Rasch model well. In Table 7, the 40 items are classified into three groups of difficulty. The 'difficult' group comprises 11 items. These have facilities (p's) of less than 0.2, indicating that fewer than 20 percent of the teachers were able to answer them correctly. These items are separated from the rest by a natural gap in Rasch difficulties between 1.417 and 1.125. As the brief content shows, most of these difficult items deal with quantitative aspects of test items (Items 21, 22, 29, 34, 35, and 40). The other items deal with diagnosis, function, assessing written expression, and above-level assessment. Generally, answering these questions correctly requires specific training in assessment, which many of the teachers do not have, especially with regard to the items which are quantitative in nature.
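The study does not state which Rasch software produced these estimates; the sketch below merely illustrates the dichotomous Rasch model underlying them, in which the probability of a correct response is a logistic function of the gap between person ability θ and item difficulty b, together with the Wright & Linacre MSQ guideline:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the dichotomous Rasch model:
    person ability theta and item difficulty b on the same logit scale."""
    return math.exp(theta - b) / (1 + math.exp(theta - b))

def fits_guideline(msq, low=0.7, high=1.3):
    """Wright & Linacre's (1994) recommended MSQ range for MCQ items."""
    return low <= msq <= high

# A person whose ability equals the item's difficulty has a 50% chance:
print(rasch_p(0.0, 0.0))
# Item 29 (location 3.245) is very hard for a person of average ability;
# the resulting probability is close to its classical p of 0.03:
print(round(rasch_p(0.0, 3.245), 3))
```

This also shows why the sign of the Rasch location runs opposite to the classical F-index: a larger b pushes the success probability down, whereas a larger p means an easier item.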



Item   Brief Content                                     Item Difficulty   Outfit MSQ   Infit MSQ      p

Difficult Items
 29    Interpretation of a T-score                                 3.245        0.961       0.930   0.03
 22    Minimum passing rate for MCQ item                           2.690        1.083       0.965   0.05
 10    Diagnosing students' learning difficulty                    2.366        1.042       0.947   0.07
  3    Educational function of assessment                          1.968        0.908       0.949   0.10
 35    Relations between reliability and validity                  1.902        1.146       0.997   0.11
 40    Basic statistical concept: correlation                      1.809        1.003       0.985   0.12
 16    Assessing written expression                                1.693        0.906       0.954   0.13
 31    Above-level assessment                                      1.693        0.987       0.993   0.13
 21    Good MCQ F-index                                            1.639        1.142       1.017   0.14
 34    Checking reliability of marking                             1.561        1.034       0.983   0.15
 20    Disadvantage of essay writing                               1.417        0.907       0.952   0.16

Appropriate Items
 32    Below-level assessment                                      1.125        1.025       0.960   0.21
  9    Criterion-referenced testing                                0.825        1.131       1.015   0.26
  4    Direct function of assessment                               0.777        1.001       0.988   0.27
  8    Norm-referenced testing                                     0.777        0.984       1.002   0.27
 38    Basic statistical concept: central tendencies               0.698        0.997       0.993   0.28
 13    Language ability scrambled sentence                         0.682        1.122       1.044   0.29
 37    Basic statistical concept: skewed distribution              0.651        1.048       1.041   0.29
 14    Reading comprehension and validity                          0.489        1.015       1.009   0.33
 17    Weakness of objective items                                 0.321        1.064       1.068   0.37
 23    Options of MCQ item                                         0.199        1.007       1.008   0.39
 27    Concept of measurement error                                0.028        1.021       1.012   0.43
 33    Inter-rater reliability                                    -0.203        1.005       0.998   0.49
 15    Use of cloze procedures                                    -0.241        1.059       1.052   0.50
 11    Advantage of MCQ                                           -0.254        1.018       1.008   0.50
 36    Basic statistical concept: mode                            -0.485        0.931       0.942   0.56
 28    Nature of the T-score                                      -0.629        0.905       0.921   0.59
 24    D-index of MCQ item                                        -0.832        1.004       1.002   0.64
 39    Basic statistical concept: standard deviation              -0.832        0.969       0.970   0.64
 26    Interpretation of a test score                             -1.062        0.830       0.867   0.69
 25    Choice of topic to write                                   -1.398        1.035       1.013   0.75
 12    Language ability and MCQ                                   -1.443        0.930       0.973   0.76
 30    Difference between test scores                             -1.579        0.801       0.875   0.78

Easy Items
  2    Test results to help student                               -2.009        0.859       0.917   0.85
 18    Critical factor in written expression                      -2.251        0.879       0.939   0.87
  7    Most important of class tests                              -2.741        0.801       0.921   0.92
  1    Most important function of assessment                      -2.876        0.869       0.952   0.93
  5    Use of assessment results                                  -2.924        0.748       0.903   0.93
  6    Best things to help students make progress                 -3.142        0.690       0.880   0.94
 19    Sex, race, and SES biases                                  -3.664        0.738       0.859   0.96

Summary Statistics
       Minimum                                                    -3.664        0.690       0.859      -
       Maximum                                                     3.245        1.146       1.068      -
       Mean                                                        0.000        0.965       0.970      -
       Median                                                      0.260        0.999       0.978      -

Table 7: Item Estimates and Fit Statistics

At the other end, the 'easy' group comprises seven items which have facilities (p's) greater than .80, indicating that 80% or more of the teachers answered them correctly. They are separated from the rest by a natural gap in Rasch difficulties between -1.579 and -2.009. Their content shows that these items deal with concepts which can be gained through experience and are commonsensical in nature. In between the two extreme groups lie the 'appropriate' items: 22 items with facilities (p's) greater than .20 and less than .80, meaning that between 20% and 80% of the teachers chose the correct answers. Their Rasch difficulties span from 1.125 to -1.579. In terms of item content, only three of these are from Subtest 1 (Nature and Function), most of the other Subtest 1 items falling into the 'easy' group. There are six items from Subtest 2 (Design and Use of Test Items), seven items from Subtest 3 (Interpretation of Test Results), and six items from Subtest 4 (Reliability, Validity, and Basic Statistics). These figures clearly show where the deficits in assessment literacy among the teachers lie.
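The three-way grouping used in Table 7 follows directly from the classical facilities. A minimal sketch (the cut-offs .20 and .80 are those stated above; the item numbers and p-values are taken from Table 7):

```python
def classify_facility(p: float) -> str:
    """Group an item by its classical facility (proportion answering
    correctly), using the cut-offs applied in Table 7."""
    if p < 0.20:
        return "difficult"
    if p > 0.80:
        return "easy"
    return "appropriate"

# Facilities of a few Table 7 items (item number: p):
for item, p in {29: 0.03, 32: 0.21, 30: 0.78, 19: 0.96}.items():
    print(item, classify_facility(p))
```

Items 32 and 30 mark the borders of the 'appropriate' band, while items 29 and 19 fall into the 'difficult' and 'easy' groups respectively.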

4.5 Correlations between Classical Facilities and Rasch Estimates

A question that has often been asked is whether the two approaches to item analysis (the classical approach and the Rasch approach) yield comparable results and, if they differ, to what extent. It is therefore interesting to note that when the classical p's and the Rasch estimates of the 40 items were correlated, this resulted in |r| = 0.99. This very high correlation coefficient indicates that the two approaches to item calibration yielded highly similar results, which corroborates many recent studies (e.g. Fan 1998, Magno 2009, Prieto, Alonso & Lamarca 2003) that also report high correlations between classical item indices and


Rasch's method. For example, Fan (1998) used data from a large-scale evaluation of high school reading and mathematics in Texas, analyzed 20 samples of 1,000 students each, and obtained correlations between the item indices of the two approaches varying from |0.803| to |0.920|. These values suggest that the two sets of item estimates share between 65 and 85 percent of common variance. Figure 3 is a scatter plot of the two types of indices in the present study, and it shows a near-perfect (negative) correlation:

Figure 3: Scatter Plot for p’s (X-axis) and Locations (Y-axis)
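The comparison of the two calibrations amounts to a Pearson correlation between the classical facilities and the Rasch locations. A sketch using only a handful of (p, location) pairs taken from Table 7 already illustrates the strong negative relation; the full 40-item data yield |r| = 0.99 as reported above:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A handful of (facility, Rasch location) pairs from Table 7:
ps        = [0.03, 0.13, 0.28, 0.50, 0.78, 0.96]
locations = [3.245, 1.693, 0.698, -0.241, -1.579, -3.664]

r = pearson_r(ps, locations)
print(round(r, 2), round(r * r, 2))  # -0.98 0.97 (r^2 = shared variance)
```

Squaring r gives the proportion of shared variance, which is how Fan's range of |0.803| to |0.920| translates into the 65-85 percent mentioned above.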

5 Discussion

In the present study, the psychometric features and factorial validity of the newly developed Teachers' Assessment Literacy Scale have been investigated. In the following sections, we first discuss the issue of reliability and then build a validity argument based on the findings of the analyses.

5.1 Issue of Reliability

The conventional method of assessing score reliability is Cronbach's alpha coefficient, which indicates the degree of internal consistency among the items, on the assumption that the items are homogeneous. The 40 items of the scale are scored 1 (right) or 0 (wrong); therefore, the Kuder-Richardson Formula 20 (KR20), a special case of Cronbach's alpha for dichotomous items, was calculated. Table 8 below shows the KR20 reliabilities of the four sub-scales and the scale as a whole. The reliability coefficients vary from KR20=.18 to .40, with a median of 0.36. Moreover, for the scale as a whole and the total sample of combined primary and secondary teachers, Cronbach's internal consistency coefficient is α=0.471. These indices are generally low compared with the conventional expectation of a minimum of 0.7. This inevitably raises the question of trustworthiness.
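KR20 can be computed directly from a matrix of dichotomous responses. The sketch below implements the standard formula; the tiny response matrix is hypothetical and serves only to illustrate the computation:

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item responses.
    `responses` is a list of rows: one row per person, one column per item."""
    n_items = len(responses[0])
    n_persons = len(responses)
    # Sum of p*q over items, where p is the item facility and q = 1 - p.
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in responses) / n_persons
        sum_pq += p * (1.0 - p)
    # Variance of the total scores (population formula).
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n_persons
    var_t = sum((t - mean_t) ** 2 for t in totals) / n_persons
    return (n_items / (n_items - 1)) * (1.0 - sum_pq / var_t)

# Hypothetical mini-matrix: four persons, three perfectly consistent items.
data = [[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(kr20(data))  # 1.0 for perfectly consistent items
```

Heterogeneous item content, as in the present 40-item scale, drives `sum_pq` up relative to the total-score variance and thus pulls KR20 down, which is the mechanism behind the low coefficients in Table 8.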

Measure                                    Primary   Secondary

1. Nature and functions of assessment          .28         .37
2. Design and use of test items                .34         .38
3. Interpretation of test results              .36         .18
4. Reliability, validity, etc.                 .36         .40
Whole test                                     .34         .58

Table 8: KR20 Reliabilities

However, there have been criticisms of Cronbach's alpha as a measure of item homogeneity or unidimensionality (Bademci 2006). One condition which might have led to the low reliabilities shown in Table 8 is the heterogeneous item content of the 40 items, since they cover many different aspects of educational measurement, some qualitative and others quantitative in nature, even within a particular sub-test. This being the case, the conventional reliability measures (i.e. Cronbach's alpha and its equivalent KR20), which assume item homogeneity, are unsuitable for the purpose of the present study. Group homogeneity is another factor contributing to low score reliability. Pike & Hudson (1998: 149) discussed the limitations of using Cronbach's alpha (and its equivalent KR20) to estimate reliability when a sample responds homogeneously on the measured construct, and described the risk of wrongly concluding that a new instrument has poor reliability. As a cushion against such a situation, they recommended calculating the Relative Alpha from the standard error of measurement (SEM), which itself involves the reliability: SEM = SD * SQRT(1 – reliability). Pike & Hudson's Relative Alpha can take a value between 0.0 and 1.0, indicating the extent to which the scores can be trusted, and thus offers an alternative way of evaluating score reliability. Their formula is:



Relative Alpha = 1 – SEM² / (Range/6)²

In this formula, SEM is the usual indicator of the lack of trustworthiness of the obtained scores and, under normal circumstances, the scores for a scale will theoretically span six standard deviations. Thus, the second term on the right is the proportion of test variance that is unreliable, and Relative Alpha indicates the proportion of test variance off-set for its unreliable portion, i.e. the proportion that is trustworthy. In the present study, the maximum possible score is 40, so the theoretically possible standard deviation is 6.67 (= 40/6). However, the actual data yield standard deviations of 4.24 (primary) and 4.66 (secondary) for the scale as a whole, which are 0.64 and 0.70, respectively, of the theoretical standard deviation. In other words, the two groups are more homogeneous than theoretically expected. Table 9 shows the Relative Alphas for the primary and secondary groups. The statistics suggest that much of the test variance has been captured by the 40-item scale, and the scores can therefore be trusted:

Measure                                            Primary   Secondary

1. Nature and functions of assessment                  .98         .97
2. Design and use of test items                        .95         .95
3. Interpretation of test results                      .94         .93
4. Reliability, validity, and basic statistics         .95         .95
Whole test                                             .97         .97

Table 9: Relative Alpha Coefficients
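The two formulas, SEM and Relative Alpha, are easily combined in code. The sketch below implements them as stated; the input values are illustrative (a 40-point range and the primary group's whole-test SD of 4.24 with its KR20 of .34) and are not intended to reproduce the entries of Table 9, which the authors computed from their full data:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def relative_alpha(sd: float, reliability: float, score_range: float) -> float:
    """Pike & Hudson's Relative Alpha: 1 - SEM^2 / (Range/6)^2."""
    s = sem(sd, reliability)
    return 1.0 - s ** 2 / (score_range / 6.0) ** 2

# Illustrative values: 40-point range (theoretical SD = 40/6 ~ 6.67),
# observed SD = 4.24, conventional reliability = .34.
print(round(sem(4.24, 0.34), 2))                 # 3.44
print(round(relative_alpha(4.24, 0.34, 40), 2))  # 0.73
```

Note that as the conventional reliability approaches 1, SEM shrinks to 0 and Relative Alpha approaches 1; the statistic benchmarks the SEM against the theoretical range-based spread rather than the observed, possibly restricted, SD.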

5.2 Validity Evidence

Regarding content-referenced evidence, the scale was developed based on a model resulting from an analysis of empirical data and a survey of the relevant literature (Fulcher 2012). In addition, a content analysis was conducted on the scale for better content representation. The Rasch analysis provides further content-referenced evidence. Substantive validity evidence refers to the relationship between the construct and the data observed (Wolfe & Smith 2007a, b). In the current study, the Rasch analysis, the Infit and Outfit statistics, and the confirmatory factor analysis provide substantive evidence. Moreover, the alignment of the analysis based on classical test theory with the Rasch analysis supports the validity argument further. Apart from direct validity evidence, information beyond the test scores is needed to verify score validity. Ideally, the criterion scores for validity would come from a test of the application of measurement concepts and techniques, but such information is not available within the results obtained, although some of the 40 items of the scale are of this nature (for instance, the items on statistical concepts). However, indirect evidence of score validity is provided by the teachers' responses to the open-ended question asking for comments and suggestions with regard to educational assessment. For this question, the teachers made 36 responses. Most of the responses reflect the teachers' realization that assessment plays an important role in their teaching and that specialized knowledge is needed for it. Examples of such responses are shown below:

(1) What is taught and what is assessed should be consistent.
(2) Teachers need to have knowledge of educational measurement.
(3) Teachers need to popularize knowledge of assessment among the school leaders.
(4) Teachers hope to gain knowledge of educational measurement to assess with in-depth understanding.
(5) Without any knowledge of educational measurement, data analysis of results conducted in the school is superficial.
(6) Teachers think that knowledge of education management is very much needed!
(7) The survey will help to improve teaching.

The second type of responses (i.e. 8, 9, 10, and 11) reflects the difficulty that teachers had in understanding items which involve technical concepts and terminology. Such responses are to be expected in view of the lack of more formal and intensive training in educational assessment. Examples of such responses are shown below:

(8) Teachers are not familiar with the technical terms.
(9) Some teachers do not understand technical terms.
(10) Some teachers don't understand some of the questions.
(11) They do not know many mathematical terms.

The third type of responses (i.e. 12 and 13) reflects the teachers' need to be convinced that assessment training is necessary for them to use assessment results properly as part of their instruction. Examples of such responses are shown below:

(12) They want to know if assessment can really raise students' achievement and attitude, and if it will add to the teachers' work and be helpful to students.
(13) They wonder if data help in formative assessment.

These responses reveal and re-affirm the teachers' test-taking attitudes when responding to the scale; their seriousness about the scale is clearly evident. The second type of responses (i.e. 8, 9, 10, and 11) corroborates the finding that the teachers lack relevant specific training in educational assessment and hence found the technical terms and concepts unfamiliar. These findings truly reflect their situation and lack of knowledge. The third type of responses indicates the reservations and inquisitiveness of some of the teachers; this indirectly suggests that they have yet to be convinced that they need more training in educational measurement. Thus, when read in context, these responses provide indirect evidence of the validity of the scores.

6 Conclusions

By way of summary, this article presents preliminary evidence of the psychometric quality and the content-referenced and substantive validity of the newly developed scale. As pointed out by Popham (2006), there is a similarity between the healthcare and teaching professions in that practitioners need to be able to properly read information about the people they serve as a prerequisite to what they intend and need to do. The importance of teachers' assessment literacy can therefore not be over-emphasized, and there is a need for an instrument that can help gauge this crucial understanding and these skills of teachers. However, interest in this regard has a rather short history, and there are fewer than a handful of such measurement tools at our disposal at the moment. The new scale reported here is an attempt to fill this vacuum. It covers essential conceptual skills of educational measurement which teachers need to know if they are to perform this aspect of their profession adequately. The new scale is found to be on the 'difficult' side, partly due to a lack of relevant training among the teachers who provided the data. However, it is encouraging that its items have also been found to fit the measurement model reasonably well. What needs to be done from here on is to apply the scale to larger and more representative samples of teachers in varied contexts and subjects for its consolidation. In short, the current study is the alpha, far from being the omega. In addition, it should be pointed out that this study has several limitations. Most importantly, the homogeneity of the teachers (i.e. they were all Chinese Language teachers in Singapore) might detract from the validity of the items and, hence, of the scale as a whole. Further research should involve teachers of different subjects and geographical backgrounds. With more studies conducted along these lines, teachers' assessment literacy can be better measured and investigated. In order to facilitate further research, the scale is available from the authors on request.

References

American Federation of Teachers, National Council on Measurement in Education & National Education Association (1990). The Standards for Competence in the Educational Assessment of Students. (http://www.unl.edu/buros/article3.html; 22-07-2003).

Bademci, V. (2006). Cronbach's Alpha Is not a Measure of Unidimensionality or Homogeneity. Paper Presented at the Conference "Paradigm Shift: Tests Are not Reliable". Ankara: Gazi University.

DeLuca, C., D. LaPointe-McEwan & U. Luhanga (2016). Teacher Assessment Literacy: A Review of International Standards and Measures. In: Educational Assessment, Evaluation and Accountability 28 (2016) 3, 251-272.

Fan, X. (1998). Item Response Theory and Classical Test Theory: An Empirical Comparison of their Item/Person Statistics. In: Educational and Psychological Measurement 58 (1998) 3, 357-373.

Fulcher, G. (2012). Assessment Literacy for the Language Classroom. In: Language Assessment Quarterly 9 (2012), 113-132.

Gipps, C. (1994). Beyond Testing: Towards a Theory of Educational Assessment. London: Falmer Press.

Gotch, C. M. & B. F. French (2014). A Systematic Review of Assessment Literacy Measures. In: Educational Measurement: Issues and Practice 33 (2014) 2, 14-18.

Hopkins, K. D. (1998). Educational and Psychological Measurement and Evaluation. Needham Heights, MA: Allyn & Bacon.

Kenpro Project Organization (2012). Sample Size Determined Using Krejcie and Morgan Table. (http://www.kenpro.org/sample-size-determination-using-krejcie-and-morgan-table; 07-07-2017).

Linn, R. L. & M. D. Miller (2005). Measurement and Assessment in Teaching. 9th ed. Upper Saddle River, NJ: Pearson, Merrill, Prentice Hall.

Magno, C. (2009). Demonstrating the Difference between Classical Test Theory and Item Response Theory Using Derived Test Data. In: The International Journal of Educational and Psychological Assessment 1 (2009) 1, 1-11.

Malone, M. (2008). Training in Language Assessment. In: Shohamy, E. & N. H. Hornberger (Eds.) (2008). Encyclopedia of Language and Education. Vol. 7: Language Testing and Assessment. New York, NY: Springer, 273-284.

Mertler, C. A. (2003). Preservice Versus Inservice Teachers' Assessment Literacy: Does Classroom Experience Make a Difference? Paper Presented at the Annual Meeting of the Mid-Western Educational Research Association. Columbus, OH (Oct. 15-18, 2003).

Mertler, C. A. & C. Campbell (2005). Measuring Teachers' Knowledge and Application of Classroom Assessment Concepts: Development of the Assessment Literacy Inventory. Paper Presented at the Annual Meeting of the American Educational Research Association. Montréal, Quebec, Canada (April 11-15, 2005).

Pike, C. K. & W. W. Hudson (1998). Reliability and Measurement Error in the Presence of Homogeneity. In: Journal of Social Service Research 24 (1998) 1/2, 149-163.

Plake, B. & J. C. Impara (1993). Assessment Competencies of Teachers: A National Survey. In: Educational Measurement: Issues and Practice 12 (1993) 4, 10-12.

Popham, W. J. (2006). All about Accountability / Needed: A Dose of Assessment Literacy. In: Educational Leadership 63 (2006) 6, 84-85. (http://www.ascd.org/publications/educational-leadership/mar06/vol63/num06/[email protected]; 07-07-2017).

Popham, W. J. (2009). Assessment Literacy for Teachers: Faddish or Fundamental? In: Theory into Practice 48 (2009), 4-11. DOI: 10.1080/00405840802577536.

Prieto, L., J. Alonso & R. Lamarca (2003). Classical Test Theory Versus Rasch Analysis for Quality of Life Questionnaire Reduction. In: Health and Quality of Life Outcomes 27 (2003) 1. DOI: 10.1186/1477-7525-1-27.

Rasch, G. (1993). Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: MESA Press.

Soh, K. (2016). Understanding Test and Exam Results Statistically: An Essential Guide for Teachers and School Leaders. New York: Springer.

Soh, K. C. (2011). Above-level Testing: Assumed Benefits and Consequences. Academy of Singapore Teachers. In: i.d.e.a2 2 (2011), 3-7. (http://www.academyofsingaporeteachers.moe.gov.sg/ast/slot/u2597/images/IDEA2/IDEA2_Issue2.pdf; 07-07-2017).

Spolsky, B. (1978). Introduction: Linguists and Language Testers. In: Spolsky, B. (Ed.) (1978). Approaches to Language Testing: Advances in Language Testing Series: 2. Arlington, VA: Center for Applied Linguistics, 5-10.

Spolsky, B. (1995). Measured Words: The Development of Objective Language Testing. Oxford: Oxford University Press.

Stiggins, R. J. (1991). Assessment Literacy. In: Phi Delta Kappan 72 (1991) 7, 534-539.

Wolf, D. et al. (1991). To Use Their Minds Well: Investigating New Forms of Student Assessment. In: Review of Research in Education 17 (1991), 31-125.

Wolfe, E. W. & E. V. Smith Jr. (2007a). Instrument Development Tools and Activities for Measure Validation Using Rasch Models: Part I - Instrument Development Tools. In: Journal of Applied Measurement 8 (2007), 97-123.

Wolfe, E. W. & E. V. Smith Jr. (2007b). Instrument Development Tools and Activities for Measure Validation Using Rasch Models: Part II - Validation Activities. In: Journal of Applied Measurement 8 (2007), 204-233.


Wright, B. D. & J. M. Linacre (1994). Reasonable Mean-square Fit Values. In: Rasch Measurement Transactions 8 (1994) 3, 370. (http://www.rasch.org/rmt/rmt83b.htm; 07-07-2017).

Authors:

Dr Kay Cheng Soh
Research Consultant
Nanyang Technological University Singapore
Centre for Chinese Language
287 Ghim Moh Road
Singapore 279623
E-mail: [email protected]

Dr Limei Zhang
Lecturer
Nanyang Technological University Singapore
Centre for Chinese Language
287 Ghim Moh Road
Singapore 279623
E-mail: [email protected]



II. Book Review



Inez De Florio: Effective Teaching and Successful Learning. Bridging the Gap between Research and Practice. New York et al.: Cambridge University Press 2016 (XII + 234 pp.) (ISBN 978-1-107-53290-8).

Effective Teaching and Successful Learning. Bridging the Gap between Research and Practice is a highly recommendable publication for researchers as well as prospective and in-service teachers all over the world, particularly in English-speaking countries such as the U.S., the UK, and Australia. The overall aim of this book is to enable teachers as well as other educational professionals to improve their daily practice, leading to more successful learning for all students. In a succinct introduction, the main features and types of educational research, especially newer findings of evidence-based education, are explained in a reader-friendly way. On this basis, the author provides a research- and value-based approach to teaching and learning that takes the personality of teachers and students as well as the particular learning contexts into account. Learners' needs and interests are the primary focus of the research-based Model of Effective Teaching (MET), which is described and exemplified in detail. While the number of teaching guides and connected lesson plans is growing rapidly, their quality is as diverse as their formats. Whether they are helpful for the teaching profession in general may also depend on the respective cultural and institutional contexts of different countries, but most of these publications share the style and scope of cookery books in one way or another (Lemov 2010). At the same time, instructors are facing increasing demands in terms of workload, heterogeneous classes and educational concerns, so that a majority of them will find it difficult to stay abreast of the scientific research that could make their teaching more effective and their students' learning more successful.
Even the globally received findings of Hattie's (2009) meta- and mega-analyses, despite all their merits in making teaching effects more measurable and thus more accountable, cannot resolve the growing dilemma for instructors of coping with their day-to-day teaching while incorporating even the more recent results of educational and neurobiological studies, never mind living up to Hattie's proposal to view learning through their students' eyes. It is the present publication by the German researcher Inez De Florio which precisely bridges "the Gap between Research and Practice", as the subtitle of her book promises, and, as will be shown here, it fully lives up to this challenge. In recent years, De Florio has engaged in qualitative and quantitative empirical research on questions of educational psychology, widening her research interest from studies into (foreign) language teaching and learning to substantial questions of all subject matters. Her book, which can be read as a practical teacher manual without recipes, not only covers qualitative and quantitative research on educational strategies and interventions, particularly experimental studies in Randomized Control Trials (RCTs) and the impact of global meta- and mega-analyses as made popular by John Hattie. Furthermore, and probably more importantly for instructors in-between theory and practice, Effective Teaching links lesson plan design, first authored by Madeline Cheek Hunter (1976), to the concept of direct instruction, thus juxtaposing conventional teaching methods with forms of interactive whole-class teaching. Concretising the thirty steps of her Model of Effective Teaching (MET), De Florio is able to flesh out older models (Marzano 1998) in a way that is both very helpful and digestible for any lesson planning and implementation.
The real need for a teacher-friendly book bridging research and practice is met by the author's ability to integrate evidence-based research and its practical implications in a succinct and comprehensible manner.

De Florio draws on relevant examples of scientific research in education and, at the same time, shows the practical consequences for effective teaching and successful learning. In this way, teachers are enabled to make informed decisions on the basis of research and methodology and to compare them with their own experience. Teachers are actively invited to reflect on traditional teaching and their own instruction routines. In the first part of her book (Chapters 1-5), the author lays the groundwork for science-oriented teaching and learning. Referring to eminent scholars and educationalists (Chapter 1) like Piaget (pp. 12), Vygotsky (pp. 16) and Bruner (pp. 19), practitioners are enticed to take a closer look at three foremost pioneers of educational research and to discuss whether their findings are still relevant today. In an introductory and fictional "conference talk" (9-10), questions of teaching habits are connected with these scholars and enriched by concepts of evidence-based research, pointing out newer strategies like reciprocal teaching or concept mapping. In this way, existing vague ideas about quantitative and qualitative research can be addressed and a systematic approach to science and research established. Piaget's contribution to developmental psychology is presented in some detail (12), although his genetic stages of cognitive development are refuted (15). The long-forgotten Russian psychologist and educationalist Vygotsky is, among his other achievements, remembered for his Zone of Proximal Development (ZPD) (18-19), and Bruner's research into cognition and his idea of a spiral curriculum lead to reflections on his considerations about classroom teaching and learning, such as empowering students to get a sense of the structure of deeper learning, and on the consequences for curriculum design (19).
In this context, scaffolding is considered a means to accelerate learning, being interchangeable with Vygotsky's ZPD (21). In another, again fictional, dialogue, John Dewey's main ideas are presented (Chapter 2, 41), returning to his learning by doing as the recurrent hallmark of project work, referred to again later in the teacher guide when problem- / project-based approaches are combined with forms of collaborative learning. In this second chapter, other ways of gaining scientific knowledge are discussed to show the full range of different types of scientific research on education (pp. 29 ff), including the character of theories, hypotheses and models (pp. 28), a closer look at research design and methodology (pp. 22 ff), the findings of psychometrics (pp. 35 ff), and the role of experiments (RCTs) (pp. 37), quasi-experiments (40) and correlation studies (41). The benefits, and sometimes also the shortcomings, of evidence-based research on education are presented in the three subsequent Chapters 3-5 (45-93), providing a succinct overview of relevant approaches and enabling practitioners to judge for themselves whether they can apply those results and findings to their school curriculum, teaching routines and professional experiences, such as the importance of class sizes or the measurement of interventions and their impact on learning processes. This is done by drawing on an impressive variety of sources, from medical evidence through to the potentials and pitfalls of the aforementioned RCTs (37-41), further including practical surveys like the Tennessee Class Size Project (52 ff), and eventually leading to the globally received meta- and mega-studies by researchers like John Hattie (pp. 80). Whether the parallels drawn between evidence-based medicine and evidence-based education are completely convincing remains for the individual reader to decide, but they provide interesting food for thought, as does the entire book indeed.
It remains a refreshing exercise to then be able to study in some detail what De Florio phrases as "shortcomings of [Hattie's] visible learning" (84-87), in that this discussion seems to be an ongoing event both in staff rooms and in teaching institutions overall, with supporters and opponents of Hattie's mantras, such as teaching to DIE for (diagnose-intervention-evaluation) (pp. 202 ff) or know thy impact (pp. 83), distributed almost equally. Whether Hattie can claim to have found the 'holy grail' of teaching and visible learning or whether this was just a marketing ploy of his publishers, as he has been heard to argue himself, this teacher guide puts some of his findings into perspective, notwithstanding the fact that newer research into feedback has supported Hattie's basic story that feedback, in its reciprocal and formative variety, is able to close the gap between where students are and where educators want them to be. Feedback occurs too little and too infrequently at our schools and needs to be much more differentiated, as De Florio points out: given by teachers to students, by students to students (peer feedback), and also by students to teachers. In conclusion, it can be said that teachers need to know what empirical research is all about and what the relevant premises entail in order to be able to evaluate research findings. Already in the structure of the individual chapters, the dialectics between theory and practice are expertly demonstrated, superseding most publications on similar topics, in which either educational research is presented mainly for academic interests or teaching models with little or no back-up from empirical research induce teachers to implement strategies that cannot be verified on scientific grounds. In the second part, De Florio describes classroom practice on the basis of her research-oriented teaching model. As mentioned above, it is the MET that links evidence-based theories of teaching and learning to classroom practice in thirty steps (Chapters 6-11; 94-214).
To really honour the outstanding merits of the MET, it is important to note that it comes less as a "teacher's guide" than as an invitation to (re)consider teaching steps and classroom interventions in the light of thirty steps spanning the planning, preparation, implementation and evaluation of learning processes in a particular teaching context; this context needs to be considered by instructors before the respective steps can be applied, extended, partly omitted, or enriched by their own individual practice. As it is impossible even to try to apply all of the steps, the MET invites teachers to open their minds to what else might be advisable and possible in their particular classrooms, without prescribing, appraising or validating individual steps. This selection has to be made by each instructor and will, in the process, already augment his or her teaching outcomes in the aforementioned sense. Once practitioners have familiarized themselves with the 30 steps of the MET and selected those strategies and interventions appropriate for their own teaching contexts, however, they might want to go back to the foundations of educational research leading to the assumptions and directives of the MET. They can also go forward in this teacher guide, beyond the concise MET presentation (Chapter 6; 110-113), where research evidence and teacher expertise are brought together. The following chapters unfold the MET by focusing on planning and starting a lesson (Chapter 7; 118-136), explaining, presenting and modelling new content (Chapter 8; 137-156), and conceptualizing guided and independent practice, gradually withdrawing teachers' guidance and supervision (as in the overall strategy of scaffolding, where this is called 'fading'), aiming at reinforcement and transfer of knowledge or skills and bringing the lesson to an appropriate conclusion (Chapter 9; 157-174).
Cooperative and problem-based forms of learning are at the centre of Chapter 10 (173-197), which follows Dewey's concept of learning by doing (Dewey having been introduced earlier, in Chapter 2 (27-44), as one of the great educational thinkers and practical project planners) and underlines the importance of group cohesion as opposed to competition or individualistic learning. Although critical towards most of John Hattie's findings and statistical procedures, De Florio follows his belief in the overall importance of reciprocal and informative feedback, outlined in Chapter 11 (198-214), and consequently draws on Hattie's and Timperley's Feedback Model (202-204) as in the "Flow of the Lesson" (202). The Concluding Remarks (215-219) quite intentionally contain more questions than answers, but postulate that "standards need more evidence" (215) and urge researchers and practitioners to debate further to what extent standards are in accordance with the results of evidence-based education at all. As each chapter ends in a "review-reflect-practice" section, the teacher guide creates an additional point of direct access for practitioners, beyond the practical aspects of the MET that stand as a value in themselves, as shown above. These sections can serve not only as a guide for further research and experience, but also enable readers to gain a straightforward entry into the chapter topics, as the following examples show:

• Messages of Piaget, Vygotsky and Bruner are connected with today's classroom issues (25-26);

• Research design and methods can be discussed in the light of Dewey's impact and the basic cultural categories of language systems (pp. 176 ff);

• A summary of listed RCTs is offered for discussion of their value (40);

• The question of whether meta-analyses can improve teaching or learning practice is correlated to Marzano's list of instructional strategies (40);

• Shortcomings of Hattie's studies are turned around productively by the invitation to transform his "Personal Health Check" (90) into students' questionnaires (92);

• Lesson plan design and direct instruction are discussed in the light of learning theories and the SOLO (Structure of the Observed Learning Outcome) taxonomy;

• Introductions of new learning content are tried out in the classroom as "hooks" – a term coined in Hattie's model of direct instruction;

• Steps of the MET are put to the test with textbooks or teaching units, with a focus on assertive questioning;

• Readers are invited to analyze textbooks with the aim of detecting differences between exercises, tasks and learning activities, as well as between guided and independent practice;

• Information on cooperative learning is extended; and

• Feedback is focused upon as being reciprocal, formative and / or peer-conducted.

The "practice chapters" (6 and 7-11) in particular make this book "unputdownable" for educators in all subjects, especially for teachers in junior and senior high school. In my own professional experience, I have rarely seen a more readable resource book on teaching processes and their underlying theoretical foundations. It is with great pleasure that I followed the "review-reflect-practice" sections, which empower one's own learning curve and all but guarantee very attentive and, indeed, effective reading. Another very practice-oriented feature is the ongoing summaries and definitions of scientific terms and research findings – e.g. the Zone of Proximal Development, scaffolding, educational science, descriptive and explanatory research, theories, hypotheses and scientific models, research design and methodology, experiments, and randomized controlled trials (RCTs) – to name but a few.

In a nutshell, the MET passages (102-185) are of crucial importance and play a pivotal role in the teacher guide. They are "intended as a scaffold for practitioners" (5), based on experimental research (Hattie 2009 / 2012, Marzano 1998, and Wellenreuther 2004) and comparable to models of direct instruction. The MET, however, differs from other planning models turned into lesson plans in that it is meant to "help teachers to question teaching traditions and personal habits so that they can make informed decisions to the benefit of their students" (5). In order to prepare teachers for these "informed decisions", the MET is embedded in the foundations of scientific methods, following the principle of using research to improve practice.

Accordingly, the readership for this teacher guide stretches across a wide field, from students and student teachers to practitioners and teacher trainers. It appears especially useful for undergraduate and graduate courses as well as for teacher seminars and in-service teachers. An informed public with a special interest in educational research and in current discussions about teaching standards and evaluation will find the features of scientific research of great value, whereas the chapters on the Model of Effective Teaching (MET; Chapters 6-11) will be useful for direct implementation in the classroom. Effective Teaching and Successful Learning should most certainly be available to the teaching community at large, whose learning from the book will be effective and whose teaching will be all the more successful.
All in all, De Florio's book more than lives up to Thomas Huxley's verdict – quoted at the book's beginning – that "science is simply common sense at its very best", and it is an apt teacher guide that avoids the fallacies of teaching recipes detached from essential aims and objectives, the initiation of competencies, and educational values. It is highly recommended for every teacher's bookshelf.

References

Engelmann, Siegfried & Carnine, Douglas (1982). Theory of instruction: Principles and applications. New York: Irvington Publishers.

Hattie, John (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London etc.: Routledge.

Hattie, John (2012). Visible learning for teachers: Maximising impact on learning. London etc.: Routledge.

Hunter, Madeline Cheek (1976). Improve Instruction. El Segundo, CA: TIP Publications.

Lemov, Doug (2010). Teach like a champion: 49 techniques that put your students on the path to college. San Francisco, CA: Jossey-Bass.

Marzano, Robert J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-Continent Regional Educational Lab.


Reviewer:
Dr. Bernd Klewitz
University of Jena
Ernst-Abbe-Platz 8
07743 Jena
Germany
E-mail: [email protected]


Guidelines for Contributors

Please send your manuscripts to the editor at: [email protected]

Manuscripts can be written in English, German, French, Spanish, or Italian. In view of academic globalisation, English articles are especially welcome. Every article should come with an abstract of around 10 lines. English articles should be accompanied by an abstract in one of the other languages mentioned.

• Length of the articles: 10 to 25 pages
• Font style: Arial, size 12 pt
• Text spacing: at least 1.3 pt
• Paragraph: no indent (Setting: Paragraph - Spacing - Auto)
• Citations: 10 pt (indented), number as superscript
• Headings:
  ◦ First level (1., 2., 3., etc.): 14 pt, bold-faced
  ◦ Second level (1.1, 2.1, etc.) and lower (1.1.1, 2.1.1, etc.): 12 pt, bold-faced
• Tables and figures:
  ◦ Identify them by means of a caption
  ◦ Put a reference into the text

Words and expressions taken from languages other than that of the article should be put in italics.

Referencing: For referencing, please generally follow the Harvard Style.

Impressum

Editor (Herausgeber): Prof. Dr. phil. Thomas Tinnefeld
Office address (Dienstanschrift):
Hochschule für Technik und Wirtschaft (HTW) des Saarlandes
Campus Rotenbühl
Fakultät für Wirtschaftswissenschaften
W3-Professur für Angewandte Sprachen
Waldhausweg 14
66123 Saarbrücken
E-mail: [email protected]

Editorial team (Redaktion): Academic Advisory Board (cf. Editorial Advisory Board, inside front cover)
E-mail: [email protected]
Internet: http://sites.google.com/site/linguisticsandlanguageteaching/
Concept, cover design and layout: Thomas Tinnefeld

© JLLT 2017
ISSN 2190-4677
All rights reserved.


Articles

Randall Gess (Ottawa, Canada): Using Corpus Data in the Development of Second Language Oral Communicative Competence

Siaw-Fong Chung (Taipei, Taiwan (R.O.C.)): A Corpus-Based Approach to Distinguishing the Near-Synonyms Listen and Hear

Jennifer Wagner (Clio (MI), USA): A Frequency Analysis of Vocabulary Words in University Textbooks of French

Norman Fewell & George MacLean (both Okinawa, Japan): Transforming Can-Do Frameworks in the L2 Classroom for Communication and Feedback

Kay Cheng Soh & Limei Zhang (both Singapore): The Development and Validation of a Teacher Assessment Literacy Scale: A Trial Report

ISSN 2190-4677

€ 10,-
