
This is a report on the academic conference "Language Proficiency Testing in the Less Commonly Taught Languages", held on 17-18 ... B.E. 2555 (2012). Five topics were presented:

1. Justifying the Use of Language Assessment

2. Language Test Development: the Secret of Success

3. Testing Vocabulary Size in an Uncommonly Taught Language

4. Testing Listening Comprehension

5. Reading Activities in a Face-to-Face Interaction Part of the Brazilian Proficiency Certificate in Portuguese as a Foreign Language (Celpe-Bras)

Topic 1: Justifying the Use of Language Assessment
by Professor Lyle F. Bachman, University of California, Los Angeles

The following topics are discussed in his presentation:

* Genesis of our approach
* Limitations of other approaches
* Basic premises of our approach
* Use of language assessments
* Accountability
* Assessment use argument (AUA)
* Qualities of claims in an AUA

In sum, Professor Bachman emphasized that test developers and test users (decision makers) must take into account stakeholders' uses of their tests.

He introduced a new approach to language assessment called the Assessment Use Argument (AUA). It consists of four main parts:

* Claims: statements about our intended interpretations and uses of test performance that include an outcome and one or more qualities claimed for the outcome
* Warrants: statements justifying the claims
* Data: information on which the claims are based
* Backing: the evidence that we need to collect to support the claims and warrants in the AUA

Bachman concluded that the AUA can provide an explicit rationale and conceptual framework for justifying the inferential links between assessment performance and assessment use. In other words, it provides the conceptual framework for linking our intended consequences and decisions to the test taker's performance. An Assessment Use Argument also provides a guide for designing and developing language assessments, and it guides the collection of backing (evidence) in support of the warrants and claims of the assessment use argument.
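As a concrete, purely illustrative way of seeing how the four parts fit together, the AUA can be written down as a simple record. This is a sketch only; the field contents below are invented examples, not Bachman's:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A statement about an intended interpretation or use of test performance."""
    outcome: str
    qualities: List[str] = field(default_factory=list)

@dataclass
class AssessmentUseArgument:
    """The four parts of an AUA, modeled as a simple record."""
    claims: List[Claim]    # intended interpretations and uses of performance
    warrants: List[str]    # statements justifying the claims
    data: List[str]        # information on which the claims are based
    backing: List[str]     # evidence collected to support claims and warrants

# Invented example content, for illustration only
aua = AssessmentUseArgument(
    claims=[Claim("Placement decisions based on scores are appropriate",
                  qualities=["beneficial to stakeholders"])],
    warrants=["Scores reflect the abilities needed in the target courses"],
    data=["Test takers' performance on the placement test"],
    backing=["Correlation of placement scores with later course grades"],
)
print(len(aua.claims))  # 1
```

Writing the argument out this explicitly makes it easy to see which claims still lack backing.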

Topic 2: Language Test Development: the Secret of Success

by Professor Gary Buck, Lidget Green, Inc.

This presentation mainly focused on how to develop a quality test step by step. With its practical suggestions, this topic is very useful, especially for new teachers or those who have less experience in test development. The following are the 9 steps in designing a test. It is important to note that if one of these stages is missed, the result will usually be a significant loss of quality.

Step 1: Clarify the purpose and assess the resources

First, determine what decisions need to be made based on the test results: placement, achievement, general proficiency, etc. Second, consider the resources available, both to construct the test, and then to administer and score it once it is in use. For example, think about: who are the people who can work on this? Is there enough time? Do you need money? Do you have the expertise? What about pilot test takers? Also, consider any constraints or other givens; e.g. it must be machine scoreable, the Dean hates multiple choice, or we have no one who can score essays.

Step 2: Define the construct

This means we need to define the thing we need to measure. The construct will vary depending on the purpose of the test, i.e. achievement, proficiency, diagnosis, job selection, etc. It is recommended to think in terms of: the ability to do x; knowledge of x, y, and z; and the skills necessary to do y.

Step 3: Write the test specifications

The design for the test should be written up in an explicit document. This should describe the construct the test is intended to measure, and how to measure it. Think of this as a recipe: could someone else build the test from this recipe? The higher the stakes of the test, the more important the specifications are, and the more detailed they need to be. But even in low-stakes tests they are very useful. Specifications should be of two sorts: Test Design Specifications that describe the whole test, i.e. the type of items, the number of items, and how the test will be scored; and Item Specifications for each item type used on the test, i.e. instructions on how to write and score each type of item.

Step 4: Create the tasks

To elaborate and give a clearer picture of how to create the tasks, the following are some suggestions for designing listening and reading comprehension items. Test important information in the text: something the writer/speaker thought was important, or something the writer/speaker wanted the reader/listener to understand. The main point is always a good choice, along with supporting ideas, explicit facts, author-intended inferences, paraphrases of an important idea, and summaries of a short piece of text.

Step 5: Assemble a draft test - the pilot tests

This is easy to do if the test specifications are clearly written out. There are three main things to bear in mind: test more items than you think you will need; pilot tests don't need to follow the Test Design Specifications; and make pilot tests according to what is convenient.

Step 6: Pilot the draft test

All tests need to be piloted. This means that the items should be tried out on a suitable group of test takers, as similar to the target test takers as possible. The bigger the sample, the better. At the very least, some colleagues can be asked to take the test.

Step 7: Analyze the results

Item analysis should be carried out to ensure test quality. Classical item analysis is fairly easy and works for smaller samples. Rasch analysis is better for important tests. Item analysis should cover both item difficulty and item discrimination.
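The classical item analysis mentioned in Step 7 can be sketched in a few lines. This is a generic illustration rather than Buck's own procedure, and the response matrix below is invented: difficulty is the proportion of test takers answering correctly, and discrimination is the point-biserial correlation between the item and the rest of the test.

```python
def item_analysis(responses):
    """Classical item analysis for dichotomously (0/1) scored items.

    responses: one row per test taker, one column per item.
    Returns per-item difficulty (proportion correct) and discrimination
    (point-biserial correlation between the item and the rest of the test).
    """
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    difficulty, discrimination = [], []
    for j in range(n_items):
        item = [row[j] for row in responses]
        rest = [t - x for t, x in zip(totals, item)]  # total score minus this item
        difficulty.append(sum(item) / len(item))
        discrimination.append(pearson(item, rest))
    return difficulty, discrimination

# Invented pilot data: 5 test takers, 3 items
responses = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1], [0, 0, 0]]
difficulty, discrimination = item_analysis(responses)
print(difficulty)  # [0.6, 0.6, 0.4]
```

Items with very high or very low difficulty, or with discrimination near zero or negative, are the ones to discard in the next step.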

Piloting and discarding poor items is the secret of success.

Step 8: Build the final test forms

Only the best items should be used. If possible, easy items should be put at the beginning and harder items later. But for passage comprehension items, the items should be ordered according to the content of the passage.

Step 9: Set the passing scores, if necessary

If necessary, the passing score should be decided, with some explanation of score interpretation added. For important tests, the following needs to be done:

* Convert the scores to a reporting scale
* Equate different versions of the test, so the scores become equivalent
* Set performance standards

Buck concluded that making bad tests is very easy and making good tests is not easy, but with knowledge, time, and care most teachers can do a fairly good job. All tests need to be reliable and valid. These are serious technical issues, often beyond the scope of busy classroom teachers. But by following the suggested processes, teachers are likely to do a pretty good job.
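The equating step above can be illustrated with mean-sigma linear equating, one common technique (the presentation does not specify a method): a score on the new form is mapped onto the reference form's scale so that the two forms' means and standard deviations match. The score distributions below are invented:

```python
import statistics

def linear_equate(x, new_form_scores, ref_form_scores):
    """Map a raw score x from a new test form onto the reference form's
    scale by matching the mean and standard deviation of the two score
    distributions (mean-sigma linear equating)."""
    m_new = statistics.mean(new_form_scores)
    s_new = statistics.pstdev(new_form_scores)
    m_ref = statistics.mean(ref_form_scores)
    s_ref = statistics.pstdev(ref_form_scores)
    return m_ref + (s_ref / s_new) * (x - m_new)

# Invented score distributions for two forms of the same test
ref = [40, 50, 60, 70, 80]   # reference form, mean 60
new = [35, 45, 55, 65, 75]   # new form, mean 55 (slightly harder)
print(linear_equate(55, new, ref))  # 60.0: an average score stays average
```

This only holds if the two groups of test takers are comparable; serious equating designs use common items or common persons across forms.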

Topic 3: Testing Vocabulary Size in an Uncommonly Taught Language
by Paul Nation, LALS, Victoria University of Wellington, New Zealand

In his presentation, Nation stated that for learners of English, the goal of around 8,000 word families is an important one for learners who wish to deal with a range of unsimplified spoken and written texts. It is thus helpful to know how close learners are to this critical goal. However, we cannot generalize from these vocabulary size figures to other languages. He also suggested the following steps for making a vocabulary size test:

1. Decide on the kind of vocabulary knowledge that you want to measure.

2. Write a description of this kind of knowledge and why it is important to measure it.

3. Bearing in mind your decision in step 1, choose or develop a corpus that represents the kind of language use that is the goal of the learners you want to test. This corpus needs to be large enough and representative enough to make sure that all the words that are part of the test-takers' knowledge occur within the corpus (Leech, Rayson, & Wilson, 2001).

4. Bearing in mind your decision at step 1, decide on the unit of counting that you are going to use when making a word list from the corpus.

5. Make a ranked word list from the corpus. The ranking should take account of range, frequency, and dispersion. The freely available word counting program AntWordProfiler (http://www.antlab.sci.waseda.ac.jp/antwordprofiler/index.html) can do this for a very wide range of languages using Unicode files.

6. Check your word list against other corpora to make sure that it does not contain serious omissions and errors.

7. Divide your word list into levels based on range, frequency, and dispersion.

8. Randomly select a suitable number of words from each level, so that the total number of words in your test is large enough to obtain a good level of reliability and small enough to be tested within a sensible testing time. In the case of the Vocabulary Size Test (Beglar, 2010; Nation & Beglar, 2007), which is a multiple-choice test, 70 items were sufficient to obtain a good level of reliability for the whole test, and a test of 100 items could be administered in a computer-based form in just under half an hour.

9. Bearing in mind your decisions at step 1, decide on a test-item type.

10. Thoroughly check, trial, and gather data on the performance of the test and the test items.
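The word-list and sampling steps above can be sketched in miniature. This is a toy illustration only: a real word list, such as one produced by AntWordProfiler, would count word families and weigh range and dispersion, not just raw frequency of word forms.

```python
import random
import re
from collections import Counter

def ranked_word_list(corpus_texts):
    """Rank word forms by raw frequency across a corpus.

    A real list would use word families and weigh range and dispersion;
    this sketch counts frequency of word forms only.
    """
    counts = Counter()
    for text in corpus_texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return [word for word, _ in counts.most_common()]

def sample_test_words(ranked, level_size=1000, per_level=10, seed=0):
    """Divide the ranked list into frequency levels and randomly sample a
    fixed number of words from each level, as in the Vocabulary Size Test."""
    rng = random.Random(seed)
    levels = [ranked[i:i + level_size] for i in range(0, len(ranked), level_size)]
    return [rng.sample(level, min(per_level, len(level))) for level in levels]

# A toy two-text "corpus"
corpus = ["The cat sat on the mat.", "The dog chased the cat."]
ranked = ranked_word_list(corpus)
print(ranked[0])  # the
```

Sampling the same number of items from each 1,000-word level is what lets the test estimate total vocabulary size from a small number of items.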

Topic 4: Testing Listening Comprehension

by Professor Gary Buck, Lidget Green, Inc.

Testing listening comprehension is a complex undertaking; however, some basic information can be taken into account to begin designing a listening comprehension test. According to Buck (2012), the four aspects that must be considered in designing a listening comprehension test are as follows:

1. The Listening Comprehension Construct

2. Texts for Testing Listening Comprehension

3. Writing Listening Comprehension Items

4. Playing the Recording Once or Twice

4.1 The Listening Comprehension Construct

Before considering which listening constructs are to be assessed, it is important to consider the nature of spoken language; when making language tests, the first place to start is the language itself. Processing spoken texts requires skills not required when processing written texts, and linguistically, spoken language differs considerably from written language. There are two main functions of spoken language: interactional, to maintain social relationships (phatic, small talk; the personal contact is the motivation for the message), and transactional, to transmit propositional information (the content is the motivation for the message). People virtually never intentionally listen to interactional language: to do so is boring and pointless. It is also important to consider the oral-literate continuum: the differences between listening and reading are really a question of degree, so texts can be ranged along an oral-literate continuum:

Oral texts: more of the characteristics of spoken language; associated with casual conversation

Literate texts: more of the characteristics of written language, especially expository prose

4.2 Texts for Testing Listening Comprehension

Choosing the listening text for a listening comprehension test is crucial. Buck suggests that the characteristics of good texts for testing listening are as follows:

* They should be well recorded and clear, with limited noise
* They should have the linguistic characteristics of real-world equivalent texts
* They should be structured so they support good items

And good texts for writing comprehension questions should:

* Have a main point, topic, or gist
* Have coherent connections between ideas
* Have an obvious reason for what is being said
* Provide a clear context

4.3 Writing Listening Comprehension Items

The listening passage, the text, is the stimulus for the item; items target specific information in the stimulus. There are many ways to determine what information to target, but how you write your items ultimately depends on what you want to test. There are two main approaches to writing comprehension items. The first is Text-Based Item Writing. Generally, this means targeting the main points of the text, that is, what the speaker intends the listener to understand.

The second approach is Functionally-Based Item Writing. An alternative approach to writing items is to use a set of skills, functions, or proficiency descriptors. The most common of these are the Common European Framework of Reference (CEFR) and the ACTFL/ILR descriptors.

Item Types

There are a number of item types that allow for testing listening skills. The type of task used will be determined by the test takers' ability level, the constraints of the testing situation, and the best fit for the text type used. Typical examples were given in the presentation.

4.4 Playing the Recording Once or Twice

There is an ongoing debate about whether to play the listening text once or twice, and there are strong arguments in favor of each. Although there are some very good reasons to play the text twice, and many reputable testing programs do so, the balance of the argument is clear: all other things being equal, we will get better information, and more of it in the same time, if we play the text only once. However, we do need to take care when creating our tests. Playing the text only once does have dangers, and test developers need to take account of those when developing their tests.

Topic 5: Reading Activities in a Face-to-Face Interaction Part of the Brazilian Proficiency Certificate in Portuguese as a Foreign Language (Celpe-Bras)
by Laura Márcia Luiza Ferreira, Portuguese Language Section, Faculty of Arts, Chulalongkorn University

This presentation was about the reading activities proposed in the question scripts which form the Oral Proficiency Interview part of the Celpe-Bras examination (Proficiency Certificate in Portuguese as a Foreign Language), issued by the Brazilian government. The study was designed based on the theoretical framework on reading in a foreign language developed by many experts (Widdowson 1978, 1991; Dell'Isola 1991; Celce-Murcia & Olshtain 2000; Raciland 2005; Grabe & Stoller 2002; Dias 2009; Grabe 2009). This face-to-face interview test aimed to assess the candidates' oral performance, and this study looked into the scripts used by the technical committee of the exams, collected from the first semester of the 2004, 2007, and 2010 tests.


First, the reading materials were categorized into different genres based on Askehave and Swales's (2001) textual genre classification methodology, and then the reading materials were analyzed to find the most frequently used genre across the three years of data. After that, in order to verify the reading skills targeted by the question scripts, the checklist developed by Dias (2009) was employed. These skills are discussed by genre and year of application in order to find out which reading abilities are underlined by the questions related to different genres, such as advertisements and newspaper and magazine articles, and whether the reading abilities changed from one year to another. The study also considered possible differences between its results and the information provided for applicants in the Applicant's Manual regarding how each applicant can prepare for the test.

The results indicated that the majority of the reading material was collected from the Brazilian press and that 83.3% of the samples used were clipping reports. Regarding the reading skills, the study shows that the question scripts mainly encourage the use of the candidates' prior knowledge and present diversified comprehension activities, which help the readers construct and reconstruct the meaning of the original texts. Although the aim of the face-to-face oral interaction is to certify the oral proficiency of candidates for Celpe-Bras, the reading skills demanded in this stage of evaluation are essential for the candidates to express themselves orally and to be able to prove their ability to interact in Portuguese. It was also verified, by analyzing the question scripts, that the reading activities involve discursive and textual aspects of the texts proposed for reading during the face-to-face oral interaction.

[Concluding section in Thai, largely illegible in this copy. The recoverable gist: the knowledge gained from this forum and workshop can be applied to constructing tests in various languages, such as English achievement tests and proficiency tests, and to analyzing test quality.]