Chapter I. Assessing learner's writing skills according to CEFR scales
1.1 Test Development and Setting Cut-Scores in Line With the CEFR
The overall aim of the CEFR, as a tool of the Council of Europe's language policy, is to stimulate reflection, communication, and discussion amongst practitioners in the fields of language learning, teaching, and assessment.
To achieve these aims, the CEFR provides a descriptive scheme of activities, skills, knowledge, and quality of language "in order to use a language for communication … effectively." These categories are defined by proficiency level descriptors at six levels of emerging communicative language ability, ranging from the basic user stage (Levels A1 and A2) via the independent user stage (Levels B1 and B2) to the proficient user stage (Levels C1 and C2).
Even though the CEFR has become a key reference document in the area of language tests and examinations, best practices for its use are heavily debated amongst scholars. Importantly, the CEFR is a "descriptive framework, not a set of suggestions, recommendations, or guidelines." Some researchers, such as North (2004), regard the framework as a "practical, accessible tool that can be used to relate course, assessment, and examination content to the CEF categories and levels." North suggested "studying relevant CEF scales, stating what is and what is not assessed, and what level of proficiency is expected" (p. 78) as a basis for relating examinations to the CEFR. Other researchers, however, take a more critical stance. Weir (2005), for instance, came to the conclusion that "the CEFR is not sufficiently comprehensive, coherent or transparent for uncritical use in language testing" (p. 281), as it does not "address at different levels of proficiency the components of validity" (p. 284), such as describing contextual variables or defining theory-based language processes. Weir's contentions are to a certain degree in line with the outcomes of the Dutch CEFR construct project, which reported shortcomings in the CEFR as a descriptive instrument. To overcome some limitations of the CEFR for test development practices, this group developed the Dutch Grid, a task classification scheme designed to assist in the characterization of reading and listening comprehension tests.
In the context of using the CEFR for assessing writing, Harsch (2007) came to the conclusion that the CEFR scales are too coarse, vague, and at times incoherent to be used directly for the development of writing tasks or rating scales. To overcome these specific limitations, a grid for characterizing writing tasks, the CEFR Grid for the Analysis of Writing Tasks, was developed by members of the Association of Language Testers in Europe on behalf of the Council of Europe (2008). Its main aim is to "analyse test task content and other attributes, facilitating comparison and review," and thus to support the "specification" stage when aligning tests to the CEFR.
The procedure of aligning examinations to the CEFR is a complex endeavor. Its core component is known as standard-setting (e.g., Cizek, 2001), and it involves setting cut-scores on the examination's proficiency scale in correspondence to the CEFR levels. Given the growing importance of setting defensible cut-scores for reporting purposes in Europe, the Council of Europe (2009) has published the Manual for Relating Language Examinations to the CEFR, based on an earlier pilot version (Council of Europe, 2003). The Manual provides a comprehensive overview of basic considerations and possible steps for aligning language examinations; it is accompanied by several reference supplements addressing more technical issues.
As far as the alignment of writing tasks is concerned, the Manual suggests specifying the tasks by using, for instance, the aforementioned Grid, followed by formal standard-setting methods. For writing tasks, examinee-centered standard-setting methods are suggested, using examinees' responses to align the writing test to the CEFR levels (Council of Europe, 2009, p. 44ff). If, however, one wants to link the writing tasks themselves directly to the CEFR levels, in line with the test-centered standard-setting methods usually chosen for tests of reading or listening comprehension, the Manual does not offer a suitable test-centered method for writing tests. We therefore investigate the feasibility of applying a test-centered standard-setting approach to align level-specific writing tasks directly to their targeted CEFR levels. Although this article does not focus on the actual formal standard-setting procedure, its aim is to explore to what extent the results of our analyses can help underpin the alignment of writing tasks to the CEFR with empirically grounded cut-scores, which can serve as a supplementary data source for standard-setting procedures.
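To make the notion of cut-scores concrete, the following minimal sketch illustrates how a set of cut-scores partitions an examination's score scale into CEFR levels. The score bands used here are purely hypothetical placeholders; in practice, such values would be the outcome of a formal standard-setting procedure rather than assumptions.

```python
# Minimal illustration of how cut-scores partition a proficiency scale
# into CEFR levels. The cut-score values below are hypothetical
# placeholders, not values endorsed by the Manual or the CEFR itself.

CUT_SCORES = [          # (minimum score, CEFR level), ascending order
    (0,  "below A1"),
    (20, "A1"),
    (35, "A2"),
    (50, "B1"),
    (65, "B2"),
    (80, "C1"),
    (92, "C2"),
]

def cefr_level(score: float) -> str:
    """Return the CEFR level whose score band contains `score`."""
    level = CUT_SCORES[0][1]
    for minimum, label in CUT_SCORES:
        if score >= minimum:
            level = label
        else:
            break
    return level

if __name__ == "__main__":
    for s in (18, 42, 67, 95):
        print(f"score {s:3d} -> {cefr_level(s)}")  # e.g., 42 -> A2
```

Whatever the reporting scale, the underlying logic is the same: each cut-score marks the minimum score at which an examinee's performance is classified into the next higher CEFR level, which is why defensible, empirically grounded cut-scores are central to the alignment process.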