Assessing learner's writing skills according to CEFR scales

Writing Task Development
The writing tasks and the associated rating scales were developed by teachers who were nominated by the Kultusministerkonferenz in 2005 to represent the federal school system in Germany. Over a period of 2 years, the teachers received a total of 252 hours of comprehensive training in the knowledge and skills of professional task development for standards-based language assessment, delivered by an internationally renowned expert. In addition, the item development process was reviewed by an international expert team, which met regularly in Berlin.
Because the CEFR and the NES constituted the larger framework of assessment, the model of writing used in these documents was taken as the starting point. According to the CEFR and NES, writing is a productive and interactive activity through which authentic communicative tasks are carried out. Completing such tasks requires engagement in communicative language activities and the operation of metacognitive communication strategies (for an analysis of the writing construct in the CEFR, see Harsch, 2007). This model served as the backdrop for the development of test specifications in the Institut zur Qualitätsentwicklung im Bildungswesen project. Because of the perceived shortcomings of the CEFR as a basis for test specifications, which we outlined in the literature review, the following procedural approach was implemented, based on North's (2004) recommendations. First, the NES descriptors as well as the CEFR descriptors for Levels A1 to C1 from 13 different CEFR scales relevant to writing (Council of Europe, 2001, pp. 61f, 83f, 110–118, 125) were collected. Appendix A illustrates selected CEFR writing descriptors for Level B1. All descriptors were then analyzed in terms of their content and terminology and subsequently condensed and refined to serve as a basis for developing a construct definition and test specifications. Specifically, redundant concepts were omitted; terminology was revised to arrive at coherent descriptions of the specific purpose; vague or overly general statements were made more concrete; and writing purposes, text types, and communicative activities relevant in our context were added. Appendix B illustrates this process for the development of the specific purpose at Level B1.
In addition to the analysis of the descriptors in the CEFR and NES, the task specifications drew on research on writing in English as a foreign language, the key results of which are sketched only coarsely in the CEFR. For example, Hayes and Flower (1980) identified key recursive processes in writing, thereby shifting the traditional focus on form toward a focus on writing as a social action. Recent research similarly suggests that developing writing tasks and rating student responses also requires considering relevant facets such as the task environment, individual factors, and the social context of the writing activity.
As a consequence, the writing purpose, the addressees of the text, and the social context of the writing activity were varied systematically across tasks and clearly stated in the task prompts. In alignment with the level-specific approach, the task demands, the expected content, the structural and linguistic features, and the time allotted to answer a task were systematically varied across the targeted CEFR levels. In other words, the complexity of the required speech acts, text types, and linguistic structures, as well as the organizational, strategic, and cognitive expectations, increased consistently for tasks from Levels A1 to C1. A similar approach is described for the Cambridge ESOL suite.
One practical implication of the level-specific approach is that tasks are much more constrained at lower CEFR levels and much more open at higher CEFR levels. At Level A1, for example, a task might consist of filling in a simple information sheet at a hotel reception and would not require organizing a text into coherent paragraphs. Thus, students cannot display higher levels of proficiency on such a task even if they are highly proficient. In contrast, tasks at higher CEFR levels such as B2 require more elaborate responses, such as writing an opinion on a topic of general interest. Successful student responses at this level include evidence of various methodological competencies, such as the logical development of ideas and the coherent structuring of these ideas in a longer text.
To help the task developers consider these design characteristics during the development process, they were asked to classify their tasks according to a variety of criteria relevant to language test development, adapted from the Dutch Grid as well as the CEFR Grid for Writing Tasks. To address the ambiguous terminology mentioned in the literature review, the developers worked in teams and based the classifications on consensus ratings, using templates created for this purpose. A total of 86 prototype writing tasks were created, 19 of which were included in the present study. All tasks were classified a priori in terms of their targeted CEFR level by the task developers, again on the basis of consensus ratings. Table 3 summarizes the task specifications in terms of the level-specific purposes, text types, task characteristics, and textual and linguistic expectations for three of the five CEFR levels. These specifications resulted from two workshops with all developers to train their interpretation and application.
