





Lost in abstraction:

How analogy-mapping as a procedure for cognitive support might benefit mental modeling of a real-world target construct when using a simulation-based source construct.
Master’s Thesis

Chris Wanrooij

Student no.: 0046299




Graduation Committee:

1st supervisor: Dr. W.R. van Joolingen

2nd supervisor: Dr. P. Wilhelm

University of Twente

Faculty of behavioral science

Enschede – June, 2007

Foreword
Working on my Master’s thesis has been a valuable learning experience. I would like to thank Wouter van Joolingen and Pascal Wilhelm for being inspiring and kind mentors. I would like to thank Wouter in particular for allowing me to be creative in setting up the research leading up to this thesis, while remaining a guiding, supportive, and patient supervisor at the same time. My last thanks goes out to Koen Verhoeven, Arie van Noordwijk, Bas Ibeling and Jacques van Alpen at the NIOO-KNAW for the time and effort they invested in expert-reviewing my translation of the Conceptual Inventory of Natural Selection.

ABSTRACT:

Based on research on analogy-based reasoning in cognitive psychology, computer science and A.I., and on the computational and cognitive models derived from this research, this paper proposes a cognitive support procedure for helping students map analogous concepts and relations between real-world target principles and analogous computer-based simulations onto each other. Four assumptions drive this research: 1) that computer-based simulations are analogies of reality; 2) that in at least some cases students should be made aware of the limited correspondence between reality and simulation for the benefit of accurate mental modeling; 3) that encouraging students to reason about this limited correspondence by means of analogy can facilitate accurate mental modeling; and 4) that analogy-based reasoning is a very natural and intelligent practice. Because analogy-based reasoning is thought to be a vital learning mechanism in both adults and children, analogy mapping is viewed as an inherently natural process and is therefore hypothesized to be an effective educational practice. An analogy mapping tool or procedure might help students understand an analogy and its shortcomings in explaining a target principle, and may also help prevent students from making erroneous inferences that would otherwise go undetected. Finally, because this paper deals specifically with teaching the evolutionary principle of natural selection, analogy mapping was thought to provide yet another advantage: to help discover specific preconceptions that lead students to reject the concept under study, and to provide a means to work around these preconceptions, or at least pinpoint them so that they can be appropriately addressed. The effects of analogy mapping were compared to those of concept mapping. No statistically significant advantage of analogy mapping over simple concept mapping was found, however. Some conclusions on teaching natural selection and evolution are drawn.





  1. Introduction

In his book ‘The Blind Watchmaker’, Dawkins tries to clarify that evolution is the process of “non-random survival of randomly varying replicators” (Dawkins, 1996). To make this process of cumulative selection more understandable, Dawkins conceived and detailed a computer simulation that lets successive generations of computer-generated ‘creatures’ mutate randomly, and then allows its user to non-randomly select specific creatures for reproduction. Dawkins’ work has received a fair amount of critique, mainly based on arguments from a creationist or ‘intelligent design’ stance. Some of these critiques, perhaps oddly enough, consisted of pointing out that the aforementioned simulation had several shortcomings as a truthful representation of reality. What these critics apparently failed to appreciate is that instructional computer simulations such as Dawkins’ are always abstractions of reality; they serve as analogies. Perhaps, then, some people have to be made aware of the analogous and abstract nature of such simulations, or they may fail to perceive their function and fail to perceive how such simulations explain reality. This leads to a central idea behind this study: if simulations mostly serve as analogies, and if clarifying this to people can be important for developing vital understanding, then perhaps supporting or scaffolding the procedure of perceiving analogies (e.g. the relation between reality and a simulation) can be an important teaching strategy.

There is much pedagogical promise to the use of computer simulations in science teaching (e.g. Rivers & Vockell, 1987; Van Joolingen & De Jong, 1998; Perkins et al., 2006). Computer simulations can be important learning tools because, among other things, they can be interactive, safe, cheap, readily available, reproducible, and scalable in terms of time and space. These properties make simulations excellent candidates for achieving self-directed, constructivist, scientific discovery learning in students (De Jong & Van Joolingen, 1998).

Simulations serve to provide students with insights into the nature and/or workings of particular, often complex concepts. Abstraction and simplification are key to achieving this. Indeed, as Van Joolingen and De Jong (1991) contend, a simulation is always based on a model that is a filtered representation of a certain real system. This model is designed to be an analogy of the system being taught, and this point is crucial to the current study: Computer simulations are, at their very best, analogies of real systems.

This paper proposes that having students create adjacent concept maps of the two constructs that comprise a complex analogy will scaffold the analogy-perceiving process, and thus help students obtain a better view not only of how source and target relate to each other, but also of where one analogous construct fails to correspond with the other. It is hypothesized that having students actively construct corresponding mental models of analogous constructs (e.g. simulations and reality) will help them see both the strengths and the weaknesses of the analogical relations between them, and will thus result in more engaged and deeper modes of thought and a better understanding of the target construct (e.g. natural selection). It might not always seem beneficial to explain the differences between the two parts of an analogy, but in some cases it may be: though both have waves, light is hardly like water.
The main research question here is:



  • Can the mental process of mapping an analogy (in this case, in the form of a computer simulation) be scaffolded such that understanding of the relation between simulation and real world is increased, thus increasing overall understanding of the real-world domain?

Because the scientific domain – i.e. target domain - chosen for this study is natural selection, and the source construct will be a simulation of natural selection, the main hypothesis for this study is:




  • That engaging in analogy mapping as a cognitive support procedure will enhance students’ understanding of natural selection as defined in this paper.


Analogy: The core of cognition?

Douglas R. Hofstadter views analogy as the core of cognition (Hofstadter, 2001). Though such a stance is an extreme one, it is most certainly true that analogical thought often plays a vital part in people’s everyday learning, thinking and reasoning (e.g. Holyoak & Thagard, 1997). In fact, if perceiving familiarities in novel situations were not “natural and unavoidable”, all experience would be totally new and strange (Wong, 1993). Not surprisingly, the promise of analogy-based reasoning has not been lost on teaching practice. In trying to teach students difficult concepts, analogous examples often prove useful cognitive tools (e.g., Glynn & Takahashi, 1998; Hofstadter, 2001; Kokinov & French, 2002; Kurtz, Miao, & Gentner, 2001; Treagust, Harrison & Venville, 1998). In scientific thought, analogies can play an important role in the development and acquisition of new concepts and ideas (e.g. Paris & Glynn, 2004; Stavy & Tirosh, 1993). Treagust et al. (1998) sum it up well when they say that “an effective way to deal with th[e] problem [of learning difficult and unfamiliar concepts in biology, chemistry, and physics] is for the teacher to provide an analogical bridge between the unfamiliar concept and the knowledge which students possess” (p. 86). Indeed, the concept of analogy-based reasoning as a natural cognitive ability has gathered so much credit that it has garnered its own neural theory of analogical insight (Lawson & Lawson, 1993).


Learning and reasoning by means of analogy

Much research has gone into the specifics of learning and reasoning by means of analogy, and the consensus is that the ability to learn and reason in this fashion is one of the key aspects of complex human thought. There is, however, considerable debate on how analogies come to be, and how they can be most appropriately modelled into a cognitive framework. Within the scientific research area concerning itself with dissecting the cognition of analogy making, a rough distinction can be made between two approaches to modelling analogy: High-level perception (HLP) (Chalmers, French, & Hofstadter, 1992) and Structure mapping theory (SMT) (e.g. Gentner, 1983; Falkenhainer, Forbus, & Gentner, 1989).


Structure mapping theory

SMT is a theory of analogy perception initially constructed by Dedre Gentner (e.g. Gentner, 1983; Markman & Gentner, 1997). The idea is that, when constructing analogies, people map knowledge from one domain (the source) to another domain (the target), conveying the idea that a system of relations that holds among objects in the source also holds among objects in the target. According to Gentner (1983), analogical insight amounts to seeing common relations between source and target, without much regard for the context and objects in which those relations are embedded. In SMT, a ‘good’ analogy is one that satisfies the constraints of parallel connectivity and one-to-one mapping (e.g. Gentner, 1983, 1989; Halford, 1993; Holyoak & Thagard, 1989; Medin, Goldstone, & Gentner, 1993). Parallel connectivity requires that if two predicates are matched, their arguments must match as well. For example, if [revolves around (earth, sun)] is matched to [revolves around (electron, nucleus)], then earth must match electron and sun must match nucleus. Additionally, one-to-one mapping demands that a single source element be mapped to, at most, one target element.
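
To make these two constraints concrete, the sketch below checks a candidate object mapping against them. It is a minimal illustration only: the tiny solar-system/atom knowledge base, the predicate tuples, and the function names are invented for this example, and the check is deliberately simpler and stricter than Gentner's actual SME algorithm (it treats every pair of identically named predicates as a forced match).

    # Illustrative check of SMT's structural constraints (not Gentner's SME).
    # Predicates are (relation, arg1, arg2) tuples; the mapping pairs source
    # objects with target objects, e.g. earth -> electron, sun -> nucleus.
    source = [("revolves_around", "earth", "sun"),
              ("more_massive", "sun", "earth")]
    target = [("revolves_around", "electron", "nucleus"),
              ("more_massive", "nucleus", "electron")]
    mapping = {"earth": "electron", "sun": "nucleus"}

    def one_to_one(mapping):
        # Each source element maps to exactly one target element (the dict
        # guarantees this), and no two source elements share a target element.
        return len(set(mapping.values())) == len(mapping)

    def parallel_connectivity(source, target, mapping):
        # If two predicates carry the same relation, their arguments must
        # correspond under the object mapping as well.
        for rel_s, a1, a2 in source:
            for rel_t, b1, b2 in target:
                if rel_s == rel_t and (mapping.get(a1) != b1 or mapping.get(a2) != b2):
                    return False
        return True

    print(one_to_one(mapping))                             # True
    print(parallel_connectivity(source, target, mapping))  # True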

Apart from the structural rules of parallel connectivity and one-to-one mapping, SMT is also based on the principle of systematicity (e.g. Clement & Gentner, 1991; Gentner, 1980, 1983; Kurtz, Miao and Gentner, 2001), which states that knowledge embedded in coherent structures of meaning has preference over isolated facts. In other words, people have a tacit preference for coherence and deductive power when interpreting an analogy. An important requirement for structure-mapping analogies is that mappings are logically consistent, meaning, for instance, that mapping not only occurs between objects, but also between the relations between those objects, and so on. Generally speaking, structure mapping is viewed as a mechanism by which much of experiential learning takes place: through implicit comparisons between a person's knowledge structures at a given time, and through aligning experiential knowledge with previous knowledge, or knowledge gained through instruction.
High-level perception

According to HLP (Chalmers, French, & Hofstadter, 1992), analogy making starts with low-level perception, consisting of early processing of sensory input. High-level perception then involves extracting meaning from these low-level perceptions and making sense of this information at a conceptual level. This sense-making ranges from object recognition to interpreting and understanding complex situations. HLP thus deals with the problem of mental representation: in order for low-level perceptions to be organized into a meaningful whole, this information must be filtered and organized, leading to a structured representation. HLP, as opposed to SMT, deals with the issue of how mental representations are formed in the first place. The general HLP stance is that analogy making is part of high-level perception, and that it is deeply interwoven with other cognitive processes. The main critique from HLP on SMT’s interpretation of analogy making is that structure models such as Gentner’s ignore the role of perceptual processes (Chalmers, French, & Hofstadter, 1992). As Hofstadter (2001) has put it:


“One should not think of analogy-making as a special variety of reasoning (as in the dull and uninspiring phrase “analogical reasoning and problem-solving,” a long-standing cliché in the cognitive-science world), for that is to do analogy a terrible disservice. After all, reasoning and problem-solving have (at least I dearly hope!) been at long last recognized as lying far indeed from the core of human thought. If analogy were merely a special variety of something that in itself lies way out on the peripheries, then it would be but an itty-bitty blip in the broad blue sky of cognition. To me, however, analogy is anything but a bitty blip — rather, it’s the very blue that fills the whole sky of cognition — analogy is everything, or very nearly so, in my view” (p. 1).
Which of the models of analogy is most appropriate?

There has been considerable debate on which of the above conceptualizations of analogy making is most accurate and appropriate (e.g. Chalmers, French, & Hofstadter, 1992; Forbus, Gentner, Markman, & Ferguson, 1998), but there has also been a careful reconciliation suggesting that the two groups are actually trying to model different aspects of analogy (Morrison & Dietrich, 1995). Then again, this reconciliation has received its own critique (Desai, 1997), so the disagreement is certainly not settled. The most crucial difference between HLP and SMT, at least in the context of the current study, is that HLP is concerned with analogy making. That is, it tries to describe how an analogy is built up from and around initial low-level perceptions, meaning making, and other cognitive processes. SMT, on the other hand, is more concerned with analogy understanding, meaning that it describes how an analogy between two ready-made constructs is arrived at, i.e., how source and target are logically and structurally related, and how a person may perceive this.

SMT’s focus on perceiving and understanding ready-made or presented analogies is, for the purpose of this report, more relevant than HLP’s focus on how a completely new analogy comes to be in the mind of the reasoner, or on which low-level mental processes lie at its base.
Analogy mapping as a procedure for cognitive support

Analogy mapping is used here in a sense conceived for the purpose of the current study, but the term itself is not new. In cognitive psychology, and within the procedure of analogy-based reasoning, analogy mapping refers to the cognitive procedure of relating the source domain to the typically less understood target domain (e.g., Gick & Holyoak, 1980). Analogy mapping can, however, also refer to the teaching practice of doing this mapping for the students, that is, having the teacher dissect the relation between two analogous constructs (Harrison & Treagust, 1993; Glynn, 1989), for instance, on a blackboard. In this paper, however, analogy mapping refers to an activity similar to student concept mapping, the differences being that not one but two concepts are mapped, and that these concepts are also mapped onto each other. Analogy mapping, here, refers to a constructivist, student-centered activity wherein students personally dissect and map two constructs onto each other using pencil and paper; it serves as a procedure for cognitive support in coming to understand analogies.

Many studies describe cases in which students do not perceive scientific analogical problems as such (Clement, 1987; Stavy & Berkovitz, 1980; Tsamir, 1992; as cited in Stavy & Tirosh, 1993). This implies, for instance, that since people are very sensitive to surface features, an educational analogy should not be one that can only be perceived if surface features are completely ignored. It also implies the importance of having a student explicate - and come to understand - an analogy by him or herself, since analogies enhance student learning through a constructivist pathway (Duit, 1991).

The perception of any analogy is mainly dependent on semantic knowledge and inference procedures (Gick & Holyoak, 1980), so, ordinarily, you cannot just present any analogy to a student and expect it to clarify matters for them. Apparently it takes some sort of preconstructed knowledge and cognitive capacity to perceive an analogy, let alone to perceive how it might explain the real-world concept. It would seem rather inappropriate, then, to present students with an analogy based on an unfamiliar source domain, or one for which students have never noticed a structural relation with the target before (as is the case in many simulations). However, if one can scaffold the entire process of coming to understand a (ready-made) analogy, perhaps teachers can obtain optimum return on potentially valuable analogies. This is where analogy mapping as a procedure for cognitive support comes in.

The analogy mapping by students should be geared toward scaffolding the process of ‘analogy understanding’, rather than ‘analogy making’. For this reason, the analogy mapping is based on SMT. As Morrison and Dietrich (1995) put it: “Rather than requiring a specific algorithm for each potential analogy […] the structure (mapping) algorithm makes it possible for any properly constructed knowledge structure to be compared and considered for structure-mapping” (p. 1).

Analogy mapping makes it possible for students to map the correspondences between source and target. An important distinction between pure SMT and the approach used for this research is that here, the differences between source and target should be made explicit as well as the correspondences. This technique is similar to what Kurtz, Miao and Gentner (2001) have dubbed the process of ‘mutual alignment’ or ‘analogical bootstrapping’. In mutual alignment, the emphasis is on juxtaposing two alignable situations and inviting learners to actively identify common structures, as well as the differences between them. Since students are assumed to be familiar with neither source – they have not worked with the simulation before – nor target, a sort of ‘cross mapping’ is desired. This cross mapping, or mutual alignment, helps students gain insight into two analogous constructs simultaneously, and supports them in making explicit the shortcomings of an analogy, which is vital to properly understanding it.
The rules of analogy mapping

Though the models (maps) produced by students are not necessarily ‘good’ or ‘bad’, and should allow for a certain amount of creativity, students are obliged to follow at least a few rules that define the analogy mapping problem space; the rules of analogy mapping only make sense within a carefully defined rule space. To make maps meaningful, analogy mapping requires (see the sketch after this list):




  • a clear vocabulary (terms/concepts/relations)

  • a clear syntax (rules for forming models/mappings)

  • clear semantics (indications of the meaning of terms and relations)

  • clear quality criteria (rules for determining the quality of an analogy map)
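
To make this rule space concrete, the following is a minimal, hypothetical sketch of how a finished analogy map might be represented as data: two concept maps (the simulation as source, real-world natural selection as target) plus explicit cross-links marking correspondences and acknowledged mismatches. The field names, node labels, and the crude quality score are inventions for this illustration; they are not part of any existing concept-mapping tool or of the paper-and-pencil procedure itself.

    # Hypothetical data representation of a paper-and-pencil analogy map.
    analogy_map = {
        "source": {   # concept map of the simulation
            "nodes": ["user", "biomorph", "mutation"],
            "relations": [("user", "selects", "biomorph"),
                          ("mutation", "varies", "biomorph")],
        },
        "target": {   # concept map of real-world natural selection
            "nodes": ["environment", "organism", "mutation"],
            "relations": [("environment", "selects", "organism"),
                          ("mutation", "varies", "organism")],
        },
        # Cross-links: (source node, target node, kind of correspondence).
        "cross_links": [
            ("biomorph", "organism", "corresponds"),
            ("mutation", "mutation", "corresponds"),
            ("user", "environment",
             "mismatch: the user selects purposefully, the environment does not"),
        ],
    }

    def structural_coverage(amap):
        # Crude quality criterion: the share of target relations that have a
        # structurally corresponding source relation under the cross-links.
        pairs = {(s, t) for s, t, _ in amap["cross_links"]}
        relations = amap["target"]["relations"]
        matched = sum(
            1 for (tsub, rel, tobj) in relations
            if any(srel == rel and (ssub, tsub) in pairs and (sobj, tobj) in pairs
                   for (ssub, srel, sobj) in amap["source"]["relations"])
        )
        return matched / len(relations)

    print(structural_coverage(analogy_map))   # 1.0 for this toy example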

During the current study, all participants are to either complement a ready-made start to a concept map (or analogy map), or create an entirely new one. These techniques, both ‘construct-a-map-from-scratch’ techniques, are chosen in favor of a ‘fill-in-the-map’ technique (which requires students to attach labels to a ready-made mapping schema), because the ‘construct-a-map-from-scratch’ technique better reflects differences between students’ knowledge structures (Ruiz-Primo, Schultz, Li, & Shavelson, 2001).

The analogy maps produced by students must, ideally, reflect ‘structure interpretations’ (i.e. accurate mapping structures) and ‘deductive interpretations’ (i.e. demonstrations of students’ capability of extrapolating their acquired insights to situations that are not explicitly presented in the simulation). Deductive interpretations can also be tested by means of asking insight-based questions prior to and following the analogy mapping procedure. These are questions that cannot be answered on the basis of explicitly and/or literally presented information.

Map quality can be assessed on the basis of structure interpretations and deductive interpretations. From the structure mapping done by the students, the experimenter must be able to deduce that students have understood the problem. Content/structure correctness is based on interpretations of causal relations, structure, function, and procedures.


What is the nature of the relation between simulation and real-life?

For any analogy, or any simulation, it is vital to know what the exact correspondence with reality is, should an instructor want his or her students to clarify it. Within the SMT paradigm, a distinction is made between three kinds of relation matches: ‘mere appearance’, ‘analogy’, and ‘literal similarity’. A mere appearance match refers only to superficial relations between source and target, e.g. “Flickering in the sun, the sea was like a billion diamonds”. An analogy covers similarity of relations, e.g. “The atom is like the solar system”. Literal similarity refers to situations in which all or most predicates can be mapped from source to target. Note that these three are distinguished on a continuum, rather than as a discrete subdivision. The current simulation seems to fit somewhere between analogy and literal similarity. However, even though a simulation can be a rather precise reflection of reality, it will never be a literal similarity.

A computer simulation – any simulation – is based on a model which has a certain relation with the real-world domain. Zeigler (1976; as cited in Van Joolingen and De Jong, 1991) distinguishes between five concepts in relation to modelling: (1) the real system is the real-world domain that is to be modelled; (2) the base model is an abstract but complete description of every aspect and function of the entire real system; (3) lumped models are derived from the base model, are descriptions of only a subset of the real system, and are defined by an (4) experimental frame (i.e. the objectives and possibilities the experimenter has). The experimental frame results in (5) filtered data that is both input for and output from the lumped model. Figure 1 depicts the nature of modelling according to Zeigler. To model a system always implies simplifying it, and the lumped model (e.g. a simulation) provides users with the rules, states and objects of which it is comprised (i.e. the experimental frame).

The base model determines how an experimental frame can be specified, and roughly speaking, three such base models exist: a physical system (A) is a natural system, i.e. one that occurs in the natural world; a model of such a system is composed of observed characteristics of the real world. An artificial system (B) is created by human beings, and is in fact based on a base model, rather than the other way around. An abstract system (C) has no counterpart in the real world; it is used to illustrate effects that are not clearly visible in the real world (Van Joolingen & De Jong, 1991).





Figure 1. Modelling according to Zeigler, adapted from Van Joolingen and De Jong (1991).
The relation between the simulation used for this research and the real world is not very straightforward. On the one hand, it describes a natural phenomenon (i.e. natural selection), but it does so with significant ‘shortcomings’, the most important of which is that the user actively and purposely plays the role of ‘natural’ selector, which directly contradicts the directionlessness and purposelessness of evolution. It is exactly because of this shortcoming that the simulation is well suited for testing the potential of analogy mapping as a procedure for cognitive support. A learner working with a certain model (in the form of a computer-based simulation) needs to validate this model, because if the student is to learn something, then he or she should recognize the model as a representation of an external system (Van Joolingen & De Jong, 1991). This also requires recognizing its shortcomings.

Natural selection as scientific target domain

There are several reasons for choosing natural selection as the scientific domain for the current study. First, natural selection is a good example of a principle that is only understood if it is understood completely: its separate implications may each be observed to be true, but most observers do not logically connect them into a bigger picture. Second, it is a principle of which many people hold alternative (incorrect) conceptions that have been extensively and accurately mapped by a great deal of research in the past. This means that understanding can be precisely conceptualized by pre- and post-tests, and that learning effects due to experimental conditions can be better assessed, judged and classified. Third, understanding natural selection is of importance to the entire field of biology, making its understanding scientifically vital. Fourth and finally, understanding natural selection is of societal relevance, considering, for instance, the current debate between religious and evolutionary theories in science classes everywhere.


Public understanding of evolution

Many people hold preconceptions of evolution that are at least partly incorrect (e.g. Bishop & Anderson, 1990; Bizzo, 1994; Fisher & Lipson, 1986). The terms preconception and alternative conception, rather than misconception, are used here, because the latter might suggest negative connotations with respect to the student’s self-constructed ideas. As Clement (1993) points out, in some cases these alternative conceptions are successful adaptations to practical situations in the world. What makes a lot of existing alternative conceptions of natural selection very persistent is that they are often fully operational and logically sound. A Lamarckian view of evolution, for example, is logically sound in the sense that it is not contradictory, and allows for accurate description and prediction, just like Newton’s law of gravity does as long as you do not leave Earth (or look beyond it). This is why Lamarckian evolution was accepted as truth for a relatively long time before Darwin wrote ‘The Origin of Species’. In other words: though the exact mechanisms of evolution are not always understood, the alternative mental models used by students can nonetheless be successful at a superficial level of explanation and prediction.

What it boils down to is that this paper is concerned with finding a way to help at least some students overcome alternative modes of thought that are partly incorrect, yet perfectly internally logical. It is because of this dilemma in evolutionary preconception that the preferred method of education is one that employs some form of a conceptual change approach (e.g. Chan, Burtis & Bereiter, 1997; Jonassen, 2006; Strike & Posner, 1982).
Societal relevance: religious beliefs and intelligent design

Evolution is an important but difficult topic to teach. There is, however, more to teaching evolution than tackling an inherently difficult subject. In many classes, evolution is a touchy and controversial subject for teachers and students alike. But even those who publicly proclaim themselves proponents of the theory of evolution more often than not fail to fully grasp the concept of natural selection. This becomes problematic when the question of whether evolution should be taught as ‘truth’ becomes a matter of public discussion.

An illustration: a weblog entry (week 9, 2005) by the then Dutch minister of education, culture and science reads:
"... I had an interesting talk with Cees Dekker, nano-engineer in Delft and winner of the spinoza prize. He is a believer in the ‘intelligent design’ philosophy. It holds that a ‘designer’ or ‘creator’ is responsible for all existence on Earth. [...] I do not believe in ‘coincidence’ or ‘chance’ either. What binds Islam, Judaism and Christianity is the idea of a ‘creator’. I see possibilities to realize connections here. These connections can particularly be made in the academic debate. If we succeed in uniting scientists of different religious denominations, then their efforts might eventually be applied in schools and classes. A few of my officials will continue talks with Cees Dekker to see how this debate will be materialized." (http://www.kennislink.nl/web/show?id=132896)
In a televised response, renowned Dutch geneticist, cell biologist, columnist and current minister of education Ronald Plasterk remarked: “You could think the Earth is flat, but not if you’re a pilot on transatlantic flights. Similarly, there is no biologist that does not believe in evolution” (Buitenhof, Sunday, May 8, 2005). Two important issues can be distilled from this remark: first, that evolution is undeniably real, and second, that nothing in biology makes any real sense except in light of this, and not another, ‘theory’. As for the importance of a public understanding of evolution: if one advocates a scientific approach toward investigating, explaining and teaching the origins of life as we know it, then one must first make sure that the public and in-class dialogues concerning these issues do not become matters of faith, but of empiricism and fundamentally valid arguments. Research has shown that even many people on evolution’s side of the debate do not fully understand evolution (Greene, 1990). This does not do the public and classroom debates on evolution versus creationism much good.
Evolution dissected: conception and preconception

Many people, including those who adhere to the theory of evolution and natural selection, have nonetheless been shown to hold alternative conceptualizations of the evolutionary process. Much research has been done to allow for a careful and accurate conceptual subdivision of evolution, as well as of the misconceptions that persist among the general public (e.g. Anderson, Fisher & Norman, 2002; Bishop and Anderson, 1985, 1990; Bizzo, 1994; Greene, 1990; Mayr, 1982; Wallin, Hagman & Olander, 2000).

Leading evolutionary biologist Ernst Mayr dissected the theory of natural selection into five facts, from which three inferences are drawn.
Fact 1: All populations have the potential to grow at an exponential rate.

Fact 2: Most populations reach a certain size, then remain fairly stable over time.

Fact 3: Natural resources are limited.

Inference 1: Not all offspring survive to reproductive age, in part because of competition for natural resources.

Fact 4: Individuals in a population are not identical, but vary in many characteristics.

Fact 5: Many of these characteristics are inherited.

Inference 2: Survival is not random. Those individuals with characteristics that provide them with some advantage over others in that particular environmental condition will survive to reproduce, whereas others will die.

Inference 3: Populations change over time as the frequency of advantageous alleles increases. These changes could accumulate over time to result in speciation. (Mayr, 1982; from Anderson, Fisher, & Norman, 2002)


These facts and inferences are logically exhaustive in the sense that if one understands these facts and inferences and how they relate, then one can rather safely be considered to understand, on a superficial level at least, the process of natural selection.

One reason students have persistent trouble fully understanding evolution, in spite of having been educated in the subject, is that their knowledge of the subject is organized in such a way that it can still successfully predict and explain events in the real world (Fisher and Lipson, 1986). Furthermore, as Holland, Holyoak, Nisbett and Thagard (1986) suggest, there is probably some hierarchy of models one will use if no other is available. A ‘default’ model is often a naïve conception of the external world based on some subjective attribution of characteristics (to a phenomenon) and/or ‘common sense’ notions.

So how do students conceptualize evolution? To find out how the principle of natural selection is best taught, it is vital to understand how it is misunderstood. As it turns out, though they sometimes differ in surface features, alternative concepts mostly consist of a limited number of typical misinterpretations, which are put into two categories for the purpose of this study. The belief in some form of directedness and/or the underestimation of the role of variation in populations are the major culprits in the minds of those who have trouble understanding that evolution is the “non-random survival of randomly varying replicators”.
Directedness

The belief in directedness can have several ‘forms’, such as believing in orthogenesis, in which individual traits slowly unfold over generations; Lamarckian evolution, in which individual organisms acquire ‘needed’ changes during their lifetime, which are passed on to future generations; or the belief in a directing agent, such as a God. Directedness also implies a failure to appreciate the combination of both random and non-random processes. The idea that evolution occurs as a result of environmental change is also testament to a belief in directedness (Bishop & Anderson, 1985, 1990; Greene, 1990; Wallin, Hagman & Olander, 2000).


Underestimation of the role of variation

People often do not realize that the unit of evolution is not the individual, but rather the population. The genetic variance in populations is what allows for evolution. Perceiving the role of variation in the evolutionary process is important for coming to realize the ‘blindness’ of evolution, because it implies that evolution is not aimed at helping the individual get ‘stronger’ or ‘better’. Evolution can be recognized as the growing proportion of individuals within a population possessing a certain ‘trait’ (Greene, 1990; Lewontin, 1984; Bishop and Anderson, 1985, 1990; Wallin, Hagman & Olander, 2000). Please refer to appendix 8 for more information on common preconceptions.


An important caveat in evolution teaching is that instructors should pay careful attention to terminology, because certain terms might encourage misconceptions to arise or persist (Bishop & Anderson, 1985, 1990; Bizzo, 1994). For instance, the term ‘adaptation’, often used in biology, conveys the notion that organisms somehow actively contribute to their own evolution. Another example of bad terminology is ‘fitness’, which conveys the notion that creatures higher in the phylogenetic tree, or creatures stronger or smarter, are somehow fitter or ‘better’. In evolution, fitness is actually only used to denote the relative capability of organisms (or genes) to generate offspring. Even the term natural selection can be misleading, because it implies some sort of premeditated action. Perhaps natural conservation would be a better term.
Aiming for conceptual change

The best way to help students overcome persistent misconceptions (especially those that are at least logically sound) is to assess the particulars of their misunderstanding, and to adapt a teaching strategy accordingly. Very broadly speaking, the aim here is to achieve conceptual change in students. Conceptual change can be defined as learning that changes an existing conception – i.e. a belief, idea, or way of thinking. In conceptual change, existing conceptions are fundamentally changed or replaced with a conceptual framework that students can use to solve problems, explain phenomena, and function in their world (Davis, 2001). Teaching for conceptual change primarily involves 1) uncovering students' preconceptions about a particular topic or phenomenon and 2) using various techniques to help students change their conceptual framework. Heuristic procedures, such as analogy-based reasoning, are well suited to the purpose of achieving conceptual change, because they allow reasoners to extend, combine, modify or even replace existing conceptual models by constructing new ones (e.g. Gentner, Brem, Ferguson, Markman, Levidow, Wolff, & Forbus, 1997). In analogy-based reasoning, conceptual knowledge in source domains is a powerful source for new ideas in the target domain.

The strategy for achieving conceptual change in this experiment is, firstly, to present participants with an explanatory text (as a practical surrogate for a biology teacher) which contains a logical dissection of natural selection by the likes of Mayr and Dawkins, in authentic, meaningful terms. Secondly, this text serves as input for the experimental procedure (i.e. working with and understanding the simulation, and knowing how to map it onto the target construct of evolution). The text is written in such a way that it takes common misconceptions into account and addresses them implicitly. The reason for not addressing them explicitly is that explaining too much would add unnecessary noise to the outcome of interest in this experiment: the effects (acquired insights) of analogy mapping. Thirdly, a concept or analogy mapping activity is meant to help students organize their thoughts and scaffold the comparison between the simulation and real-world natural selection.
Dawkins’ simulation and its 'shortcomings'

Before turning to the method section, it is necessary to describe the simulation used for this experiment, as well as the criticism it has received (from creationists) for not accurately reflecting reality.

The simulation starts with a population of eight biomorphs (stick-creatures), randomly generated (see Figure 2). The objective is to select two individuals (mother and father) by clicking check-boxes with the mouse. If, after checking two boxes, the ‘reproduction’ button is clicked, the first generation, issued from the two selected parents, appears. Each child’s genome (genotype) is a combination of the genomes of both parents. However, all children are different from each other due to:


  • the crossover of the genomes of the two parents.

  • the mutation(s) on their own genome.

From the children, again, two parents can be selected to reproduce, continuing the cycle generation after generation. In this manner, the simulation shows that non-random selection (choosing which individuals reproduce) combined with random mutation amounts to evolution.
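
For readers who prefer to see the cycle spelled out, the following is a minimal sketch of the selection-and-reproduction loop described above. It is not Cogniat's Java applet: the genome encoding mirrors the description in the 'Evolution' section below, but the mutation rate, the helper names, and the stand-in selection criterion are assumptions made purely for illustration.

    import random

    # Genome encoding as described under 'Evolution': nine letters, the first
    # eight from A-M, the last from I-N.
    FIRST_EIGHT = "ABCDEFGHIJKLM"
    LAST = "IJKLMN"

    def random_genome():
        return "".join(random.choice(FIRST_EIGHT) for _ in range(8)) + random.choice(LAST)

    def crossover(mother, father):
        # Each gene of the child comes from one parent or the other at random.
        return "".join(random.choice(pair) for pair in zip(mother, father))

    def mutate(genome, rate=0.2):
        # Every gene has a small, random chance of being replaced (random mutation).
        letters = []
        for i, gene in enumerate(genome):
            alphabet = FIRST_EIGHT if i < 8 else LAST
            letters.append(random.choice(alphabet) if random.random() < rate else gene)
        return "".join(letters)

    population = [random_genome() for _ in range(8)]   # eight random biomorphs

    for generation in range(10):
        # Non-random selection: in the simulation the *user* picks two parents;
        # an arbitrary criterion stands in for the user here.
        mother, father = sorted(population)[:2]
        # Reproduction: crossover of the parents' genomes plus random mutation.
        population = [mutate(crossover(mother, father)) for _ in range(8)]

    print(population)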



Parameters

In the original simulation, there were two parameters: the number of mutations and the gene transcription order, which was either forced or random. Students could control the number of mutations occurring upon each reproduction; it could be set to ‘high’, ‘medium’, ‘low’, or ‘none’. The idea was that, when set to low or none, the lack of mutation would lead to an impoverished gene pool, and a slower and more difficult evolution. Since mutations are inherently random, and the simulation should not be made too difficult, the simulation was adapted such that the participants in this research did not have the option of changing the gene transcription order.





Figure 2. The simulation interface

Evolution

Biomorphs are artificial creatures, but like natural ones they have a genotype consisting of a certain code. What you see is the phenotype; the code is the genotype. The letters that comprise the genotype are all ‘genes’. The code consists of nine letters, the first eight of which can be any letter from A to M. The last letter can have a value from I to N. In all, the simulation can display a total of 4,894,384,326 different biomorphs.
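
The total of 4,894,384,326 follows directly from this encoding: thirteen possible letters (A to M) for each of the first eight genes and six possible letters (I to N) for the last one. A one-line check, purely for illustration:

    # 13 options for each of the first eight genes, 6 options for the last gene.
    assert 13 ** 8 * 6 == 4_894_384_326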


Fidelity

The fidelity of a simulation refers to how the internal model represents the real system and to the way this representation is presented to a learner (De Jong & Van Joolingen, 1998). Hays and Singer (1989; as cited in Van Joolingen & De Jong, 1998) differentiate between physical and functional fidelity, the former referring to a simulation’s look and feel, the latter to what can be done with the simulation. Levin and Waugh (1988; as cited in Van Joolingen & De Jong) further dissect physical fidelity into perceptual fidelity and manipulative fidelity, the former referring to look, feel and sound, the latter to whether or not the learner’s actions resemble those in reality. There are several reasons for preferring high-fidelity simulations over lower-fidelity simulations, for instance, optimizing knowledge transfer, optimizing motivation and/or optimizing visualization processes (De Jong & Van Joolingen, 1998). Depending on specific possibilities (e.g. manipulability, scalability, articulation) and restrictions (e.g. costs), a simulation might preferably be, or have the ability to be, different from reality.

There is no single answer to the question of whether or not high fidelity is beneficial, as it mostly depends on the domain the simulation covers and the reasons for its use. For the current simulation, fidelity is only of marginal relevance. Both physical and functional fidelity are rather low, but this is exactly what makes the current simulation an excellent candidate for analogy mapping.

Critique of the simulation

Dawkins’ explanation of natural selection has gathered a significant amount of critique, which, oddly enough, is often aimed at his simulation in particular. The fact that the simulation is such a good target for critique is the very reason why it is so suited to the current research. After all, the critiques show that a good simulation can be dangerously misinterpreted. Some examples:


On http://www.answersingenesis.org/docs/264.asp, the author argues that “Dawkins’ [bio]morphs have as much relevance to the origin of the information in living things as sand has to the origin of information in a computer memory (the memory chips are made of silicon extracted from sand). Dawkins’ selects things that look like something recognizable and then he claims that what he gets is the result of blind selection. How illogical!” (retrieved May 15, 2006)
On http://hotcupofjoe.blogspot.com/2006/04/review-dawkins-god-genes-memes-and.html, the author cites Alister McGrath, author of ‘Dawkins' God: Genes, Memes and the Meaning of Life’ (2005), who states that evolution “does not begin with a target of progression”, and that “both the computer and the program itself are designed”. (retrieved May 15, 2006)
Of course, these critiques of evolution are not entirely valid, for they are based only on the ‘shortcomings’ of the simulation. The critiques boil down to the idea that, since the simulation is “itself designed” and uses a “seeing person to purposely select”, it cannot possibly reflect a reality that has evolved slowly and selects blindly. However, this simulation, like any other, is based on an abstract model, meaning that though it is aimed at clarifying some aspect of reality, the model has no counterpart in the real world. Similarly, a simulation of a frictionless world may not depict an existing system, but it does help one understand the base model underlying the world of forces around us (Van ’t Hul, Van Joolingen, & Lijnse, 1989; DiSessa, 1982; as cited in Van Joolingen & De Jong, 1991). The same is true for any properly constructed and properly used simulation. The point here is not to get lost in abstraction and lose sight of the sensitive and idiosyncratic relation between the simplified and the actual. It is important to keep in mind why the simulation was created in the first place. Dawkins himself acknowledges that “in the computer model […] the selection criterion is not survival, but the ability to appeal to human whim” (p. 57). Dawkins’ model is nothing more than an organizer of thought, as are most analogies.
The above critiques are cases wherein an analogy is taken as a (flawed) literal similarity, and wherein a whole theoretical framework is denied because of it. In classrooms where similar argumentation might pervade thought, this debate is vital, and must be open.

Since you cannot reproduce a real system as a simulation, and since it is impossible to make a simulation that encompasses all logical objects, attributes, relations and processes in evolution, it seems vital to point out the correspondences with reality as well as the deficiencies of simulations, and to have students understand exactly what aspect of evolution is clarified by a single example or simulation.

It is hypothesized here that students who are encouraged to reason through analogy about the relation between reality and the simulation will be better able to see the differences between them, and will thus be better capable of coming to understand the simulation’s shortcomings.

2. Method

Thirty participants were involved in the experiment. All were university students and all of them majored in social science. Eleven of them had taken biology in high school. Participants were aged between eighteen and twenty-five, with a mean age of 20.1 years and an SD of 1.98 years. Nine participants were male; twenty-one were female. Participants were randomly assigned to experimental conditions, controlling only for the number of males and females in each group. Males and females were divided between control and experimental groups in almost equal proportions: five males and ten females in the analogy mapping group, and four males and eleven females in the control group. All participants were assumed to be of above-average intelligence, since they were all students at a research university. All participants were obtained through a student pool and earned credits for participation. None of them were aware of the experiment’s contents or purpose before taking part.


Materials

The experiment was partly paper-based and partly computer-based. Questionnaires were paper-based, and additional information on natural selection as well as instructions regarding the experiment were presented on a computer. Computer-based parts of the experiment were executed on an Intel Pentium computer (2.80 GHz) running Windows XP, with a 17” color monitor set to a 1024 x 768 pixel resolution. The simulation was written in Java by Alain Cogniat and based on an evolutionary algorithm conceived by Richard Dawkins (The Blind Watchmaker, 1988). The simulation was embedded in a simple HTML page with a white background. The concept and analogy mapping was done using pencil and paper, since no existing concept mapping tool possesses all of the features required for decent analogy mapping.


The Conceptual Inventory of Natural Selection (CINS; Anderson, Fisher & Norman, 2002) was used for both pre- and post-testing. It is a diagnostic test that assesses understanding of ten concepts related to natural selection, in part based on the conceptual subdivision by Mayr (1982) mentioned earlier. These concepts are:

  • biotic potential – that organisms produce more young than can be sustained by available resources;
  • limited resources – that all members of a species compete;
  • genetic variation – that organisms within a species differ from one another in inherited traits;
  • limited survival – that some organisms do not survive;
  • variation within a population – that organisms within a species differ from one another in inherited traits;
  • origin of variation – that the variations arise through mutation and genetic recombination;
  • inheritance of variation – that mutation and genetic recombination are random events that produce beneficial, neutral, or harmful traits that can all be passed on to progeny;
  • differential survival – that among these offspring, those best suited to the environment tend to be most successful in producing young;
  • change in a population – that through differential reproductive success, the frequency of different genetic types in the population can change with each succeeding generation;
  • origin of species – that when two populations of a single species are separated for an extended period of time by a physical, behavioral, temporal, or other barrier, the populations may diverge to the extent that they become separate species.

The CINS is an extension and improvement of a test designed by Bishop and Anderson (1985, 1990). The difference between the original questionnaire by Bishop and Anderson and the CINS is that the latter uses multiple choice rather than Likert scales, and outlines authentic scenarios to ask questions about, rather than hypothetical situations. The CINS, as well as some additional questions in post-testing, tests for insight into the principles of natural selection, rather than explicit factual knowledge. The CINS was translated into Dutch, after which it was reviewed by three domain experts at the NIOO-KNAW (the Dutch Institute for Ecology).


Task

The task of the participants in the control group was to make a concept map of a computer-based simulation of natural selection using the vocabulary of Gentner’s structure mapping theory of analogy making. This vocabulary consists of objects, attributes, relationships and processes. The task of the participants in the experimental group was to make concept maps of the simulation as well as of the target principle of real-world natural selection, and to map these two concepts onto each other to see their (dis)similarities. All participants were handed paper-based instructions on how to use the simulation and on the mapping task. They were also given a brief, clear, and suitably superficial description of natural selection (appendix 3).

Participants were pre- and post-tested on their insight into the process of natural selection using the CINS. The answers alternative to the correct ones are based on the ten common misconceptions. Using this validated conceptual subdivision, students’ erroneous answers could be classified as arising from one or more common misconceptions.

The post-test was supplemented with five open questions aimed at testing students’ insight into the shortcomings of the simulation, as well as four Likert-scale questions from the original questionnaire (Bishop and Anderson, 1985, 1990) on which the CINS was based (see appendix 7). The reason for using four additional questions from another questionnaire as post-test measures is that they serve as an extra test for experimental effects (a ‘safety measure’). On average, the experiment lasted slightly less than two hours: approximately thirty minutes for pre-testing, fifteen minutes for instructions, ten minutes for the simulation, twenty-five minutes for concept or analogy mapping, and thirty minutes for post-testing.


Procedure

All experiments were carried out in an artificially lit cubicle. Upon entering the cubicle, participants were instructed to turn off their mobile phones, be seated, and adjust their seat for optimal comfort. Participants were asked to fill in their name and student number, indicate whether they had taken biology in high school, indicate their self-perceived understanding of the process of evolution, and indicate their theological beliefs and the degree to which they thought the theory of evolution explains life on Earth today (see appendix 1).

After filling in the first questionnaire, participants were told that they were going to be tested on their insights into the principles of natural selection in evolution, and that afterwards they would get some additional information on the topic and the opportunity to work with a simulation of natural selection. They were instructed that after working with the simulation, they were to engage in concept mapping or analogy mapping (depending on the experimental condition they were assigned to) so as to increase their understanding of the subject. If they did not know what concept mapping was, they were given a brief explanation. Participants were told that at certain stages during the experiment, they would be prompted to carefully read instructional texts appearing on the screen. These texts could be returned to for later reference. Participants were also informed that at the end of the experiment, they were to do a post-test so as to see how much their understanding of natural selection had increased. If, at any time before or during the experiment, students had questions, they were free to ask the experimenter, though questions related to natural selection and evolution were not going to be answered.

After this brief introduction, participants were handed the Conceptual Inventory of Natural Selection (Anderson et al, 2002; appendix 2) on paper, and told to take their time filling it in, think hard about the answers and realize that only one of the four alternative answers for each question was absolutely correct.

After completion of the CINS, participants read a brief description of natural selection, in terms equally superficial as those on which they had been, and were going to be, tested (appendix 3). They were told to read it well, because they would be tested on their understanding of it later. After finishing the description of natural selection, participants could move forward through the HTML environment and were presented with instructions on using the simulation (see appendix 4). In addition to containing pointers on using the simulation, this instruction also pointed out explicitly to the participants that even though the simulation was meant to help them understand the principle of natural selection, they should realize that the simulation only serves as an abstraction, an analogy, of real-life natural selection. The instructions explicitly stated that because of this, the simulation was not a perfect reflection of reality. Since students can differ in many respects when working with simulations, interpreting them, and drawing conclusions while experimenting (Schauble, Glaser, Raghavan, & Reiner, 1991), the instruction was aimed at allowing maximum generalization over students after experimenting. This was done by telling students to ‘pay attention’ to and ‘think about’ how the simulation reflects reality, for they were to be ‘inquired on their conclusions later’. Students were also told to ‘try and understand’ the process of natural selection as reflected in the simulation.

After reading the instructions for the simulation, students were prompted to start experimenting with the simulation.

After working with the simulation for approximately five minutes, participants were told to shut down the application. At this point the procedures for the experimental and control groups started to diverge. Participants from the control group were told they were going to draw a concept map of the simulation. The computer screen presented them with a set of instructions on concept mapping and on the set of rules their concept map should obey (appendix 5). The experimental group, on the other hand, was told they were going to be drawing an analogy map and was presented with a set of corresponding instructions which differed somewhat from the control group’s (see appendix 6). In the experimental condition, students were not only instructed to make a concept map of the simulation, but of the real principle of natural selection as well, and to do this in such a way that corresponding, analogous concepts could be linked between the two maps.

In both the experimental and control groups, participants were told that the vocabulary for mapping concepts consisted of objects, attributes, relations and processes, in line with the conception of analogy in structure mapping (e.g. Gentner, 1983; Falkenhainer, Forbus, & Gentner, 1986). For all participants, a beginning of the map of the simulation was provided to help them on their way (see Figure 3).






Figure 3. When starting work on the concept map, this partial map is presented to participants to elaborate on.
Once participants were finished, maps were handed to the experimenter. After mapping, participants were again asked to fill in the CINS, in which the order of questions and alternative answers had been changed. This time, participants were also presented with five additional questions assessing their understanding of those aspects of natural selection that were poorly reflected in the simulation, as well as four Likert-scale questions from the test devised by Bishop and Anderson (1985, 1990; see appendix 7). Finishing these questions meant the experiment was over.

3. Results

On the pre-test CINS, the concept mapping group gave an average of 12.53 correct answers out of twenty, whereas the analogy mapping group scored an average of 13.07 correct answers out of twenty. This difference is not significant.

On the post-test, the concept mapping group again performed worse: 13.87 out of twenty correct answers, against 14.73 out of twenty for the analogy mapping group. Both groups improved somewhat, but the hypothesized effect was not found; analogy mapping did not result in more improvement than concept mapping. The difference is not significant.

Table 1 shows the results of an independent-samples t-test of the difference between the analogy mapping and concept mapping groups. The comparison is based on the differences between CINS pre- and post-test scores (improvement). The concept mapping (control) group gave, on average, 1.4667 more correct answers on the post-test than on the pre-test. For the analogy mapping (experimental) group this number was larger. The difference, however, is not significant (t(28) = -.253, p = .80).






Difference (post-test minus pre-test)

group               N     Mean      Std. Deviation    Std. Error Mean
analogy mapping     15    1.6667    1.95180           .50395
concept mapping     15    1.4667    2.35635           .60841

Table 1. The difference between the scores on pre- and post-tests for all participants.
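
As an illustration, the analysis summarized in Table 1 amounts to an independent-samples t-test on the improvement scores of the two groups; the sketch below shows how such a test could be computed. The score arrays are hypothetical placeholders, not the study's data.

```python
# A minimal sketch, with hypothetical numbers, of the analysis behind Table 1:
# an independent-samples t-test on pre- to post-test improvement per group.
from scipy import stats

# Improvement = post-test CINS score minus pre-test CINS score, per participant.
analogy_improvement = [3, 1, 0, 2, 4, -1, 2, 1, 3, 0, 2, 1, 2, 3, 2]   # hypothetical
concept_improvement = [1, 2, 0, 3, -2, 1, 2, 0, 1, 4, 2, 1, 0, 3, 4]   # hypothetical

t_stat, p_value = stats.ttest_ind(analogy_improvement, concept_improvement)
df = len(analogy_improvement) + len(concept_improvement) - 2
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
```
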
For this study it was hypothesized that, since evolution and natural selection are internally logical process systems, it should theoretically be possible to have learners understand the entire complexity of these processes on the basis of just a limited number of premises. In other, more practical terms: this study assumed that if students fully understand one aspect of evolution – say, the origin of variation – it is very likely that they will also understand other, logically related issues – say, the importance of genetic variation for evolution to occur. If students are able to do this, they are displaying insight.

It is also possible, however, that this assumption is incorrect, and that students do not reason about evolution in a manner that is logically similar to how evolution actually works, i.e. that their insight is lacking. If students are unable to deduce all the logical implications from a single premise (or a few premises) given, for instance, by the simulation used in this study, then it is important to look at just those questions in the CINS that address the logical premises that are more or less explicitly reflected in the simulation.

Since Dawkins' initial goal for the simulation was to exemplify that evolution is the "non-random survival of randomly varying replicators", it seemed appropriate to repeat the above analysis using only those CINS questions that implicitly or explicitly refer to this statement, i.e. questions relating to variation, the origin of variation, and the inheritance of variation. Results were similar, however, and it therefore seems safe to conclude that the analogy mapping condition did not provide any advantage over the concept mapping condition in this experiment.

A second post-test was administered to all participants, mainly to see whether there would be any difference between the control and experimental groups in terms of their insight into the shortcomings of the simulation. In fact, there was not. Of all thirty participants, only nine realized that being able to select the biomorphs yourself was a 'flaw' of the simulation. Four of them were in the experimental group.

The most interesting conclusion to be drawn from the second post-test derives from the answers given to the four Likert-scale questions at the end: even though some students had earlier seemingly shown that they understood certain aspects of evolution, such as the randomness of variation, these Likert questions revealed that students were, in fact, still not willing to abandon the idea of a 'need' or 'purpose' as a reason for evolutionary change. A considerable proportion of the students apparently had difficulty answering at the extremes of the Likert continuum. An analysis of covariance (ANCOVA) revealed a covariance of -.409 (p = .255) between scores on the Likert questions and scores on the six relevant CINS questions.
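
As an illustration of relating the Likert-scale answers to the six relevant CINS questions, the sketch below computes a simple sample covariance and correlation between two hypothetical score arrays; it is not a full ANCOVA and does not use the study's data.

```python
# A minimal sketch, with hypothetical numbers, relating Likert-scale scores to
# scores on the six relevant CINS questions via a simple covariance and correlation
# (not a full ANCOVA, and not the study's data).
import numpy as np

likert_scores  = np.array([12, 9, 15, 7, 11, 14, 8, 10, 13, 6])  # hypothetical totals
cins_subscores = np.array([5, 4, 6, 3, 4, 5, 2, 4, 5, 3])        # hypothetical (0-6 correct)

covariance  = np.cov(likert_scores, cins_subscores)[0, 1]
correlation = np.corrcoef(likert_scores, cins_subscores)[0, 1]
print(f"covariance = {covariance:.3f}, correlation = {correlation:.3f}")
```
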
Three variables of interest (gender, having received biology education or not, and belief) were further inspected for their influence on test scores. Table 2 displays how these were distributed in the participant pool.



                             Evolutionist    Non-evolutionist
Men (N=9)      Biology       N=3             N=0
               No biology    N=5             N=1
Women (N=21)   Biology       N=6             N=2
               No biology    N=7             N=6

Table 2: The distribution of gender, belief, and having received biology education in the participant group.
Biology

Students who had taken biology in high school scored only slightly better than students who had not: 13.00 against 12.80 correct answers on the pre-test and 14.62 against 14.10 on the post-test. The difference is not significant.


Belief

There was also a difference between students who indicated that they believed evolution to be the sole explanatory power for life on Earth, and students who thought that life could also be explained by some driving force (e.g. God). On the pre-test, 'evolutionists' scored an average of 13.57 correct answers, whereas the others scored an average of 11 correct answers, t(28) = 1.823, p = .079. On the post-test, evolutionists gave an average of 15.52 correct answers, against 11.44 for the others, t(28) = 2.967, p = .006. The average difference between pre- and post-test is an improvement of 2.05 correct answers for the evolutionists, and an improvement of only 0.44 for the others (t(28) = 1.984, p = .0057).


Gender

Men scored considerably better on both pre- and post-tests than women did, with women scoring an average of 11.38 and 13.14 correct answers on the pre- and post-tests, and men scoring an average of 16.11 and 17.00 (t(28) = 3.959, p < .001, and t(28) = 2.760, p = .010). Women did, however, improve more between pre- and post-tests, but no conclusions about this interaction effect can be drawn.



Concept and analogy mapping quality

Driven by the results from the second post-test, a final analysis examined whether there is a relation between the quality of the mappings and scores on the six relevant post-test multiple-choice questions on the one hand, and between the quality of the mappings and the Likert questions on the other. The analysis was done for all participants, and for the concept and analogy mapping groups separately. The formal criteria for assessing the maps were structural interpretations and deductive interpretations, as well as level of detail (extensiveness) and accuracy (correctness). However, as became clear rather quickly upon studying the maps, students should be allowed some room for creativity in mapping, and should not be forced to include everything an instructor is looking for in a map. See figure 4 for an example of a typical analogy map. This particular student used crosses to symbolize death, or simply not being selected for reproduction, and did not quite know how to integrate mutation into the model, so he just wrote the word 'mutation' where he thought it would occur. Such creativity and free interpretation of analogy mapping should not be discouraged. Of course, students should always have the opportunity to clarify their model. By distinguishing between selection and natural selection, this student identified an important difference between the simulation and reality, i.e. that selection by oneself is fundamentally different from natural 'selection'.


Figure 4. A computer re-creation of a simple but good analogy map, with the simulation (source) on the left and the real-world target on the right.
The maps were lined up in order of quality for the concept mapping and analogy mapping groups, and then divided into three groups of five maps each (or ten maps each, if concept maps and analogy maps are not distinguished). In this way, the quality of the maps was divided into three categories: bad, average, and good, to which the values 1 to 3 were assigned respectively.
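
The rank-and-split procedure described above can be illustrated with the sketch below, in which maps that have already been ordered from worst to best are assigned the values 1 (bad), 2 (average), and 3 (good); the map identifiers are hypothetical.

```python
# A minimal sketch, with hypothetical map identifiers, of dividing maps that are
# already ordered from worst to best into three quality categories:
# 1 = bad, 2 = average, 3 = good.
def quality_categories(maps_in_order_of_quality):
    n = len(maps_in_order_of_quality)
    third = n // 3
    categories = {}
    for rank, map_id in enumerate(maps_in_order_of_quality):
        if rank < third:
            categories[map_id] = 1      # bad
        elif rank < 2 * third:
            categories[map_id] = 2      # average
        else:
            categories[map_id] = 3      # good
    return categories

# Fifteen maps per condition, sorted from worst to best, yield three groups of five.
print(quality_categories([f"map_{i}" for i in range(1, 16)]))
```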

A nonparametric Spearman's rho correlation between the six relevant post-test multiple-choice questions and the quality of the mappings (for all participants) is -.012 (p = .949). The Pearson correlation between scores on the Likert-scale questions and the quality of the mappings is .316 (p = .089) for all participants. Neither of these results is significant. What is interesting here, however, is not so much the separate correlations or their (in)significance levels, but rather the difference between the two significance levels (p = .949 versus p = .089). Good mappings seem to be better at predicting scores on the Likert scales than they are at predicting scores on the multiple-choice questions. If it is assumed that good mappings are a reflection of proper understanding, then the Likert-scale questions seem to be much better indicators of that same understanding than the multiple-choice questions are. In any case, it is clear that there is a large difference in the way these two kinds of questions tap into and reflect existing knowledge and understanding.
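
As an illustration, the two correlations reported above could be computed as in the sketch below; the quality categories and scores shown are hypothetical placeholders, not the study's data.

```python
# A minimal sketch, with hypothetical numbers, of the two correlations reported above:
# Spearman's rho between map quality and multiple-choice scores, and Pearson's r
# between map quality and Likert-scale scores.
from scipy import stats

map_quality   = [1, 1, 2, 3, 2, 3, 1, 2, 3, 2]          # hypothetical (1=bad .. 3=good)
mc_scores     = [3, 4, 3, 5, 4, 4, 2, 5, 3, 4]          # hypothetical multiple-choice scores
likert_scores = [8, 9, 11, 14, 10, 13, 7, 12, 15, 11]   # hypothetical Likert totals

rho, p_rho = stats.spearmanr(map_quality, mc_scores)
r, p_r     = stats.pearsonr(map_quality, likert_scores)
print(f"Spearman's rho = {rho:.3f} (p = {p_rho:.3f}); Pearson's r = {r:.3f} (p = {p_r:.3f})")
```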


