DOI: 10.18148/srm/2008.v2i1.598
Survey Research Methods (2008)
Vol. 2, No. 1, pp. 21-32
ISSN 1864-3361
http://www.surveymethods.org
© European Survey Research Association
The Influence of the Design of Web Survey Questionnaires on the
Quality of Responses
Stéphane Ganassali
I.R.E.G.E. - University of Savoie
The first objective of this article is to propose a conceptual framework of the effects of on-line
questionnaire design on the quality of collected responses. Secondly, we present the results of
an experiment where different protocols were tested and compared in a randomised design,
on the basis of several quality indexes.
Starting from some previous categorizations, and from the main factors identified in the litera-
ture, we first propose an initial global framework of the questionnaire and question characteris-
tics in a web survey, divided into five groups of factors. Our framework was built to follow the
successive stages of the response process, i.e. the contact between the respondent and the
questionnaire itself.
Then, because the concept of ‘response quality’ has been studied in a very restricted way in
the survey methodology literature, it is discussed and extended with some more ‘qualitative’
criteria that could help researchers and practitioners obtain a deeper assessment of the survey
output.
As an experiment, on the basis of the factors chosen as major characteristics of the question-
naire design, eight versions of a questionnaire related to young people’s consumption patterns
were created. The links to these on-line questionnaires were sent in November 2005 to a target
of 10,000 young people. The article finally presents the results of our study and discusses the
conclusions. Very interesting results come to light, especially regarding the influence of length,
interaction and question wording dimensions on response quality. We discuss the effects of
Web-questionnaire design characteristics on the quality of data.
Keywords: Web surveys, questionnaires, response quality
Introduction
Web-based surveys have developed substantially over the last ten years. The Esomar association (2004) esti-
mates that in the United States, more than one third of mar-
ket research is now conducted through on-line surveys. An
international professional panel run by a leading survey soft-
ware editor on more than 7,000 institutions indicates that in
2006, 32% of them implemented on-line surveys, through an
internal or an external network. In parallel with this practitioner interest, academic research gradually
took an interest in the topic and produced numerous contributions, in order to better understand
these new methods of data collection. Logically, the first papers focused on the
description of the various technological devices (Galan and
Vernette 2000), with a view to pointing out the opportuni-
ties and the drawbacks of these new protocols (Ilieva, Baron
and Healey 2002; Couper 2002). Internet surveys have been
compared to other self-administered methods or to telephone
protocols (Roster et al. 2004), mainly on the response rate
criteria (Schmidt et al. 2005) and more recently on response
Contact information: Stéphane Ganassali, I.R.E.G.E. - University of Savoie, 4, chemin de Bellevue - BP 80439 - 74944 Annecy-le-Vieux Cedex - France, +33 450 09 24 00, email: sgana@univ-savoie.fr
quality (Fricker et al. 2005).
It is now established that web-based surveys are inex-
pensive, have a short response time and can achieve
satisfactory response rates compared to questionnaires deliv-
ered by ‘classical’ mail. Additionally, the nature and the
quality of responses are not necessarily affected (Tuten, Urban
and Bosnjak 2002). Some authors even suggest that on-line
surveys provide more complete information than traditional
mail surveys do (Ilieva, Baron and Healey 2002). They can
also avoid some data quality problems such as social desir-
ability bias (Fricker et al. 2005) or survey ‘satisficing’ pat-
terns (Skitka and Sargis 2005). For researchers, Internet sur-
veys can also facilitate the use of embedded experiments (McFadden et al. 2005).
After several years of experience, we consider that web
surveys are especially well adapted to internal surveys (staff
evaluation or social satisfaction), to access panels and, more
generally, to a well-identified target population, particularly
in a Business-to-Business context (Roster et al. 2004),
for customer satisfaction surveys for example. As far as
Business to Consumer surveys are concerned, the medium
coverage could still be a methodological difficulty. Even
if this problem is now gradually decreasing, it could still
be dissuasive in many cases. Because of the inability to
identify all on-line users, web-based surveys do not pro-
vide generalisable results, due to self-selection, non-random
and non-probabilistic sampling (Andrews, Nonnecke and
Preece 2003). But comparing data from online and telephone
(Schillewaert and Meulemeester 2005) or face-to-face
protocols (Duffy et al. 2005), some experiments showed that
the nature of the responses can be similar (for interests, atti-
tudes or voting intentions for example) or sometimes differ-
ent (knowledge or behaviour patterns). For some other au-
thors (Roster et al. 2004), web surveys may be equally, if not
more, accurate than phone surveys in predicting behaviours.
In academic research, a research stream has progressively
emerged with a view to defining the circumstances
in which we could obtain the best response quality in a Web
survey. Our paper is part of this research trend. In fact, in
recent publications, numerous experiments on the topic pro-
vide us with some very promising results. However, we think
that past research has three major limitations, which we will
mainly address in this article:
• no conceptual framework is really available to give an
exhaustive description of the topic: a lot of experimen-
tal studies are available but most of them are often lim-
ited in scope and do not take all the specific aspects of
a web survey into account,
• some important features specific to web surveys (illus-
tration and especially interaction) have seldom been
studied in the past,
• response quality has received much less research at-
tention than response rate, and with a rather restricted
view.
Thus, the objectives of our research are:
1. to propose a general conceptual model in order to
study the effects of on-line questionnaire design on the
quality of collected responses,
2. to report experimental results to test how four major
questionnaire features would influence a wider range
of response quality indexes.
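The randomised design behind the second objective (eight questionnaire versions sent to a target of 10,000 young people) can be sketched as a balanced random assignment of contacts to versions. The following sketch is purely illustrative: the URL pattern and contact addresses are assumptions, not the study's actual materials.

```python
import random

# Illustrative sketch of a balanced random assignment to the eight
# questionnaire versions. URL pattern and addresses are hypothetical.
VERSION_URLS = [f"https://survey.example.org/q/v{i}" for i in range(1, 9)]

def assign_versions(emails, urls, seed=42):
    """Randomly assign each contact to one questionnaire version,
    keeping the experimental cells as balanced as possible."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    shuffled = list(emails)
    rng.shuffle(shuffled)
    # Round-robin over the shuffled list yields near-equal cell sizes.
    return {email: urls[i % len(urls)] for i, email in enumerate(shuffled)}

contacts = [f"respondent{n}@example.org" for n in range(10000)]
assignment = assign_versions(contacts, VERSION_URLS)
# With 10,000 contacts and 8 versions, each cell holds exactly 1,250.
```

Shuffling before the round-robin pass ensures that assignment is independent of the order of the contact list, while the round-robin itself guarantees equal cell sizes.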
The decision to participate in a survey and the response process
Before proposing a theoretical framework of the deter-
minants of response quality in a web-survey, it is necessary
to describe the decision process implemented when a person
is asked to participate in a survey and also the components
of the response process itself. As far as the decision process
is concerned, from a psychological point of view, several au-
thors (Groves, Cialdini and Couper 1992) concentrated on
the work done on “compliance warrants”. The “compli-
ance with requests” approach insists on six principles that
influence the decision to perform a requested activity, such
as the active participation in a survey: reciprocation, consis-
tency (“desire to be consistent within attitudes, beliefs, words
and deeds”), social validation (how similar others are acting),
authority, scarcity and liking. In addition to these theoreti-
cal concepts, Groves, Cialdini and Couper (1992) introduced
some practical knowledge drawn from professional survey
interviewers’ contributions. Two important components are
outlined by experienced interviewers within the techniques
they use to obtain good participation: tailoring (Groves and
Couper 1998) and maintaining interaction. On the basis of
the classical literature on persuasion and attitude change,
Bosnjak and Tuten (2001) established the importance of mo-
tivation, opportunity and ability in the message information
process. On the basis of these psychological theories, the
authors reviewed the factors that may influence the partici-
pation in a survey. They are basically divided into four cat-
egories: the societal-level factors, the characteristics of the
sample person, the attributes of the interviewer and finally,
the attributes of the survey design. The performance of these
attributes is affected by the sample person’s individual charac-
teristics, on the basis of the “leverage-salience theory” pro-
posed by Groves, Singer and Corning (2000).
The components of the response process itself were fully
described by Tourangeau, Rips and Rasinski (2000). The
process consists of four stages: comprehension, retrieval,
judgment and finally response. For them, the presentation
of the questionnaire is one of the most important variables
that may affect the response process, especially at both the
comprehension and reporting (response) stages. They con-
clude their chapter on the comprehension component by giv-
ing some practical advice for survey designers that is gen-
erally “consistent with the evidence and theoretical analyses”
presented by the experts (Bradburn and Sudman 1979, Con-
verse and Presser 1986). This advice covers various aspects
of the questionnaire design, and focuses on the importance
of simplicity of syntax.
Most of the authors say (Couper, Traugott and Lamias
2001, Dillman 2000) – and we also think – that as far as
web surveys are concerned, a deeper investigation is needed
on the relationships between the questionnaire characteristics
and the response patterns. This is probably because they are
in fact the only factors that can be really manipulated when
implementing a web survey and also, because electronic sur-
veys offer a wide range of design possibilities that can have
a great influence on the quality of the collected data (Couper
2002). Moreover, it is known that within self-administered
surveys, in the absence of an interviewer, the respondent
tends to seek information from the instrument itself: the ver-
bal and visual elements of the questionnaire (Schwarz 1996).
The questionnaire characteristics affecting the response patterns
Within the literature dedicated to survey methodology,
many contributions provide tips and hints on how to write a
‘good’ questionnaire in order to get ‘good’ responses. De-
spite the voluminous mass of relevant research data concern-
ing response effects, few theoretical frameworks are avail-
able to structure this knowledge. Based on the Bradburn
and Sudman (1979) proposals, Dijkstra and Van der Zouwen
(1982) first designed a general model of the survey inter-
view. It is divided into three sets of variables influencing
the response patterns: the characteristics of the questions
themselves, the interviewer variables and the respondent
variables. More specifically, the question factors are split
into two groups: the formal characteristics and the content-
related ones. Structural-task characteristics (such as method
of administration, instructions and so on) are described as
moderating variables that would condition the relationships
between the basic factors and the response patterns.
On the basis of previous research, our framework is built
to consistently follow the successive stages of contact be-
tween the respondent and the questionnaire itself. First, the
person quickly sees the length of the questionnaire (or the
number of screens) that represents the level of effort required
to answer. Secondly, the respondent will get an impression of
the conviviality of the form according to the balance between
texts and illustrations. The third step would be the reading of
the questions themselves and the assessment of the wording.
Then the respondent is supposed to apprehend the interactive
components of the survey-interview (tailoring) that could be
crucial for web-surveys in the absence of an interviewer. Fi-
nally, the person would successively look at the response for-
mats and would know exactly what kind of task and what
kind of data is expected: ticks, numbers, texts, etc.
General structure and length of the questionnaire
From the beginning of the history of research on sur-
vey methodology and design, this first category of question-
naire characteristics has been very frequently studied and
more specifically the length of the questionnaire. Common
sense first suggests that a long questionnaire will obtain a
lower response rate than a short one. Some contributions
recommend an optimal length ranging between 15 and 30
questions for self-administered questionnaires, even if this can
be empirically considered too brief for substantial market
and academic research. Much research focuses on the effect
of the length of the questionnaire on the return or response
rates. In many general contributions on survey methodol-
ogy (Foddy 1993 for example), concise drafting is recommended.
An overly long questionnaire would produce an effect of
‘weariness’ on the respondent. A tendency to reproduce
systematic answers (and thus to reduce their variability) is
also reported at the end of long questionnaires (Herzog and
Bachman 1981). As a matter of fact, the literature about ei-
ther traditional mail surveys or Internet-based surveys pro-
vides mixed results. For traditional surveys, Dillman’s To-
tal Design Method (1983) stated that a mail questionnaire
must be perceived as easier and faster to complete and more
visually appealing and interesting, to obtain higher response
rates. However, a complete quantitative review concludes
that the questionnaire length is almost uncorrelated with the
response rate: there seems to be “a negative but very weak re-
lation” between the variables (Yu and Cooper 1983). Then, if
we consider the findings of three recent reviews made specifi-
cally on Web-based surveys, the results seem to be contrasted
(Galesic 2002). On the one hand, statistically, the question-
naire length is not significantly associated with the response rate
(Cook, Heath and Thompson 2000). On the other hand, re-
searchers and practitioners stress the length of the question-
naire as the largest problem for high drop-out rates (Lozar
Manfreda and Vehovar 2002).
Apart from the number of questions, the length of the
questionnaire can also be perceived by the respondent on the
basis of the number of screens. For example, the distinction
between one-page and multiple-page designs has been frequently
discussed (Couper, Traugott and Lamias 2001; Lozar Man-
freda, Batagelj and Vehovar 2002; Reips 2002; Ganassali and
Moscarola 2004). One of the conclusions was that a one-
page design sometimes resulted in higher item non-response
or in more non-substantive answers. More generally, it is
accepted that different questionnaire structures can lead to
different response patterns.
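Quality indicators of the kind mentioned here, such as item non-response and non-substantive answers, can be computed directly from the collected data matrix. The sketch below assumes a hypothetical storage convention (missing answers stored as None, non-substantive answers as strings such as "don't know"); it is an illustration, not the study's actual code.

```python
# Illustrative sketch of two response-quality indexes: item
# non-response and non-substantive answers. Storage conventions
# (None for missing, "don't know" strings) are assumptions.
NON_SUBSTANTIVE = {"don't know", "no opinion"}

def item_nonresponse_rate(record):
    """Share of questions left unanswered by one respondent."""
    return sum(v is None for v in record.values()) / len(record)

def nonsubstantive_rate(record):
    """Share of answered questions given a non-substantive value."""
    answered = [v for v in record.values() if v is not None]
    if not answered:
        return 0.0
    return sum(str(v).lower() in NON_SUBSTANTIVE for v in answered) / len(answered)

record = {"q1": "yes", "q2": None, "q3": "Don't know", "q4": "18-25"}
# One of four questions is skipped; one of three answers is non-substantive.
```

Averaging these per-respondent rates over a whole sample gives version-level indexes that can then be compared across questionnaire designs.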
Obviously, the length of the questionnaire is linked to
the required effort perceived by the target audience of the
survey. To help the respondent to estimate his/her position
in the completion process (and to indicate how far they are
from the end), some authors advise using a point-of-completion
(POC) indicator (Dillman, Tortora and Bowker 1998).
It seems that a POC indicator would reduce dropouts later in
the survey (Healey, Macpherson and Kuijten 2005) but if the
questionnaire is very long, it may not be effective in reducing
break-offs (Conrad, Couper and Tourangeau 2003).
Internet-based surveys offer the opportunity to track
more precisely the respondent behaviour during the inter-
view session. Previously used by some researchers, the log
files provide factual information about the on-line response
process: how the person navigates from one screen to an-
other, how many pages are seen and above all, where he or
she quits. In our study, with the information available on
the ‘SphinxOnLine’ survey server, we had the opportunity to
analyse the survey log files in order to measure the drop-out
rates and the ‘drop-out points’.
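The kind of log-file analysis described here can be sketched as follows. The record layout (session identifier, screen number viewed) is a hypothetical assumption for illustration; the actual 'SphinxOnLine' log format is not documented in this article.

```python
from collections import Counter

# Hypothetical sketch of the drop-out analysis described above.
# Each record is (session_id, screen_number_viewed); this layout
# is an assumption, not the real 'SphinxOnLine' log format.
def dropout_stats(records, final_screen):
    """Return the drop-out rate and a tally of 'drop-out points',
    i.e. the last screen reached by each abandoning respondent."""
    last_screen = {}
    for session, screen in records:
        last_screen[session] = max(last_screen.get(session, 0), screen)
    dropouts = {s: scr for s, scr in last_screen.items() if scr < final_screen}
    rate = len(dropouts) / len(last_screen) if last_screen else 0.0
    return rate, Counter(dropouts.values())

logs = [("a", 1), ("a", 2), ("b", 1), ("b", 5), ("c", 1), ("c", 3)]
rate, points = dropout_stats(logs, final_screen=5)
# Two of the three sessions stop before the final screen (at screens 2 and 3).
```

The tally of drop-out points is what makes it possible to see not only how many respondents abandon, but where in the questionnaire they do so.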
Intensity of illustration
The Internet has introduced new opportunities for the il-
lustration of questionnaires. In surveys, we can distinguish
verbal information from visual information. This visual in-
formation can be displayed on three different levels: ques-
tions on which images play a major part (such as brand
recognition questions for example), images supplementing
the text (i.e. embellishments, illustrations) and inciden-
tal images (i.e. background). The most problematic situation
seems to be the second because the target audience might
not know whether the images are designed as task or style
elements (Couper, Tourangeau and Kenyon 2004). Opinions
are divided on the impact of illustrations on the quality of the
responses. On the one hand, pictures may enhance the at-
tractiveness of the questionnaire and may make it more con-
vivial for the respondent; on the other hand, these visual fea-
tures make the questionnaire more difficult to access or com-
plete, which could reduce the response rate (Deutskens et al.
2004). Couper, Tourangeau and Kenyon (2004) found little
support for a positive effect of illustrations on respondents’
enjoyment or reduction of perceived burden. However, ex-
ploring presentational influences, Ganassali and Moscarola
(2004) have measured increased responses when relevant vi-
sual clues are presented in web interviews. More investi-
gation is needed to test the effects of these various types of
illustration on response quality.
Question wording
The length of the questions, the grammatical syntax and
the level of language have been frequently studied in the sur-
vey methodology literature. In this area, we find a lot of
general contributions that provide advice on “how to write
good questions”. For example, some authors traditionally
suggest that the length of the questions should not exceed
20 words (Payne 1951) or that short questions would reduce
the probability of respondents misunderstanding (Molenaar
1982). Some experimental research has tried to measure
the effects of these wording features on the quality of the
responses, and the results do not seem to strongly support this
hypothesis (Bogen 1996).
As far as the wording itself is concerned, the method-
ological guidelines agree in recommending a ‘simple syntax’.
A complex wording would also lead to a higher probability
of misunderstanding and consequently to a lower response
quality (Foddy 1993). However, in the literature, the concept
of ‘complex syntax’ is seldom defined or measured. Belson
(1981), after studying more than 2,000 questionnaires, built
a framework of the so-called ‘difficult’ questions, of which the
most frequent are questions with special clauses, nega-
tive questions, questions including conjunctions, or multiple-
timed questions. From another point of view, Brennan and
Holdershaw (1999) demonstrated that the length, form and
cue tone of an open-ended question have a significant effect
on both the length and tone of the generated responses. On
these aspects again, more detailed research is needed to de-
fine and evaluate the effects of the grammatical syntax of the
questions on the quality of the generated responses.
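The wording criteria reviewed in this section (Payne's 20-word limit, Belson's markers of 'difficult' questions) lend themselves to a simple automated screening of a draft questionnaire. The sketch below uses heuristic marker lists that are illustrative assumptions, not Belson's exact framework.

```python
import re

# Heuristic sketch flagging potentially 'difficult' question wordings,
# loosely inspired by the criteria reviewed above. The marker lists
# are illustrative assumptions, not Belson's exact framework.
NEGATIONS = {"not", "never", "no", "none"}
CONJUNCTIONS = {"and", "or", "but", "although", "unless"}

def wording_flags(question):
    """Return a list of heuristic warnings for one question wording."""
    words = re.findall(r"[a-z']+", question.lower())
    flags = []
    if len(words) > 20:
        flags.append("over 20 words")
    if any(w in NEGATIONS or w.endswith("n't") for w in words):
        flags.append("negative wording")
    if sum(w in CONJUNCTIONS for w in words) >= 2:
        flags.append("multiple conjunctions")
    return flags

print(wording_flags("Don't you think that prices are not too high?"))
# ['negative wording']
```

A check of this kind can only flag candidates for rewriting; whether a flagged question is actually difficult for respondents still has to be judged, or tested, by the survey designer.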
Interactivity
For most authors (see Stewart and Pavlou 2002), the