Information Quality: Originally, Bailey and
Pearson [2] developed a 39-item instrument to measure
system success. Based on Bailey and Pearson’s work,
Raymond [32] later reduced the instrument to 20 items and
factor-analyzed them. Four factors emerged, namely output
quality, user-system relationship, management support, and
user relationship with EDP staff. Yoon, Guimaraes,
and O’Neal [42] further adapted the instrument to
measure expert system success by excluding the items
measuring the last two factors (i.e., management support
and user relationship with EDP staff). Guimaraes, Staples,
and McKeen [15] subsequently used the instrument with minor
adaptations to Yoon et al.'s version. Their adaptation,
comprising 10 items, was intended to measure user
satisfaction with the quality of an information system.
However, because this study is intended to measure the
quality of informational outputs, users were asked to rate
the factual quality of information received from their
systems rather than their satisfaction with those systems.
Three items related to the user-system relationship were
dropped from Guimaraes et al.'s instrument because they do
not pertain to information quality. In
addition to the items from Guimaraes et al., three items
measuring the quality of contents of the information
provided by the system were added to the instrument. The
three items were adapted from Doll and Torkzadeh [7]’s
measure of end-user computing satisfaction. Thus, users
will be asked to rate 9 items in the instrument, on a scale
from 1 to 5 (where 1 = "not at all" and 5 = "great extent"),
which measure the following aspects of information quality:
a.) output value, b.) timeliness, c.) reliability of the
output, d.) response/turnaround time, e.) accuracy of the
output, f.) completeness of the output, g.) content-preciseness,
h.) content-achievement, and i.) content-sufficiency.
(See Appendix C.)
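The nine-item scale can be aggregated in many ways; as an illustrative sketch only (the study does not prescribe a scoring rule, and the item identifiers and the use of a simple mean here are assumptions for illustration), per-respondent ratings might be combined as follows:

```python
# Illustrative only: the item names and the simple-mean aggregation are
# assumptions, not part of the original instrument's specification.

ITEMS = [
    "output_value", "timeliness", "output_reliability",
    "response_turnaround_time", "output_accuracy", "output_completeness",
    "content_preciseness", "content_achievement", "content_sufficiency",
]

def information_quality_score(ratings: dict) -> float:
    """Average the nine 1-5 item ratings into a single score."""
    for item in ITEMS:
        r = ratings[item]
        if not 1 <= r <= 5:  # scale anchors: 1 = "not at all", 5 = "great extent"
            raise ValueError(f"rating for {item} must be 1-5, got {r}")
    return sum(ratings[item] for item in ITEMS) / len(ITEMS)

# Example respondent: rates every item 4 except timeliness, rated 5.
example = {item: 4 for item in ITEMS}
example["timeliness"] = 5
print(round(information_quality_score(example), 2))  # prints 4.11
```

A summed or factor-weighted score would work equally well; the mean is shown only because it keeps the result on the original 1-to-5 scale.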