Information Systems Research, © 2002 INFORMS
Vol. 13, No. 3, September 2002, pp. 296–315
1047-7047/02/1303/0296$05.00, 1526-5536 electronic ISSN
The Measurement of Web-Customer Satisfaction: An Expectation and
Disconfirmation Approach
Vicki McKinney • Kanghyun Yoon • Fatemeh Mariam Zahedi*
Sam M. Walton College of Business, Information Systems Department, University of Arkansas, 204 Business Building,
Fayetteville, Arkansas 72701-0201
School of Business Administration, University of Wisconsin at Milwaukee, P.O. Box 742, Milwaukee, Wisconsin 53201
School of Business Administration, University of Wisconsin at Milwaukee, P.O. Box 742, Milwaukee, Wisconsin 53201
[email protected] [email protected] [email protected]
Online shopping provides convenience to Web shoppers, yet its electronic format changes information-gathering methods traditionally used by customers. This change raises questions concerning customer satisfaction with the online purchasing process. Web shopping
involves a number of phases, including the information phase, in which customers search for
information regarding their intended purchases. The purpose of this paper is to develop the-
oretically justifiable constructs for measuring Web-customer satisfaction during the informa-
tion phase.
By synthesizing the expectation-disconfirmation paradigm with empirical theories in user
satisfaction, we separate Web site quality into information quality (IQ) and system quality
(SQ), and propose nine key constructs for Web-customer satisfaction. The measurements for
these constructs are developed and tested in a two-phase study. In the first phase, the IQ and
SQ dimensions are identified, and instruments for measuring them are developed and tested. In the second phase, using the salient dimensions of Web-IQ and Web-SQ as the basis for
formulating first-order factors, we develop and empirically test instruments for measuring IQ-
and SQ-satisfaction. Moreover, this phase involves the design and test of second-order factors
for measuring Web-customer expectations, disconfirmation, and perceived performance re-
garding IQ and SQ. The analysis of the measurement model indicates that the proposed metrics
have a relatively high degree of validity and reliability. The results of the study provide reliable
instruments for operationalizing the key constructs in the analysis of Web-customer satisfac-
tion within the expectation-disconfirmation paradigm.
(Web Customer; Satisfaction; Information Quality; System Quality; Web-Information Satisfaction;
Web-System Satisfaction; Construct Validity; MTMM Analysis)
Introduction
In a turbulent e-commerce environment, Internet com-
panies need to understand how to satisfy customers to
*Names listed alphabetically.
sustain their growth and market share. Because cus-
tomer satisfaction is critical for establishing long-term
client relationships (Patterson et al. 1997) and, con-
sequently, is significant in sustaining profitability,
a fundamental understanding of factors impacting
Web-customer satisfaction is of great importance to
e-commerce. Furthermore, the need for research in
Web-customer satisfaction (called e-satisfaction by
Szymanski and Hise 2000) has been accentuated by the increasing demand for the long-term profitability of
dotcom companies and traditional companies that are
Net-enhanced (Straub et al. 2002a).
Satisfaction is the consequence of the customer's ex-
periences during various purchasing stages: (a) need
arousal, (b) information search, (c) alternatives evalua-
tion, (d) purchase decision, and (e) post-purchase be-
havior (Kotler 1997). During the information-search
stage, the Internet offers extensive benefits to Web cus-
tomers by reducing their search cost and increasing
shopping convenience, vendor choices, and product op-
tions (Bakos 1998, Alba et al. 1997). However, the online shopping experience depends on Web site information
to compensate for the lack of physical contact and
causes customers to rely heavily on technology and sys-
tem quality to keep them interested and serviced as they
explore e-stores with ease and pleasure. In other words,
consumers make inferences about product attractive-
ness on the basis of: (1) information provided by retail-
ers and (2) design elements of the Web site such as ease
and fun of navigation (Wolfinbarger and Gilly 2001).
Palmer and Griffith (1998) observed that Web site de-
sign is an interaction between marketing and technological characteristics. Lohse and Spiller (1998) showed that
designing online stores with user-friendly interfaces
critically influences traffic and sales, and Szymanski
and Hise (2000) found product information and site de-
sign critical in creating a satisfying customer experience.
Given the roles of information content and system
design in Web-customer satisfaction, this study fo-
cuses on identifying and measuring the constructs for
the process by which Web-customer satisfaction is
formed at the information search stage. In doing so,
we synthesize the information systems (IS) research on
user satisfaction with the marketing perspectives on customer satisfaction to explore the role of expectation
and disconfirmation regarding information quality
(IQ) and system quality (SQ), which may shed some
light on the process by which Web satisfaction is
formed. Insight into this process could help Web-based
businesses improve their customers' satisfaction, thus
enhancing the effectiveness of e-commerce for both
sellers and buyers. Hence, the purpose of this research
is to identify key constructs and corresponding mea-
surement scales for examining the expectation-disconfirmation effects on Web-customer satisfaction.
In the identification and development of constructs, a
model of the expectation-disconfirmation effects on
Web-customer satisfaction (EDEWS) provides the un-
derlying foundation for the measurement model that
explains the structure and dimensionality of the pro-
posed constructs.
Theoretical Perspectives
End-user satisfaction is an important area of IS re-
search because it is considered a significant factor in measuring IS success and use (Ives and Olson 1984,
Doll and Torkzadeh 1988, DeLone and McLean 1992,
Doll et al. 1994, Seddon 1997). Although many studies
in end-user satisfaction do not explicitly separate in-
formation and system features when identifying the
structure and dimensionality of the user-satisfaction
construct, DeLone and McLean (1992) made an explicit
distinction between information aspects and system
features as determinants of satisfaction. Based on IS
success literature, DeLone and McLean's highly cited
model (1992) identified IQ and SQ as antecedents of
user satisfaction and use.
A similar separation of theoretical constructs can be
found in marketing. In modeling overall satisfaction,
Spreng et al. (1996) identified attribute satisfaction and
information satisfaction as antecedents of satisfaction.
Information satisfaction is based on the quality of the
information used in deciding to purchase a product,
whereas attribute satisfaction measures the consumer's
level of contentment with a product (Spreng et al. 1996,
p. 17). Szymanski and Hise (2000) found that aspects
associated with product information and Web site de-
sign are important determinants in forming customer
satisfaction.
For online shopping, the experience of using a Web
site during the information-search phase could be af-
fected by IQ factors (e.g., a richer product description)
and SQ factors (e.g., other links; see Jarvenpaa and
Todd 1996, 1997). Considering satisfaction in the Web-
usage environment, Pitt et al. (1995) observe that in-
formation is the dominant concern of the user, while
Figure 1 The Model for Expectation-Disconfirmation Effects on Web-Customer Satisfaction (EDEWS)
the delivery mechanism is secondary. Furthermore,
Katerattanakul and Siau (1999) and Zhang et al. (2000)
note that an important role of Web sites is information
delivery and that the quality of information is consid-
ered critical in e-commerce. At the same time, the Web
site's performance in delivering information can be independent of the quality or nature of the information,
thus making it possible to have a clearer distinction
between Web site information and its system. While
distinguishing between IQ and SQ may not be wide-
spread in traditional IS studies, such a distinction is
clearly possible on the Web due to the feasibility of
separating content from the content-delivery system.
Recognizing and modeling information and system as-
pects separately may elucidate the process by which
Web-customer satisfaction is formed.
Based on the nature of Web site development for
online shopping and the proposed models by DeLone
and McLean (1992) and Spreng et al. (1996), we posit
that Web-customer satisfaction has two distinctive
sources: satisfaction with the quality of a Web site's
information content and satisfaction with the Web
site's system performance in delivering information.
Web customers' satisfaction with a Web site's IQ and
SQ is in turn affected by their prior expectations
(formed by their prior experiences and exposure to
vendors' marketing efforts), possible discrepancies
(i.e., disconfirmation) between such expectations, and
the perceived performance of the Web site.
This concept is captured in the expectancy-
disconfirmation paradigm, which has been the popular approach for measuring customer satisfaction in mar-
keting. Based on this paradigm, customer satisfaction
has three main antecedents: expectation, disconfirma-
tion, and perceived performance. When applied to
Web-customer satisfaction, Web-IQ satisfaction has
three antecedents: IQ expectation, IQ disconfirmation,
and IQ-perceived performance. Similarly, Web-SQ sat-
isfaction has three antecedents: SQ expectation, SQ dis-
confirmation, and SQ-perceived performance. Figure 1
shows the EDEWS model, which is the conceptual mo-
tivation for identifying the key constructs in studying
Web-customer satisfaction, as discussed below.
Satisfaction. Based on Spreng et al. (1996), Cadotte et
al. (1987), and Oliver (1980), we define overall satisfac-
tion as an affective state representing an emotional re-
action to the entire Web site search experience. This
definition focuses on the process evaluation associated
with the purchase behavior as opposed to the
outcome-oriented approach, which emphasizes the
buyer's cognitive state resulting from the consumption
experience. IQ satisfaction and SQ satisfaction in this
study have an evaluative nature similar to that of overall satisfaction. Furthermore, following DeLone and
McLean (1992), we define Web IQ as the customers'
perception of the quality of information presented on
a Web site and Web SQ as the customers' perception
of a Web sites performance in information retrieval
and delivery.
In extending the DeLone and McLean model by ad-
dressing issues related to the relevance, timeliness, and
accuracy of information, Seddon (1997) also empha-
sized the importance of IQ and SQ in perceived use-
fulness and user satisfaction. The distinction between
IQ satisfaction and SQ satisfaction is useful in developing a business Web site and for gauging customers'
satisfaction with it. For example, customers dissatisfied
with site retrieval and delivery mechanisms (such as
cluttered pages) are likely to leave the site even if the
information available on the Web site is of high qual-
ity. Conversely, if a Web site lacks the information that
customers need, its entertaining design or ease of
search will not keep customers from leaving the site.
Therefore, the distinction between IQ and SQ pertain-
ing to customer satisfaction has practical implications
for the Web-design process.
Expectation. When consumers consider buying a product, they utilize prior purchasing experiences or
external information to form internal standards of
comparison, which are used in forming their expecta-
tions (Olson and Dover 1979, Oliver 1980). Expectation
is conceptualized as the aggregation of individual be-
lief elements in a consumer's cognitive structure (Olson
and Dover 1979), and is a precursor in predicting a
variety of phenomena involved in buying behaviors
and subsequent perceptions.
There has been a lack of consensus regarding the
conceptual definition of the expectation construct in
the expectancy-disconfirmation and SERVQUAL lit-
erature. In the debate over the validity of expectation
measurement in SERVQUAL (Van Dyke et al. 1997,
Pitt et al. 1997, Kettinger and Lee 1997), Van Dyke et
al. observed that expectation lacks a concise conceptual
definition because of its multiple definitions and cor-
responding operationalizations. For example, three
types of expectation have been suggested: the "should"
expectation, the "ideal" expectation, and the "will" ex-
pectation (Teas 1993, Boulding et al. 1993, Tse and
Wilton 1988). The "should" expectation highlights a
normative standard for performance, whereas the
"ideal" expectation characterizes the optimal perfor-
mance. The "will" expectation focuses on predicting
future performance.
Following the conceptual definition by Olson and
Dover (1979), we define customer expectation as customers'
pretrial beliefs about a product (a Web site in the
current study). Our definition of expectation is in line
with the "will" expectation suggested by Teas (1993)
and with Szajna and Scamell's (1993) conceptualiza-
tion of expectation as a set of beliefs held by IS users
about a system's performance and their own performance when using the system. Furthermore, it also
corresponds with Spreng et al.'s (1996) definition of
expectation as beliefs about a product's attributes or
performance at some time in the future.
Perceived Performance. Perceived performance is
defined as customers' perception of how product
performance fulfills their needs, wants, and desires
(Cadotte et al. 1987). The general role of this con-
struct in the expectation-disconfirmation paradigm
has been a standard of comparison included in the
disconfirmation of expectations. In this respect, em-
pirical research has attempted to investigate the impact of perceived performance on satisfaction di-
rectly (Churchill and Surprenant 1982, LaTour 1979)
or as mediated by disconfirmation (Cadotte et al.
1987, Churchill and Surprenant 1982, Churchill 1979,
Oliver 1980, Swan and Trawick 1980).
Disconfirmation. Disconfirmation is defined as con-
sumers' subjective judgments resulting from comparing
their expectations and their perceptions of perfor-
mance received. This definition is similar to the con-
cept of expectation congruency suggested by Spreng
et al. (1996). Specifically, once consumers form their
expectations, they compare their perceptions of prod-
uct performance (based on their purchasing experi-
ences) to the pre-established levels of expectation. Dis-
confirmation occurs when consumer evaluations of
product performance are different from their pretrial
expectations about the product (Olson and Dover
1979).
Conceptually, there has been a debate regarding
how to measure the disconfirmation construct. There
are two main approaches: (i) to compute disconfir-
mation by subtracting expectation from perceived performance or (ii) to measure disconfirmation directly as
an independent construct of the perceived gap. Ad-
vocating the subtractive approach, Pitt et al. (1997) ar-
gue for including expectations when studying quality
issues and not relying solely on a perception measure-
ment so that richer diagnostic information can be ob-
tained. Likewise, Swan and Trawick (1980) introduce
the subtractive disconfirmation approach, based on
comparison theory.
However, in the literature of SERVQUAL, Van Dyke
et al. (1997) advocate the direct measurement of one's
perception of service quality with a disconfirmation measurement. Furthermore, several studies in market-
ing use the subjective disconfirmation approach, con-
sidering disconfirmation as an independent construct
that influences consumer satisfaction (Oliver 1977,
1980, Churchill and Surprenant 1982, Spreng et al.
1996, Cronin, Jr. and Taylor 1992). We opted for the
direct measurement of disconfirmation because it has
been the more established approach in the expectation-
disconfirmation paradigm.
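The contrast between the two approaches can be sketched in a few lines of Python; all ratings below are hypothetical values invented for illustration, not data from this study:

```python
from statistics import mean

# Hypothetical 7-point ratings for one respondent on three items;
# these values are invented for illustration, not study data.
expectation = [6, 5, 7]   # pretrial ("will") expectation
performance = [4, 5, 6]   # perceived performance after using the site

# (i) Subtractive disconfirmation: a gap score computed by the researcher.
subtractive = [p - e for p, e in zip(performance, expectation)]

# (ii) Direct (subjective) disconfirmation: the respondent rates the gap
# itself (e.g., 1 = "much worse than expected" ... 7 = "much better"),
# so it is collected as its own measurement item rather than computed.
direct = [2, 4, 3]

print(mean(subtractive))  # -1: on average, performance fell short
print(mean(direct))       # 3
```

The direct approach treats the second list as the construct itself, which is the choice this study makes.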
Salient Dimensions of Information and System
Quality
In empirical studies examining expectation-disconfirmation constructs and models in marketing, the can-
didate product's salient attributes are easily identifi-
able and directly measurable. For example, in setting
up their experiments, Churchill and Surprenant (1982)
used a plant and a videodisk player. They chose the
number of "blossoms" and "plant size" as the impor-
tant features of the plant and "focus" and "hum" as
the important features for the videodisk player. Simi-
larly, Spreng et al. (1996) used "versatility" and "video
outcome" as two salient features of a camcorder. How-
ever, in measuring IQ and SQ expectation, perfor-
mance, and disconfirmation, salient dimensions of IQ
and SQ are not pre-established, nor are such dimen-
sions directly measurable. Therefore, the salient di-
mensions of Web IQ and Web SQ should be identified
and measured as latent variables. The salient dimen-
sions can then be used to construct second-order fac-
tors to represent IQ and SQ expectation, performance,
and disconfirmation.
Higher order factors have been used in measuring
complex constructs. For example, Segars and Grover
(1998) developed a second-order factor for measuring the success of strategic planning. Doll et al. (1994) de-
veloped a second-order factor to measure end-user
computing satisfaction as a multifaceted construct.
To measure the second-order constructs, we devised
a two-phase process for instrument development. The
objective of the first phase was to identify the salient
dimensions of Web IQ and Web SQ. In the second
phase, the instrument for construct measurement was
developed and the measurement model was tested us-
ing controlled lab experiments in which IQ and SQ ex-
pectations of participants were manipulated for con-
trolling and measuring the levels of expectation and disconfirmation and their impacts on Web-customer
satisfaction. This section reports on the results of the
first phase.
Phase 1 required the identification of factors consid-
ered important by Web customers in judging the IQ
and SQ of Web sites. A number of researchers have
examined various factors determining Web IQ, but a
standard measure has not emerged. After an extensive
review of the literature, we identified five IQ dimen-
sions: (1) relevance, (2) timeliness, (3) reliability, (4)
scope, and (5) perceived usefulness (Table 1), and four
SQ dimensions: (1) access, (2) usability, (3) navigation, and (4) interactivity (Table 2). The literature search
contributed to the content validity of the constructs to
be measured.
Methods
Construct Validation
To create instruments to measure the constructs of
Web IQ and Web SQ, we began the instrument devel-
opment process with previously tested instruments
(Zmud and Boynton 1991, Bailey and Pearson 1983),
which has been designated as an efficient practice for
IS researchers (Boudreau et al. 2001). The draft instru-
ments used an 11-point semantic differential scale with
values ranging from 0 (not important at all) to 10 (ex-
tremely important). In accordance with Churchill's
(1979) general principles for construct development, a
draft 42-item instrument was created (33 items as
Table 1 First-Order Factors and Subscales for Web-Information Quality
First-Order Factors Definition Supporting Literature
Relevance
Subscales for Relevance
Concerned with such issues as relevancy, clearness, and
goodness of the information
Applicable, Related, Clear
Bailey and Pearson 1983, Bruce 1998, Davis et al. 1989,
Doll and Torkzadeh 1988, Eighmey 1997, Eighmey and
McCord 1998, Saracevic et al. 1988, Seddon 1997,
Wilkerson et al. 1997, Zmud 1978
Timeliness
Subscales for Timeliness
Concerned with the currency of the information
Current, Continuously Updated
Abels et al. 1997, Bailey and Pearson 1983, Doll and
Torkzadeh 1988, King and Epstein 1983, Wilkerson et al.
1997, Zmud 1978
Reliability
Subscales for Reliability
Concerned with the degree of accuracy, dependability,
and consistency of the information
Believable, Accurate, Consistent
Bailey and Pearson 1983, Doll and Torkzadeh 1988,
Eighmey 1997, Eighmey and McCord 1998, King and
Epstein 1983, Wilkerson 1997, Zmud 1978
Scope
Subscales for Scope
Evaluates the extent of information, range of information
and level of detail provided by the Web site. This new
dimension of information quality, similar to a library
search, is needed for Web site evaluation.
Sufficient, Complete, Covers a Wide Range, Detailed
Bailey and Pearson 1983, Doll and Torkzadeh 1988, King
and Epstein 1983, Schubert and Selz 1998, Wilkerson et
al. 1997, Zmud, 1978
Perceived Usefulness
Subscales for
Perceived Usefulness
Users' assessment of the likelihood that the information
will enhance their purchasing decision
Informative, Valuable, Instrumental
Abels et al. 1997, Bailey and Pearson 1983, Davis et al.
1989, Seddon 1997, Doll et al. 1998, Eighmey 1997,
Eighmey and McCord 1998, Larcker and Lessig 1980,
Moore and Benbasat 1991, Venkatesh and Davis 1996,
Venkatesh and Davis 2000
Table 2 First-Order Factors and Subscales for Web-System Quality
First-Order Factors Definition Supporting Literature
Access
Subscales for Access
Refers to the speed of access and the availability of the
Web site at all times
Responsive, Loads Quickly
Bailey and Pearson 1983, Novak et al. 2000, Selz and
Schubert 1998, Wilkerson et al. 1997
Usability
Subscales for Usability
Concerned with the extent to which the Web site is
visually appealing, consistent, fun and easy to use
Simple Layout, Easy to Use, Well Organized, Visually
Attractive, Fun, Clear Design
Abels et al. 1997, Bailey and Pearson 1983, Davis 1989,
Doll et al. 1998, Doll and Torkzadeh 1988, Doll et al.
1994, Dumas and Redish 1993, Eighmey 1997, 1993,
Nielsen 1993, Moore and Benbasat 1991, Schubert and
Selz 1998, Selz and Schubert 1998, Eighmey and McCord
1998, Venkatesh and Davis 1996, Wilkinson et al. 1997,
Zmud 1978
Navigation
Subscales for Navigation
Evaluates the links to needed information
Adequate Links, Clear Description for Links, Easy to
Locate, Easy to Go Back and Forth, a Few Clicks
Abels et al. 1997, Wilkinson et al. 1997
Interactivity
Subscales for Interactivity
Evaluates the search engine and the personal design, i.e.,
the shopping cart feature, of the Web site
Customized Product, Search Engine, Create List of
Items, Change List of Items, Find Related Items
Abels et al. 1997, Eighmey 1997, Eighmey and McCord
1998, Selz and Schubert 1998, Wilkinson et al. 1997
shown in Tables 1 and 2, plus one direct question for
each construct).
The participants in the measurement process were
undergraduate and graduate students at a large met-
ropolitan university. Characterizing Web users as
highly educated (88% had at least some college expe-
rience) with an average age of 38 years (average age
decreased with the increase in number of years on the
Internet and level of skill), the GVU WWW survey
(1998) described generic Web users with educational
profiles similar to those of our participants. Web users
in the GVU WWW survey (1998) were also quite ex-
perienced with the Internet: 74% had more than one
year experience with the Internet. The participants in
this study had an average age of about 27 years, and more than 80% had more than two years' experience
with the Internet.
Initially, 10 Internet customers and experts reviewed
the instrument for the purpose of evaluating it for face
and content validity. The comments collected from the
respondents did not indicate any problems. As rec-
ommended by a respondent, two versions of the in-
strument were created and used to avoid order bias.
The first pilot test was performed based on a conve-
nience sample of 47 usable responses. An examination
of the factor analysis results showed the existence of
additional factors, leading to the addition of "under-
standability" and "adequacy" as two more IQ dimen-
sions, and "entertainment" as an additional SQ dimen-
sion. Furthermore, the "easy to locate" item had a very
low loading. Because its meaning did not correspond
with the concept of navigation (the factor it was in-
tended to measure), it was dropped at this stage. Six
new items and three general questions (one per new
construct) were added. The changes resulted in a pu-
rified instrument (Churchill 1979) with 50 items for
measuring IQ and SQ dimensions and their impor-
tance. A second pilot test was performed to test the
modified instrument based on another convenience
sample of 47 usable responses. Examination of the
findings showed the instrument to be reliable with no
major bias.
The twice-piloted instrument was used for data col-
lection on Web-IQ and Web-SQ dimensions in the first
phase of the study. Data were collected in two rounds,
yielding 330 usable responses in the first round and
238 in the second round, a total of 568 observations.
There were no overlaps between the subjects in the two
rounds. Examination of the t-test results gave no indication of item order bias.
In the analysis of the IQ dimensions, the construct
for timeliness showed extensive cross-loadings with
the reliability factor. It seems that Web customers view
out-of-date information as unreliable for making pur-
chase decisions. Therefore, for further purification
(Churchill 1979), this factor was dropped. One item for
usefulness ("instrumental") was dropped due to its
low factor loading in measuring perceived usefulness.
The results indicated six factors for Web IQ (Table 3).
The high factor loadings indicate convergent validity,
and the lack of noticeable cross-loadings supports
discriminant validity of the reported factors for Web IQ.
The mean importance rating is the average of subject
ratings of the items comprising each factor and thus
indicates the strength of conviction that the subjects
had concerning the importance of the construct. The
last row of Table 3 reports the mean importance rating,
which is used in the second phase of the study for
selecting the salient dimensions.
The factor analysis results for Web SQ indicated that
the navigation factor should be divided into (internal)
navigation and (external) hyperlinks, which is quite
meaningful in the context of information search for
online shopping. Due to low factor loading in measuring
interactivity, the "search engine" item was removed
from the analysis. Table 4 reports the results of the
factor analysis of Web-SQ dimensions. Again, the high
factor loadings for the reported factors and the absence
of significant cross-loadings support the convergent
validity and discriminant validity of the proposed
factors.
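Reading convergent and discriminant validity off a loading matrix can be sketched as follows, using the three relevance items from Table 3; the 0.70 and 0.40 cutoffs are common rules of thumb assumed for this example, not thresholds stated in the paper:

```python
# Loadings of the three relevance items on the six extracted factors,
# copied from Table 3; factor 3 (index 2) is the intended factor.
loadings = [
    [0.165, 0.037, 0.815, 0.213, 0.109, 0.140],  # Applicable
    [0.126, 0.183, 0.815, 0.123, 0.173, 0.159],  # Related
    [0.188, 0.112, 0.785, 0.204, 0.196, 0.157],  # Pertinent
]
intended = 2

# Convergent validity: every item loads highly on its intended factor
# (0.70 is an assumed rule-of-thumb cutoff).
convergent = all(row[intended] > 0.70 for row in loadings)

# Discriminant validity: no noticeable cross-loadings on the other
# factors (0.40 is likewise an assumed cutoff).
discriminant = all(v < 0.40
                   for row in loadings
                   for j, v in enumerate(row) if j != intended)

print(convergent, discriminant)  # True True
```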
Appendix A includes the instrument and Cronbach
alphas for the factors reported in Tables 3 and 4. The
alpha values for all factors in Web IQ exceed 0.85. High
reliability was also present for usability, entertainment,
hyperlinks, and interactivity. However, the Cronbach
alpha was 0.51 for access and 0.68 for navigation.
Whereas the 0.68 value might be acceptable in
exploratory research (Nunnally 1967), the same cannot
be said of the 0.51 value. But because usefulness in IQ
and navigation, access, and hyperlinks in SQ are two-
item factors, the interitem correlation can be used as
an appropriate check for these factors.
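The Cronbach alpha and interitem-correlation checks described here can be computed as in the following sketch; the two-item response matrix is simulated for illustration and is not the study's data:

```python
import random
from statistics import mean, variance

def cronbach_alpha(items):
    """items: list of respondent rows, one rating per scale item."""
    k = len(items[0])
    item_vars = sum(variance([row[j] for row in items]) for j in range(k))
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - item_vars / total_var)

def pearson(x, y):
    """Interitem (Pearson) correlation, the check used for two-item factors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Simulated ratings for a two-item factor (e.g., access); not study data.
random.seed(0)
rows = []
for _ in range(200):
    true_score = random.gauss(7, 1.5)          # shared construct
    rows.append([true_score + random.gauss(0, 1),
                 true_score + random.gauss(0, 1)])

alpha = cronbach_alpha(rows)
r = pearson([row[0] for row in rows], [row[1] for row in rows])
print(round(alpha, 2), round(r, 2))
```

For a two-item scale, alpha is a simple function of the interitem correlation, which is why the correlation itself is the more informative check.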
Table 3 Factor Analysis for Information Quality (N = 568)
Constructs Manifest Variables Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 Factor 6
Relevance (3) Applicable 0.165 0.037 0.815 0.213 0.109 0.140
Related 0.126 0.183 0.815 0.123 0.173 0.159
Pertinent 0.188 0.112 0.785 0.204 0.196 0.157
Understandability (3) Clear in Meaning 0.349 0.045 0.290 0.703 0.174 0.250
Easy to Understand 0.204 0.106 0.180 0.857 0.197 0.113
Easy to Read 0.168 0.171 0.171 0.823 0.122 0.123
Reliability (3) Trustworthy 0.843 0.070 0.174 0.225 0.201 0.133
Accurate 0.863 0.037 0.186 0.228 0.189 0.129
Credible 0.856 0.121 0.136 0.156 0.199 0.110
Adequacy (3) Sufficient 0.259 0.207 0.175 0.086 0.760 0.164
Complete 0.308 0.151 0.272 0.288 0.659 0.238
Necessary Topics 0.194 0.299 0.181 0.253 0.699 0.183
Scope (3) Wide Range 0.129 0.854 0.128 0.104 0.173 0.081
Wide Variety of Topics 0.076 0.924 0.099 0.127 0.141 0.085
# of Different Subjects 0.004 0.878 0.073 0.061 0.146 0.173
Usefulness (2) Informative 0.180 0.272 0.246 0.207 0.235 0.799
Valuable 0.227 0.161 0.299 0.249 0.285 0.759
Variance Explained 46.6 12.5 7.9 6.4 5.0 3.8
Mean Importance Rating 8.96 6.91 7.80 8.43 8.12 8.15
Table 4 Factor Analysis for System Quality (N = 568)
Constructs Manifest Variables Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 Factor 6
Access (2) Responsive 0.149 0.204 0.065 0.024 0.041 0.854
Quick Loads 0.084 0.264 0.066 0.023 0.347 0.657
Usability (4) Simple Layout 0.180 0.759 0.025 0.189 0.047 0.161
Easy to Use 0.119 0.791 0.114 0.132 0.134 0.236
Well Organized 0.125 0.767 0.223 0.091 0.208 0.155
Clear Design 0.205 0.691 0.247 0.019 0.308 0.034
Entertainment (3) Visually Attractive 0.221 0.218 0.734 0.112 0.195 0.032
Fun 0.187 0.133 0.888 0.149 0.091 0.053
Interesting 0.138 0.117 0.866 0.193 0.051 0.076
Hyperlinks (2) Adequate # of Links 0.179 0.132 0.278 0.836 0.121 0.022
Clear Description for Links 0.179 0.199 0.151 0.839 0.209 0.027
Navigation (2) Easy to Go Back and Forth 0.189 0.202 0.119 0.259 0.741 0.193
A Few Clicks 0.127 0.294 0.170 0.120 0.699 0.126
Interactivity (4) Create List of Items 0.799 0.108 0.174 0.063 0.244 0.020
Change List of Items 0.818 0.114 0.072 0.034 0.306 0.102
Create Customized Product 0.785 0.174 0.209 0.197 0.091 0.117
Select Different Features 0.770 0.212 0.178 0.189 0.003 0.104
Variance Explained 39.0 10.7 9.0 6.4 5.3 4.5
Mean Importance Rating 7.36 8.17 7.14 6.70 8.09 8.40
We also examined interitem correlations for each
factor. These correlations were quite high for Web IQ
and are relatively high for Web SQ. Of the two-item
factors, usefulness and hyperlinks had relatively high
interitem correlations, whereas access and navigation
exhibit relatively lower correlation. However, all cor-
relations are statistically significant. General questions
showed high correlations with factor items in most
cases. High correlation values among items and with
the general question in each factor indicated support
for the presence of convergent validity.
Construct Measurements, Experimental Design, and the Measurement Model

In the second phase, the three most important dimensions of Web IQ and Web SQ were selected for manipulating expectations and measuring perceived performance and disconfirmation. As shown in Appendix A, the importance ratings of the Web-IQ and Web-SQ dimensions were measured on an 11-point semantic differential scale ranging from 0 (not important at all) to 10 (extremely important). The criterion was to select the three factors with the highest mean importance ratings.

The importance rating measures how important each dimension is to subjects. Using such ratings, Brancheau et al. (1996) identified key research issues of IS management. In the same context, Wang and Strong (1996) classified attributes of data quality to create a hierarchical representation of consumers' data-quality needs. Furthermore, in selecting the most important features of Web IQ and Web SQ based on importance ratings, we followed the common practice in marketing of selecting the most important attributes when designing experiments to test expectation-confirmation models. Based on this criterion, the three most salient dimensions of Web IQ were reliability, understandability, and usefulness. Similarly, access, usability, and navigation were selected as the top three salient dimensions of Web SQ (Tables 3 and 4).

The rationale for using three dimensions was that second-order constructs had to be created from these factors. Chin (1998, p. x) suggested that: "To adequately test the convergent validity, the number of first-order factors should be four or greater (three, while statistically adequate, would represent a just-identified model for congeneric models)." Kline (1998) suggested that for a confirmatory factor analysis model with a second-order factor to be identified, at least three first-order factors are needed. On the other hand, using more than three dimensions would increase the complexity of the measurement model to an unacceptable level in terms of estimation and sample size. In most experiments designed to test expectation-disconfirmation models, only two salient attributes have been used (Churchill and Surprenant 1982, Spreng et al. 1996). Hence, using three salient dimensions for IQ and SQ provides adequate data for testing the EDEWS measurement model while keeping the complexity of the experiments at a manageable level.
The selected three salient dimensions of Web IQ and Web SQ were used in developing the measurement model (as shown in Appendix F), as well as in developing the instruments for measuring expectation, perceived performance, and disconfirmation for Web IQ and Web SQ, as reported below.
Expectation Measurement. Expectations regarding the reliability, understandability, and usefulness of Web sites were measured as first-order factors, which were used in creating a second-order factor for measuring the IQ-expectation construct. Similarly, the first-order factors measuring expectations regarding access, usability, and navigation were used to create a second-order factor for measuring the SQ expectation. Manifest variables for expectations were measured on an 11-point semantic differential scale ranging from "not likely at all" to "highly likely," as shown in Appendix B.
Perceived-Performance Measurement. Conceptually, two different definitions of the performance construct are possible: perceived (subjective) product performance and objective product performance. Because the expectation-disconfirmation paradigm focuses on customers' subjective judgments of product performance, this study measured perceived performance. The construct for IQ-perceived performance was measured as a second-order factor using the first-order factors for perceived performance regarding reliability, understandability, and usefulness. Similarly, the second-order construct for SQ-perceived performance was measured using the SQ first-order dimensions of access, usability, and navigation. Manifest variables
for perceived performance were measured on an 11-point semantic differential scale ranging from "very poor" to "very good," as shown in Appendix C.

Disconfirmation Measurement. This study follows the subjective disconfirmation approach by measuring disconfirmation directly. The first- and second-order factors for IQ and SQ disconfirmation were created in a manner similar to the factors used for measuring IQ and SQ expectation and perceived performance. The instrument for disconfirmation measurement directly evaluates disconfirmation as an independent construct, on an 11-point semantic differential scale ranging from 0 (much lower than you thought) to 10 (much higher than you thought), with 5 as the neutral midpoint (the same as you expected), as shown in Appendix D. Positive disconfirmation is measured by scale values above 5; negative disconfirmation is measured by scale values below 5; and 5 represents zero disconfirmation.
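As a minimal illustration (our own sketch, not part of the original instrument), the scoring rule above amounts to centering each rating at the neutral midpoint:

```python
def disconfirmation(rating: float) -> float:
    """Center an 11-point disconfirmation rating (0-10) at the neutral
    midpoint of 5. Positive results indicate positive disconfirmation
    (performance above expectation); negative results the reverse."""
    if not 0 <= rating <= 10:
        raise ValueError("rating must be on the 0-10 scale")
    return rating - 5
```

For example, a rating of 8 yields a positive disconfirmation of 3, while a rating of 2 yields a negative disconfirmation of the same magnitude.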
Satisfaction Measurement. Using a single-item measure, Westbrook (1980) measured consumer satisfaction on a delightful-terrible scale, capturing the degree of delight consumers experienced in consuming a cognitively fulfilling product. On the other hand, Churchill and Surprenant (1982), following Pfaff's (1977) approach, described overall satisfaction with cognitive and affective models and used multi-item measures of belief and affect to assess satisfaction. Similarly, Spreng et al. (1996) based their definition of satisfaction on a summary evaluation of the entire product-use experience and developed four scales using cognitive and affective components to describe satisfaction.

We developed the measurements of Web-IQ satisfaction and Web-SQ satisfaction, as well as overall Web-user satisfaction, based on published instruments with Cronbach alpha values greater than 0.96 (Oliver 1989, Spreng et al. 1996).1 Spreng et al. (1996) measured satisfaction using four 11-point semantic differential scales. As shown in Appendix E, we adopted these scales to measure satisfaction with IQ, SQ, and overall satisfaction, and added two items to elicit overall satisfaction with a Web site through the intention to reuse it and to recommend it to others.

1 The assumption in adopting this procedure is that high alphas represent reliable scales. However, it is possible that high alphas (Straub et al. 2000b) could result from common-methods bias (Cook and Campbell 1979). It is important to assess whether the instrumentation process used maximally different methods to examine different variable types. In this case, high alphas would represent more reliable scales.
Experimental Design. The experiment was a 4 × 4 factorial design, intended to estimate the EDEWS measurement model via confirmatory factor analysis. A total of 16 cells were created for this study: 4 actual combinations of IQ and SQ levels by 4 manipulated expectation levels. Churchill and Surprenant (1982) used a similar factorial design by setting up three performance categories for plants and videodisk players and used credible printed messages to manipulate subject expectations in the three performance categories, hence producing a total of nine cells for each product. Such manipulations are needed to create a common standard of comparison and to control the levels of expectation. In this study, four Internet travel-agent Web sites were selected to fit the high-high, high-low, low-high, and low-low levels of the Web-IQ and Web-SQ constructs. Web site selection was based on ratings of Internet travel agents by PC World, ComputerWorld, and Gomez.com.2 The authors evaluated, categorized, and synthesized the quality dimensions and rating information provided by these sources and used the results to create rating reports for implementing the experimental design. High IQ and high SQ indicate that the chosen Web site possesses a high level of IQ and SQ in terms of the three selected salient dimensions.
The experimental protocol required the manipulation of subject expectations by setting them to high-high, high-low, low-high, and low-low for IQ and SQ. Expectations were manipulated at the start of the experiment by showing subjects rating reports with credible rating information for each salient dimension, along with descriptions of the assigned Web site. To ensure the experiment's objective of setting expectations, the researchers created one true and three mock rating reports for each Web

2 PC World, ComputerWorld, and Gomez.com compared 9, 6, and 22 online travel agents, respectively. PC World (using a five-point scale: excellent, very good, good, fair, and poor) and ComputerWorld (using scores of A to F) compared these agents' Web sites on various attributes of information and system features. Gomez.com rated Web sites using an 11-point scale (0 to 10) for various criteria and the resulting overall score.
site. For example, for the travel Web site with high IQ and high SQ, one rating report had true high-high ratings, and three mock rating reports indicated high-low, low-high, and low-low levels for IQ and SQ.

Subjects were randomly assigned to 1 of the 16 cells. Subjects participated in an information-search experiment requiring them to purchase an airline ticket over the Web. Usable data were collected from 312 subjects. During the experiment session, each subject received a rating report for a Web site to review. Based on their review of this information, subjects completed a questionnaire (Appendix B) designed to measure their IQ and SQ expectations. The questionnaire also collected demographic information about the subjects and their Web experience. Upon completing the questionnaire, subjects searched the assigned Web sites for 20 minutes. Following the search period, subjects completed a second questionnaire designed to measure their perceived performance and disconfirmation, as well as their Web-IQ and Web-SQ satisfaction. Overall satisfaction was also measured at this time. Appendices C-E report the instruments used in developing these questionnaires.
Analysis and Results

Structural equation modeling (SEM) was the designated tool for estimating the EDEWS measurement model in the confirmatory factor analysis; we used the most recent software for such an analysis, Mplus (developed by Muthen and Muthen 2001 and based on Muthen 1984). The estimation algorithm was mean-adjusted maximum likelihood, which adjusts the estimation results for nonnormality in the data.
The internal validity of the experiments was tested by manipulation checks verifying that the manipulations had taken effect (Perdue and Summers 1986). Manipulation checks are "intended to measure the extent to which treatments have been perceived by the subjects" (Boudreau et al. 2001, p. 5). The manipulations of expectation were investigated to confirm whether different levels of expectations were successfully set. We estimated two logistic regressions with the IQ and SQ manipulations as the dependent (categorical) variables and the factor scores for IQ expectation and SQ expectation as independent variables. The coefficients of the estimated functions were significant with p values of 0.000, and tests of the significance of the estimated functions had p values of 0.000. The results were further confirmed by additional analysis using ANOVA, in which the F statistics had p values below 0.000. These findings indicate the successful manipulation of the participants' expectations.
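The ANOVA portion of this check reduces to comparing expectation factor scores across manipulation groups. A self-contained sketch of the one-way F statistic follows (illustrative data only; the paper used logistic regression and standard ANOVA software, not this code):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares for, e.g., expectation factor scores
    grouped by manipulation condition."""
    k = len(groups)                       # number of conditions
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Widely separated group means produce a large F (hypothetical scores):
print(one_way_f([[1, 2, 3], [7, 8, 9]]))  # 54.0
```

A large F with a small p value, as reported above, indicates that the groups' mean expectation scores differ, i.e., that the manipulation succeeded.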
We estimated the measurement model containing the confirmatory factor analysis for the constructs. The normed chi-square (the ratio of chi-square to its degrees of freedom) for the measurement model was two, below the recommended cutoff of three.

The t values (estimated factor loadings divided by their standard errors) for the loadings of manifest variables were well above two, supporting the statistical significance of the parameter estimates (Muthen and Muthen 2001, p. 74). The t values of the factor loadings in the measurement model ranged from 18 to 119, indicating strong convergent and discriminant validity. Furthermore, the high squared multiple correlations (R2 values) for the indicators support the assertion that the indicators are good measures of the constructs (Gefen et al. 2000; Bollen 1989, p. 288). (Appendix F contains details on the measurement model, confirmatory factor loadings, R2, and t values.)
Although there is a debate regarding the use of multitrait-multimethod (MTMM) analysis (Alwin 1973, 1974; Bagozzi et al. 1991; Bagozzi and Yi 1991), we used MTMM to further examine the convergent and discriminant validity of the factors (Campbell and Fiske 1959, Straub 1989). Although no clear criterion exists as to what makes methods maximally dissimilar (Pedhazur and Schmelkin 1991), we applied the approach used by Davis (1989) to argue that an item, say "clear in meaning," used for measuring the expectation of understandability (in Appendix B) should be different from the "clear in meaning" item used for measuring the perceived performance of understandability (in Appendix C) or the disconfirmation of understandability (in Appendix D). In this sense, the "clear in meaning" item may be said to be a method used for measuring different traits: the expectation, perceived performance, and disconfirmation of understandability,
Table 5 Reliability Measures for Model Constructs

First-Order Factors | Cronbach Alpha | Composite Factor Reliability | Average Variance Extracted (AVE)
E-Understandability | 0.95 | 0.90 | 0.74
E-Reliability | 0.97 | 0.90 | 0.75
E-Usefulness | 0.95 | 0.78 | 0.64
P-Understandability | 0.95 | 0.88 | 0.71
P-Reliability | 0.97 | 0.89 | 0.72
P-Usefulness | 0.95 | 0.70 | 0.52
D-Understandability | 0.94 | 0.88 | 0.71
D-Reliability | 0.95 | 0.84 | 0.65
D-Usefulness | 0.95 | 0.78 | 0.65
E-Access | 0.80 | 0.87 | 0.60
E-Usability | 0.96 | 0.80 | 0.67
E-Navigation | 0.93 | 0.75 | 0.62
P-Access | 0.80 | 0.91 | 0.74
P-Usability | 0.97 | 0.79 | 0.66
P-Navigation | 0.86 | 0.75 | 0.61
D-Access | 0.79 | 0.90 | 0.71
D-Usability | 0.96 | 0.79 | 0.66
D-Navigation | 0.81 | 0.77 | 0.63
Web-information satisfaction | 0.97 | 0.90 | 0.69
Web-system satisfaction | 0.98 | 0.91 | 0.70
Overall satisfaction | 0.98 | 0.96 | 0.84
and should show low correlations across the traits (heterotrait-monomethod).3

Acknowledging that this definition of method is not a use of maximally different methods, we follow Davis in arguing that the items for measuring the expectation of understandability (as reported in Appendix B) are different methods measuring the same trait and should have high correlations with each other (monotrait-heteromethod). Similar arguments can be made for the other five IQ and SQ factors. In the satisfaction case, the different satisfaction types (IQ, SQ, and overall) were used as traits, and the common satisfaction items were used as methods. Thus, seven correlation matrices were created: one for each of the six IQ and SQ first-order factors (understandability, reliability, usefulness, access, usability, and navigation) and one for satisfaction. These matrices were examined for evidence of the convergent and discriminant validity of the expectation, perceived-performance, and disconfirmation constructs.
To investigate convergent validity, the monotrait-heteromethod triangles for each construct were examined for high values; 100% of the correlations were significant for the traits, supporting convergent validity. To examine discriminant validity, each matrix was analyzed individually, resulting in a total of 1,608 comparisons and 20 violations (a 1.2% exception rate), a rate that meets the discriminant-validity criterion set by Campbell and Fiske (1959).
The evidence for the reliability of the first-order factors is reported in Table 5. Cronbach alphas were all above 0.79, with most above 0.90. (The interitem correlations for the two-item factors are also reported in Appendices B-D.) Table 5 also shows the composite factor reliability values for the constructs, which are at or above the recommended threshold of 0.70 (Segars 1997).
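For reference, Cronbach's alpha can be computed from raw item scores as follows (a generic sketch, not the authors' code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score lists (one list per item,
    respondents in the same order): k/(k-1) * (1 - sum of item
    variances / variance of the summed scale)."""
    k = len(items)

    def var(xs):  # sample variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))
```

Highly consistent items drive alpha toward 1, which is the pattern Table 5 shows for most factors.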
Average variance extracted (AVE) shows the amount of variance captured by a construct compared to the variance caused by measurement error (Fornell and Larcker 1981, Segars 1997). The AVE values for all measures (Column 4) exceeded the recommended threshold of 0.50 (Segars 1997), indicating that the constructs captured a relatively high level of variance.

3 Note that Straub et al. (2000b) raise a serious concern about Davis's (1989) MTMM analysis in this regard. They argue that Davis's methods were not different and not "maximally different" as described and demonstrated in Campbell and Fiske (1959).
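Given standardized loadings such as those in Appendix F, composite reliability and AVE follow the Fornell and Larcker (1981) formulas; a sketch with helper names of our own choosing:

```python
def composite_reliability(loadings):
    """Composite reliability from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each error variance is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

def average_variance_extracted(loadings):
    """AVE: sum of squared loadings over (sum of squared loadings +
    sum of error variances); with standardized loadings this reduces
    to the mean squared loading."""
    squared = sum(l * l for l in loadings)
    error = sum(1 - l * l for l in loadings)
    return squared / (squared + error)
```

For example, three items each loading at 0.8 give an AVE of 0.64, above the 0.50 threshold cited from Segars (1997).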
Following Doll et al. (1994) and Segars and Grover (1998), the three first-order factors of understandability, reliability, and usefulness were used to create the second-order factors for the IQ constructs, and access, usability, and navigation for the SQ constructs. Based on Doll et al. (1994), R2 values for the second-order factors were computed (Table 6). High R2 values indicate an acceptable level of reliability for the second-order factors (Doll et al. 1994, Bollen 1989, Gefen et al. 2000). Significant factor loadings for the second-order factors (Appendix F) indicate their validity (Doll et al. 1994).
Implications, Limitations, and Future Directions

In measuring Web-customer satisfaction, a critical task is to identify the key constructs of Web-customer satisfaction and to develop validated instruments to measure them. Hence, the results of this study have immediate implications for businesses operating on the Web and for research in Web-customer satisfaction.
Table 6 R2 Values for Second-Order Factors

First-Order Factors | Information Quality Expectation | Information Quality Performance | Information Quality Disconfirmation
Understandability | 0.59 | 0.64 | 0.67
Reliability | 0.77 | 0.82 | 0.85
Usefulness | 0.88 | 0.97 | 0.87

First-Order Factors | System Quality Expectation | System Quality Performance | System Quality Disconfirmation
Access | 0.79 | 0.86 | 0.86
Usability | 0.88 | 0.81 | 0.79
Navigation | 0.84 | 0.70 | 0.78

Note. The R2 values were computed based on Doll et al. (1994) using the CALIS procedure in SAS.
Implications for Practice. As online shopping becomes common practice, online retailers are increasingly being held to the same business-performance standards as businesses operating in traditional markets. Managers of online retailers need to monitor their customers' satisfaction with their Web sites to compete in the Internet market. In doing so, they need to recognize the distinct roles of information content and Web site performance in retrieving and delivering product information. This imperative is due to the fact that customers dissatisfied with a Web site's information content will leave the site without making a purchase. Similarly, no matter how thorough the information content of a site is, a customer who has difficulty searching for and retrieving the needed information is likely to leave the site. Therefore, one can add value and create insight by examining Web-customer satisfaction with the information content as well as the system quality. With access to reliable and scientifically tested metrics, practitioners can examine the structure and dimensionality of Web-customer satisfaction. Our proposed metrics for separately measuring the IQ and SQ constructs can assist managers in this regard, because our analysis focuses distinctly on both the information content and the delivery of that information.
Furthermore, online customers commonly have repeated experiences with various Web sites. Therefore, gauging their expectations and the disconfirmation of those expectations can be of value in analyzing Web-customer satisfaction. Consequently, online retailers can examine whether their Web sites meet their customers' expectations by examining Web customers' IQ and SQ expectations and disconfirmation. Moreover, the introduction of the expectation and disconfirmation constructs brings the marketing aspect of Web sites into focus for such retailers, an aspect crucial to the effective design of Web sites for online business.
Implications for Research. Our work paves the way for researchers to investigate the impact of expectations and disconfirmation on Web-customer satisfaction by clearly delineating the Web-IQ and Web-SQ dimensions. It shows the complex nature of the constructs and the experimental design needed for accurately analyzing the process by which Web-customer satisfaction is formed and for testing hypotheses regarding relationships among these constructs. In addition, validated measures could provide consensus among researchers of customer satisfaction and encourage them to develop more refined measurement models (Segars and Grover 1998). This study provides the metrics needed to initiate future studies on Web-customer satisfaction.
Limitations. The reported results are limited by the type of subject, the nature of laboratory experimentation, and the choice of Web sites. Using students as subjects could have an impact on the results (Szymanski and Henard 2001). Testing the measurement model with other strata of Web customers will add to the generalizability of our results. Second, the nature of lab experiments and the choice of Web sites limit the reported results. Because the purchase of airline tickets is a prevalent practice among Web users, this study employed the Web sites of online travel agents for its experiments. However, Web-customer satisfaction may depend on the distinctive nature of the products or services offered online. Replicating this study for other types of products and services can enhance the generalizability of the reported results.
Directions for Future Research. The results of this study facilitate further research in analyzing the antecedents of Web-customer satisfaction. Such an analysis can provide valuable insight into the process by which
Web-customer satisfaction is formed and into the identification of factors that could lead to a more satisfying experience in the information phase of online shopping. For success in e-commerce, the information-search stage must lead to a purchase decision. Because the present study's focus was on the measurement of Web-customer satisfaction, the proposed constructs did not contain purchase intention. However, a comprehensive approach is needed to examine the influence of satisfaction on purchase intention in the Web context. Furthermore, in his model, Seddon (1997) includes the net benefits of the IS to individuals, organizations, and society. It would be instructive to examine these benefits in the context of Web sites, and the role of these factors in the formation of expectations about Web sites.
Conclusions

In this study, two perspectives, from the user-satisfaction literature in IS and the customer-satisfaction literature in marketing, were synthesized to identify nine key constructs for analyzing Web-customer satisfaction. Based on the IS literature, we argued that measuring Web-customer satisfaction with information quality and system quality provides insight into a customer's overall satisfaction with a Web site. By synthesizing IS and marketing theories related to customer satisfaction, key constructs were identified for Web-customer satisfaction, with a model for Expectation-Disconfirmation Effects on Web-Customer Satisfaction (EDEWS) demonstrating the role these constructs play in the formation of overall Web-customer satisfaction. The EDEWS measurement model provided strong support for the reliability and validity of the proposed metrics for measuring the key constructs of Web-customer satisfaction.
Acknowledgments

The authors thank Fred Davis, the Guest Editor, the Associate Editor, and the reviewers for their helpful comments on this paper.
Appendix A. Information Quality and System Quality Measurement Scales and Reliabilities (Phase 1)

All items were measured on a continuous 11-point semantic differential scale, where 0 = not important at all and 10 = extremely important. (Each construct has a general question that is reported here but is not used in computing the Cronbach alpha. Additionally, interitem correlations are reported for two-item factors.)
Information Quality
Relevance: (Cronbach alpha = 0.85)
Information that is applicable to your purchase decision is:
Information that is related to your purchase decision is:
Information that is pertinent to your purchase decision is:
In general, information that is relevant to your purchase decision
is:
Understandability: (Cronbach alpha = 0.88)
Information that is clear in meaning is:
Information that is easy to comprehend is:
Information that is easy to read is:
In general, information that is understandable for you in making your purchase decision is:
Reliability: (Cronbach alpha = 0.92)
Information that is trustworthy is:
Information that is accurate is:
Information that is credible is:
In general, information that is reliable for making your purchase
decision is:
Adequacy: (Cronbach alpha = 0.82)
Information that is sufficient for your purchase decision is:
Information that is complete for your purchase decision is:
Information that contains necessary topics for your purchase decision is:
In general, information that is adequate for your purchase decision
is:
Scope: (Cronbach alpha = 0.91)
Information that covers a wide range is:
Information that contains a wide variety of topics is:
Information that contains a number of different subjects is:
In general, information that covers a broad scope for your purchase
decision is:
Usefulness: (Cronbach alpha = 0.88, interitem correlation = 0.78)
Information that is informative to your purchase decision is:
Information that is valuable to your purchase decision is:
In general, information that is useful in your purchase decision is:
System Quality
Access: (Cronbach alpha = 0.57, interitem correlation = 0.40)
A Web site that is responsive to your request is:
A Web site that quickly loads all the text and graphics is:
In general, a Web site that provides good access is:
Usability: (Cronbach alpha = 0.84)
A Web site that has a simple layout for its contents is:
A Web site that is easy to use is:
A Web site that is well organized is:
A Web site that has a clear design is:
In general, a Web site that is user-friendly is:
Entertainment: (Cronbach alpha = 0.87)
A Web site that is visually attractive is:
A Web site that is fun to navigate is:
A Web site that is interesting to navigate is:
In general, a Web site that is entertaining is:
Hyperlinks4: (Cronbach alpha = 0.83, interitem correlation = 0.70)
A Web site that has an adequate number of links is:
A Web site that has clear descriptions for each link is:
Navigation: (Cronbach alpha = 0.68, interitem correlation = 0.52)
A Web site, on which it is easy to go back and forth between pages,
is:
A Web site that provides a few clicks to locate information is:
In general, a Web site, on which it is easy to navigate, is:
Interactivity: (Cronbach alpha = 0.87)
A Web site that provides the capability to create a list of selected items
(such as shopping cart) is:
A Web site that provides the capability to change items from a created
list (such as changing contents of a shopping cart) is:
A Web site that provides the capability to create a customized product
(such as computer configuration or creating clothes to your taste and
measurements) is:
A Web site that provides the capability to select different features of the product to match your needs is:
In general, a Web site on which you can actively participate in creating your desired product is:
Appendix B. Measurement Scales for Expectations

All items were measured on a continuous 11-point semantic differential scale, where 0 = not likely at all and 10 = highly likely. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)
Expectation About Information Quality
Based on the reports provided to you about the Web site, do you
expect information on the Web site to be:
Understandability: (Cronbach alpha = 0.95)
clear in meaning
easy to comprehend
easy to read
In general, understandable for you in making your purchase
decision
Reliability: (Cronbach alpha = 0.97)
trustworthy
accurate
credible
In general, reliable for making your purchase decision
Usefulness: (Cronbach alpha = 0.95, interitem correlation = 0.90)
informative to your purchase decision
valuable to making your purchase decision
In general, useful in your purchase decision
Expectation About System Quality
Based on the reports provided to you about the Web site, do you
expect that the Web site:
4 Since this factor was created after the completion of data collection, it does not have a general question.
Access: (Cronbach alpha = 0.80, interitem correlation = 0.67)
is responsive to your request
quickly loads all the text and graphics
In general, provides good access
Usability: (Cronbach alpha = 0.96)
has a simple layout for its contents
is easy to use
is well organized
has a clear design
In general, is user-friendly
Navigation: (Cronbach alpha = 0.93, interitem correlation = 0.86)
is easy to go back and forth between pages
provides a few clicks to locate information
In general, is easy to navigate
Appendix C. Measurement Scales for Perceived Performance

All items were measured on a continuous 11-point semantic differential scale, where 0 = very poor and 10 = very good. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)
Performance in Information Quality
Based on your experience of using the given Web site, please provide your evaluation of its performance in terms of the following features. The Web site's performance in providing information that is:
Understandability: (Cronbach alpha = 0.95)
clear in meaning was
easy to comprehend was
easy to read was
In general, understandable for you in making your purchase de-
cision was
Reliability: (Cronbach alpha = 0.97)
trustworthy was
accurate was
credible was
In general, reliable for making your purchase decision was
Usefulness: (Cronbach alpha = 0.95, interitem correlation = 0.90)
informative to your purchase decision was
valuable to making your purchase decision was
In general, useful in your purchase decision was
Performance in System Quality
The Web site's performance that:
Access: (Cronbach alpha = 0.80, interitem correlation = 0.66)
is responsive to your request was
quickly loads all the text and graphics was
In general, provides good access was
Usability: (Cronbach alpha = 0.97)
has a simple layout for its contents was
is easy to use was
is well organized was
has a clear design was
In general, is user-friendly was
Navigation: (Cronbach alpha = 0.86, interitem correlation = 0.75)
is easy to go back and forth between pages was
provides a few clicks to locate information was
In general, is easy to navigate was
Appendix D. Measurement Scales for Disconfirmation
All items were measured on a continuous 11-point semantic differential scale, where 0 = much lower than you thought, 5 = the same as you expected, and 10 = much higher than you thought. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)
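The anchoring of this scale at its midpoint can be made concrete with a small sketch: subtracting 5 from each rating yields a signed score that is negative for performance below expectations, zero at confirmation, and positive above them. This recentering is our own illustration, not a scoring procedure taken from the paper, and the responses are hypothetical.

```python
# Illustrative recentering of the 0-10 disconfirmation scale, where
# 0 = much lower than expected, 5 = same as expected, 10 = much higher.
# Hypothetical responses; not the study's data or its scoring procedure.

MIDPOINT = 5  # "the same as you expected"

def disconfirmation_score(rating):
    """Signed disconfirmation: negative below expectations, zero at
    confirmation, positive above expectations."""
    if not 0 <= rating <= 10:
        raise ValueError("rating must be on the 0-10 scale")
    return rating - MIDPOINT

responses = [3, 5, 8]  # below, at, and above expectations
scores = [disconfirmation_score(r) for r in responses]
```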
Disconfirmation in Information Quality
We are interested in knowing how the Web site performed compared to your expectations in terms of the following features. The Web site's performance in providing information that is:
Understandability: (Cronbach α = 0.94)
clear in meaning was
easy to comprehend was
easy to read was
In general, understandable for you in making your purchase decision was
Figure F1 The Structure of the Measurement Model
Table F1 Factor Loading, T value, and R2 for First-Order Factors in the Measurement Model
Expectation Performance Disconfirmation
Items Loading T value R2 Loading T value R2 Loading T value R2
Understandability E-understandability P-understandability D-understandability
Clear in meaning (underst-i1) 1.00 0.0 0.88 1.00 0.0 0.89 1.00 0.0 0.85
Easy to understand (underst-i2) 0.98 45.5 0.92 1.02 43.0 0.94 1.02 40.8 0.89
Easy to read (underst-i3) 0.90 36.2 0.78 0.96 36.8 0.80 0.98 30.9 0.80
Reliability E-reliability P-reliability D-reliability
Trustworthy (reliab-i1) 1.00 0.0 0.87 1.00 0.0 0.88 1.00 0.00 0.83
Accurate (reliab-i2) 1.03 50.7 0.93 1.07 53.5 0.93 1.07 40.4 0.90
Credible (reliab-i3) 1.07 51.0 0.95 1.08 55.1 0.95 0.97 27.1 0.84
Usefulness E-usefulness P-usefulness D-usefulness
Informative (useful-i1) 1.00 0.0 0.89 1.00 0.0 0.87 1.00 0.0 0.88
Valuable (useful-i2) 1.03 38.5 0.90 1.01 51.3 0.94 1.01 43.1 0.93
Access E-access P-access D-access
Responsive (access-i1) 1.00 0.0 0.56 1.00 0.0 0.74 1.00 0.0 0.73
Quick loads (access-i2) 1.26 17.6 0.79 0.90 19.8 0.59 0.89 18.1 0.58
Usability E-usability P-usability D-usability
Simple layout (usability-i1) 1.00 0.0 0.73 1.00 0.0 0.79 1.00 0.0 0.79
Easy to use (usability-i2) 1.19 33.1 0.87 1.15 32.6 0.91 1.09 29.6 0.84
Well organized (usability-i3) 1.26 31.8 0.93 1.15 33.0 0.96 1.11 38.2 0.90
Clear design (usability-i4) 1.24 37.6 0.93 1.08 31.5 0.92 1.06 34.9 0.87
Navigation E-navigation P-navigation D-navigation
Easy to go back and forth (navigation-i1) 1.00 0.0 0.84 1.00 0.0 0.63 1.00 0.0 0.72
A few clicks (navigation-i2) 1.03 37.5 0.88 1.13 20.7 0.91 1.03 19.7 0.65
Table F2 Factor Loadings, T values, and R2 for Satisfaction Factors in the Measurement Model
Items for Factor Loadings T value R2
Information-Quality Satisfaction
Satisfied (Inf-sat-i1) 1.00 0.0 0.93
Pleased (Inf-sat-i2) 0.97 57.0 0.95
Contented (Inf-sat-i3) 0.96 34.0 0.80
Delighted (Inf-sat-i4) 0.99 43.2 0.88
Information-System Satisfaction
Satisfied (Sys-sat-i1) 1.00 0.0 0.96
Pleased (Sys-sat-i2) 0.99 99.2 0.97
Contented (Sys-sat-i3) 0.93 37.9 0.83
Delighted (Sys-sat-i4) 0.95 49.6 0.89
Overall Satisfaction
Satisfied (Sat-i1) 1.00 0.0 0.94
Pleased (Sat-i2) 0.97 46.7 0.95
Contented (Sat-i3) 0.94 38.0 0.87
Delighted (Sat-i4) 0.94 39.1 0.92
Will recommend to friends (Sat-i5) 1.06 40.6 0.89
Will use the site again (Sat-i6) 1.09 34.0 0.83
Table F3 Factor Loadings (T values) for Second-Order Factors in
the Measurement Model
First-Order Factors
Used to Construct
Second-Order Factors Expectation Performance Disconfirmation
Information Quality (E-Information Quality / P-Information Quality / D-Information Quality)
Understandability 1.00 (0.0) 1.00 (0.0) 1.00 (0.0)
Reliability 1.18 (21.7) 1.03 (21.7) 0.98 (19.6)
Usefulness 1.17 (21.8) 1.28 (27.1) 1.21 (21.9)
System Quality (E-System Quality / P-System Quality / D-System Quality)
Access 1.00 (0.0) 1.00 (0.0) 1.00 (0.0)
Usability 1.14 (13.6) 1.00 (23.7) 0.94 (21.2)
Navigation 1.26 (15.1) 0.92 (16.1) 0.83 (16.9)
Reliability: (Cronbach α = 0.95)
trustworthy was
accurate was
credible was
In general, reliable for making your purchase decision was
Usefulness: (Cronbach α = 0.95, interitem correlation = 0.90)
informative to your purchase decision was
valuable to making your purchase decision was
In general, useful in your purchase decision was
Disconfirmation in System Quality
The performance of the Web site that:
Access: (Cronbach α = 0.79, interitem correlation = 0.65)
is responsive to your request was
quickly loads all the text and graphics was
In general, provides good access was
Usability: (Cronbach α = 0.96)
has a simple layout for its contents was
is easy to use was
is well organized was
is a clear design was
In general, is user-friendly was
Navigation: (Cronbach α = 0.81, interitem correlation = 0.68)
is easy to go back and forth between pages was
provides a few clicks to locate information was
In general, is easy to navigate was
Appendix E. Measurement Scales for Satisfaction for Information and Features of the Web Site
All items (except the last item) are measured on a continuous 11-
point semantic differential scale.
Satisfaction with Information Quality (Cronbach α = 0.97)
Only based on the information provided by the assigned Web site, please indicate your views regarding the overall quality of information in making your purchase decision.
After using the Web site, information that you obtained made
you:
Very dissatisfied vs. Very satisfied
Very displeased vs. Very pleased
Frustrated vs. Contented
Disappointed vs. Delighted
Satisfaction with System Quality (Cronbach α = 0.98)
Only based on the information provided by the assigned Web site, please indicate your views regarding the overall quality of the Web site's features in making your purchase decision.
In terms of the features of the Web site that provide the infor-
mation you need, using the Web site made you:
Very dissatisfied vs. Very satisfied
Very displeased vs. Very pleased
Frustrated vs. Contented
Disappointed vs. Delighted
Overall Satisfaction (Cronbach α = 0.98)
After using this Web site, I am. . .
Very dissatisfied vs. Very satisfied
After using this Web site, I am. . .
Very displeased vs. Very pleased
Using this Web site made me. . .
Frustrated vs. Contented
After using this Web site, I am. . .
Terrible vs. Delighted
After using this Web site, I. . .
Will never recommend it to my friends vs.
Will definitely recommend it to my friends
After using this Web site, I. . .
Will never use it again vs. Will definitely use it again
Appendix F. Confirmatory Factor Loadings in the Measurement Model
The measurement model is shown in Figure F1, and confirmatory factor loadings for the constructs using first-order factors are reported in Table F1. Table F2 contains the confirmatory factor loadings for satisfaction. Table F3 reports factor loadings for the constructs based on second-order factors. The factor loadings are computed within the estimated model using Mplus, with t values reported in parentheses. The robust estimation method in Mplus can yield factor loadings above one for highly loaded items.
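The first- and second-order structure summarized in Tables F1 and F3 can be sketched in lavaan-style model syntax. This is an illustration only: the authors estimated the model in Mplus, and the variable names below simply echo the item labels of Table F1 (with hyphens replaced by underscores), shown for the Expectation constructs.

```
# Hypothetical lavaan-style sketch of the Expectation side of the model.
# First-order factors (Table F1)
E_underst    =~ underst_i1 + underst_i2 + underst_i3
E_reliab     =~ reliab_i1 + reliab_i2 + reliab_i3
E_useful     =~ useful_i1 + useful_i2
E_access     =~ access_i1 + access_i2
E_usability  =~ usability_i1 + usability_i2 + usability_i3 + usability_i4
E_navigation =~ navigation_i1 + navigation_i2
# Second-order factors (Table F3)
E_InformationQuality =~ E_underst + E_reliab + E_useful
E_SystemQuality      =~ E_access + E_usability + E_navigation
```

The Performance (P-) and Disconfirmation (D-) sides repeat the same structure over their own item sets, matching the three column groups of Tables F1 and F3.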
References
Abels, Eileen G., Marilyn Domas White, Karla Hahn. 1997. Identifying user-based criteria for Web pages. Internet Res.: Electronic Networking Appl. Policy 7(4) 252-262.
Alba, Joseph, John Lynch, Barton Weitz, Chris Janiszewski, Richard Lutz, Alan Sawyer, Stacy Wood. 1997. Interactive home shopping: Consumer, retailer, and manufacturer incentives to participate in electronic marketplaces. J. Marketing 61(3) 38-53.
Alwin, Duane. 1973-1974. Approaches to the interpretation of relationships in the multitrait-multimethod matrix. Sociological Methodology 79-105.
Bagozzi, Richard, Youjae Yi. 1991. Multitrait-multimethod matrices in consumer research. J. Consumer Res. 17(4) 426-439.
Bagozzi, Richard, Youjae Yi, Lynn Phillips. 1991. Assessing construct validity in organizational research. Admin. Sci. Quart. 36(3) 421-458.
Bailey, James, Sammy W. Pearson. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Sci. 29(5) 530-545.
Bakos, Yannis. 1998. The emerging role of electronic marketplaces on the Internet. Comm. ACM 41(8) 35-42.
Bollen, K. A. 1989. Structural Equations with Latent Variables. John Wiley, New York.
Boudreau, Marie, David Gefen, Detmar Straub. 2001. Validation in IS research: A state-of-the-art assessment. MIS Quart. 25(1) 1-24.
Boulding, William, Ajay Kalra, Richard Staelin, Valarie A. Zeithaml. 1993. A dynamic process model of service quality: From expectations to behavioral intentions. J. Marketing Res. 30(February) 7-27.
Brancheau, J. C., Brian D. Janz, James C. Wetherbe. 1996. Key issues in information systems management: 1994-1995 SIM Delphi results. MIS Quart. 11(1) 225-241.
Bruce, Harry. 1998. User satisfaction with information seeking on the Internet. J. Amer. Soc. Inform. Sci. 49(6) 541-556.
Cadotte, Ernest R., Robert B. Woodruff, Roger L. Jenkins. 1987. Expectations and norms in models of consumer satisfaction. J. Marketing Res. 24(3) 305-314.
Campbell, Donald T., Donald W. Fiske. 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psych. Bull. 56(2) 81-105.
Chin, Wynne W. 1998. Issues and opinion on structural equation modeling. MIS Quart. 22(1) vii-xvi.
Churchill, Gilbert A., Jr. 1979. A paradigm for developing better measures of marketing constructs. J. Marketing Res. 16(1) 64-73.
Churchill, Gilbert A., Jr., Carol Surprenant. 1982. An investigation into the determinants of customer satisfaction. J. Marketing Res. 19(4) 491-504.
Cook, Thomas D., Donald T. Campbell. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Houghton Mifflin, Boston, MA.
Cronin, J. Joseph, Jr., Steven A. Taylor. 1992. Measuring service quality: A reexamination and extension. J. Marketing 56(3) 55-68.
Davis, Fred D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13(3) 319-339.
Davis, Fred D., Richard P. Bagozzi, Paul R. Warshaw. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Sci. 35(8) 982-1003.
DeLone, W. H., E. R. McLean. 1992. Information systems success: The quest for the dependent variable. Inform. Systems Res. 3(1) 60-95.
Doll, William J., Gholamreza Torkzadeh. 1988. The measurement of end-user computing satisfaction. MIS Quart. 12(2) 259-274.
Doll, William J., Anthony Hendrickson, Xiaodong Deng. 1998. Using Davis's perceived usefulness and ease-of-use instruments for decision making: A confirmatory and multigroup invariance analysis. Decision Sci. 29(4) 839-869.
Doll, William J., Weidong Xia, Gholamreza Torkzadeh. 1994. A confirmatory factor analysis of the end-user computing satisfaction instrument. MIS Quart. 18(4) 453-461.
Dumas, Joseph S., Janice C. Redish. 1993. A Practical Guide to Usability Testing. American Institutes for Research, Ablex Publishing Corporation, Norwood, NJ.
Eighmey, John. 1997. Profiling user responses to commercial Web sites. J. Advertising Res. (June) 59-66.
Eighmey, John, Lola McCord. 1998. Adding value in the information age: Uses and gratifications of sites on the World Wide Web. J. Bus. Res. 41(3) 187-194.
Fornell, C., D. F. Larcker. 1981. Evaluating structural equation models with unobservable variables and measurement error. J. Marketing Res. 18(1) 39-50.
Gefen, David, Detmar Straub, Marie-Claude Boudreau. 2000. Structural equation modeling and regression: Guidelines for research practice. Comm. Assoc. Inform. Systems 4(August) 1-79.
GVU WWW Survey. 1998. www.gvu.gatech.edu/user_surveys/survey-199810.
Ives, Blake, Margrethe H. Olson. 1984. User involvement and MIS success: A review of research. Management Sci. 30(5) 586-603.
Jarvenpaa, Sirkka L., Peter A. Todd. 1996-1997. Consumer reactions to electronic shopping on the World Wide Web. Internat. J. Electronic Commerce 1(2) 59-88.
Katerattanakul, Pairin, Keng Siau. 1999. Measuring information quality of Web sites: Development of an instrument. Proc. Internat. Conf. Inform. Systems, Charlotte, NC, 279-285.
Kettinger, William J., Choong C. Lee. 1997. Pragmatic perspectives on the measurement of information systems service quality. MIS Quart. 21(2) 223-240.
King, William R., Barry J. Epstein. 1983. Assessing information system value: An experimental study. Decision Sci. 14(1) 34-45.
Kline, Rex B. 1998. Principles and Practice of Structural Equation Modeling. Guilford Press, New York.
Kotler, Philip. 1997. Marketing Management: Analysis, Planning, Implementation, and Control. Prentice Hall, Englewood Cliffs, NJ.
Larcker, David F., V. Parker Lessig. 1980. Perceived usefulness of information: A psychometric examination. Decision Sci. 11 121-134.
LaTour, Stephen A. 1979. Conceptual and methodological issues in consumer satisfaction research. William L. Wilkie, ed. Advances in Consumer Research. Association for Consumer Research, Ann Arbor, MI, 431-437.
Lohse, Gerald L., Peter Spiller. 1998. Electronic shopping. Comm. ACM 41(7) 81-87.
Moore, Gary C., Izak Benbasat. 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inform. Systems Res. 2(3) 192-222.
Muthen, B. O. 1984. A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika 49(1) 115-132.
Muthen, B. O., Linda Muthen. 2001. Mplus: The Comprehensive Modeling Program for Applied Researchers User's Guide. Muthen and Muthen, Los Angeles, CA.
Novak, Thomas P., Donna L. Hoffman, Yiu-Fai Yung. 2000. Measuring the customer experience in online environments: A structural modeling approach. Marketing Sci. 19(1) 22-42.
Nunnally, J. C. 1967. Psychometric Theory. McGraw-Hill, New York.
Oliver, Richard L. 1977. Effect of expectation and disconfirmation on postexposure product evaluations: An alternative interpretation. J. Appl. Psych. 62(4) 480-486.
Oliver, Richard L. 1980. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Marketing Res. 17 460-469.
Oliver, Richard L. 1989. Processing of the satisfaction response in consumption: A suggested framework and research propositions. J. Consumer Satisfaction, Dissatisfaction and Complaining Behavior 2 1-16.
Olson, Jerry, Philip A. Dover. 1979. Disconfirmation of consumer expectations through product trial. J. Appl. Psych. 64(2) 179-189.
Palmer, Jonathan W., David A. Griffith. 1998. An emerging model of Web site design for marketing. Comm. ACM 41(3) 45-51.
Patterson, Paul G., Lester W. Johnson, Richard A. Spreng. 1997. Modeling the determinants of customer satisfaction for business-to-business professional services. Acad. Marketing Sci. J. 25(1) 4-17.
Pedhazur, E. J., Liora Pedhazur Schmelkin. 1991. Measurement, Design, and Analysis: An Integrated Approach. Lawrence Erlbaum Associates, Hillsdale, NJ.
Perdue, Barbara C., John O. Summers. 1986. Checking the success of manipulations in marketing experiments. J. Marketing Res. 23(4) 317-326.
Pfaff, Martin. 1977. The index of consumer satisfaction: Measurement problems and opportunities. H. Keith Hunt, ed. A Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction. Marketing Science Institute, Cambridge, MA, 36-71.
Pitt, Leyland F., Richard T. Watson, C. Bruce Kavan. 1995. Service quality: A measure of information systems effectiveness. MIS Quart. 19(2) 173-187.
Pitt, Leyland F., Richard T. Watson, C. Bruce Kavan. 1997. Measuring information systems service quality: Concerns for a complete canvas. MIS Quart. 21(2) 209-221.
Saracevic, Tefko, Paul