This article was downloaded by: [Deakin University Library] on 29 September 2013, at 13:27. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
Journal of Hospitality Marketing & Management. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/whmm20
What Determines Consumers' Ratings of Service Providers? An Exploratory Study of Online Traveler Reviews
Pradeep Racherla (a), Daniel J. Connolly (b) & Natasa Christodoulidou (c)
(a) Department of Marketing, West Texas A&M University, Canyon, Texas, USA
(b) Department of Business Information and Analytics, Daniels College of Business, University of Denver, Denver, Colorado, USA
(c) Department of Management and Marketing, College of Business Administration and Public Policy, California State University Dominguez Hills, Carson, California, USA
Accepted author version posted online: 26 Mar 2012. Published online: 10 Jan 2013.
To cite this article: Pradeep Racherla, Daniel J. Connolly & Natasa Christodoulidou (2013) What Determines Consumers' Ratings of Service Providers? An Exploratory Study of Online Traveler Reviews, Journal of Hospitality Marketing & Management, 22:2, 135-161, DOI: 10.1080/19368623.2011.645187
To link to this article: http://dx.doi.org/10.1080/19368623.2011.645187
Journal of Hospitality Marketing & Management, 22:135–161, 2013
Copyright © Taylor & Francis Group, LLC
ISSN: 1936-8623 print/1936-8631 online
DOI: 10.1080/19368623.2011.645187
What Determines Consumers' Ratings of Service Providers? An Exploratory Study of Online Traveler Reviews
PRADEEP RACHERLA
Department of Marketing, West Texas A&M University, Canyon, Texas, USA
DANIEL J. CONNOLLY
Department of Business Information and Analytics, Daniels College of Business, University of Denver, Denver, Colorado, USA
NATASA CHRISTODOULIDOU
Department of Management and Marketing, College of Business Administration and Public Policy, California State University Dominguez Hills, Carson, California, USA
Online reviews are important sources of information for travelers. However, existing academic understanding of these popular information sources in the tourism and hospitality domain is relatively weak. In this study, we explore the patterns and features of online reviews extracted from a popular travel advisory Web site. We consider factors such as the numerical rating distribution, the amount of information in the reviews, and the relationship between review ratings and various attributes of the lodging properties reviewed on the Web site. The analysis reveals that the reviews are heavily skewed towards positive ratings and that there is a paucity of balanced and negative reviews. Further, the correlation between the overall review rating and ratings on individual attributes is very low, suggesting that the overall numerical ratings typically used in review systems may not be ideal indicators of customers' perceived service quality and satisfaction. Textual analysis uncovers nuanced opinions that are generally lost in crude numerical ratings. The implications of this study for future research and practice are discussed.
KEYWORDS electronic word of mouth (eWOM), online reviews, rating distribution, perceived service quality, service attributes, electronic viral marketing
Address correspondence to Natasa Christodoulidou, College of Business Administration and Public Policy, California State University Dominguez Hills, 1000 East Victoria St., Carson, CA 90747, USA. E-mail: [email protected]
135
136 P. Racherla et al.
INTRODUCTION
Online reviews, a form of electronic word-of-mouth (eWOM), have become important information sources for consumers ever since the Internet permeated the tourism and hospitality industry. These platforms allow consumers to socially interact with one another, exchange product-related information, and make informed purchase decisions. Given that consumers are increasingly relying on search engines for information search, consumer-generated reviews will inevitably change the structure and accessibility of services-related information and, subsequently, consumers' perceptions of various services. A study by Forrester Inc. (2006) found that more than 80% of Web shoppers use other consumers' reviews. Similarly, eMarketer's study (2010) found that nearly 62% of purchasers visited a message board, forum, or online community for their online travel purchasing, and one in three of these buyers said that consumer reviews helped with their purchase decision. Almost half of these consumers said that consumers' opinions actually caused them to change their mind about what they purchased. Moreover, among those buyers, 25 million said they also posted a review on a consumer review site after making their purchase. In addition, several recent reports by prominent firms, such as the one by Nielsen (2009), suggest that the great majority of online consumers now use and trust online reviews more than any other form of information. In fact, a recent McKinsey study (2011) suggests that WOM and nonmarketer-controlled information sources account for almost 43% of effectiveness in the consideration and purchase stages of consumers' decision-making process.
Online reviews have attracted considerable research in the recent past (Chevalier & Mayzlin, 2006; Mudambi & Schuff, 2010; Lim & Chung, 2011; Zhu & Zhang, 2010). However, extant research has mainly focused on products and related issues. As a result, the existing academic understanding of eWOM in the services industry, especially the tourism and hospitality domain, is relatively weak when compared with other domains. Tourism and hospitality services are intangible and do not have the "try before you buy" or "return in case not satisfied" features. This increases the perceived risk for consumers (Murray & Schlacter, 1990; Zeithaml, 1981). Consequently, WOM is seen as highly influential within a services context (Bansal & Voyer, 2000).
In this study, the main focus is on the travel industry. In the recent past, a growing body of research has explored tourism and travel related online WOM and consumer-generated media in general (e.g., Litvin, Goldsmith, & Pan, 2007; Stringam & Gerdes, 2010; Wang & Fesenmaier, 2003; Xiang & Gretzel, 2009; Yoo & Gretzel, 2008). For instance, Yoo and Gretzel (2008) and Yoo and Gretzel (2011) investigated the motivations and sociodemographic characteristics of hotel review writers on popular sites such as TripAdvisor (www.tripadvisor.com). They find that review writers are mainly driven by concern for other travelers, the need to help service providers improve
their services, and the need for positive self-enhancement. In addition, these studies find that personality is the most important variable that explains travelers' propensity to write reviews. Similarly, Bronner and de Hoog (2011) conducted a comprehensive survey of Dutch vacationers and found that other-directed motivations (in other words, the need to help others) are the primary drivers of review writers. Further, while the need to "harm others" is a factor that figures in consumer responses, it never assumes high frequencies, suggesting that while reviews can be tools to harm others, they are not very often wielded by travelers. Consequently, travelers are more likely to provide positive rather than negative reviews by a margin of almost 3 to 1. In an interesting study, Vermeulen and Seegers (2009) applied consideration set theory to test the impact of online reviews on travelers' hotel booking process. They found that, on average, both positive and negative reviews enhance consumer awareness of hotels. Positive reviews tend to improve overall attitude towards the hotels, and this effect is considerably stronger for lesser-known hotels. More recently, Stringam and Gerdes (2010) analyzed 60,648 consumer ratings and comments from an online distribution site to explore the factors that drive consumer ratings of hotels. The study combines ratings assigned by travelers with the text of their comments to identify the most frequent commendations and concerns. The study also explored the word choices of guests scoring hotels at lower ratings versus higher ratings. Such studies emphasize that online reviews are key aids employed by travelers in their decision-making process. However, the aforementioned research has overlooked certain interesting aspects of online reviews:
1. Online reviews typically use "overall star ratings" as primary indicators of perceived service quality. (Please note that the overall star rating referred to in this study is not the hotels' star classification but the overall quality rating assigned to each property based on consumer reviews and ratings.) These ratings are typically aggregated from the ratings given by individual consumers in each review. However, researchers have still not answered the underlying question: How do consumers determine the stars they assign to a service when they post a review in an online review system? Do star ratings accurately reflect individual consumers' sentiments regarding various attributes of the service provider? This is a vital issue given the experiential and intangible nature of services, and the fact that consumers use the overall rating as an important heuristic while narrowing down choices (Hu, Zhang, & Pavlou, 2010).
2. Further, previous studies have overlooked the relationship between the reviews and various attributes of services and service providers. Since online reviews are a form of WOM, variables such as consumer involvement, pricing, and length-of-stay may have an impact on the final rating and posting of the review provided by the consumer. It is important to uncover these relationships to better understand the dynamics of online reviews. Gaining a better understanding of the
associations between the various attributes of the properties and traveler reviews may lead to an improvement of the services provided and a decrease in the posting of negative reviews.
3. Few studies (with the exception of Stringam & Gerdes, 2010) have analyzed the textual content of online reviews. Online reviews are essentially open-ended, text-based, consumer-to-consumer communications. The textual content may contain nuanced views of the services (from the review writers' point of view) that cannot be expressed using crude numerical ratings (Ghose & Ipeirotis, 2008). Hence, there is a need to analyze the textual portion of the reviews and identify the issues that consumers are most concerned with when writing and posting online reviews.
The aforementioned limitations are addressed in this article. The data on reviews was obtained from FlipKey.com, a popular travel advisory Web site dedicated to vacation rental properties across the world. Within this site, the focus is on the reviews of a random sample of vacation properties in the United States. The primary research questions that motivated this study are as follows:
RQ1: Do overall ratings reflect the perceived quality of the service providers (as derived from the consumers' expressed sentiment in the reviews)? What are the underlying patterns and the distribution of the overall ratings in online reviews of vacation rental properties?
RQ2: What are the linkages among the various attributes of the vacation rental properties and customer reviews?
RQ3: What are the issues that consumers mostly talk about in the textual portion of online reviews? Do these issues differ from the ratings that the consumers have provided via the standard variables provided by the review sites?
This article is structured as follows: we first present the literature that informed the research questions and the hypotheses that guided this study. Subsequently, we describe the data set and the research setting. We then present the analysis and the results. Finally, we discuss the significance of the results and suggest the implications for future research and practice.
LITERATURE REVIEW AND HYPOTHESES
Review Rating Distribution
Web sites that host online reviews give consumers the option to assign a rating (on either 1–5 or 1–7 Likert-type scales) to service providers. Review sites typically aggregate these individual ratings to determine an overall star rating (henceforth referred to as a star rating) for a service provider. The star rating is an important element in online WOM since consumers very often
use this rating as an important heuristic to narrow down their consideration set. Further, due to the significance attached to the numerical rating, the distribution of star ratings has been one of the most studied features of online review systems. Numerous eWOM studies have used the rating as the primary indicator of service providers' perceived quality in order to determine the impact of online reviews on product sales; these studies arrived at mixed findings. One group of studies (Chevalier & Mayzlin, 2006; Li & Hitt, 2004) found a positive relationship between star rating and factors such as product sales and the external influence propensity of online reviews. On the other hand, Duan, Gu, and Whinston (2005) and Chen, Wu, and Yoon (2004) found that star ratings do not accurately predict sales, and even when they do, only the reviews in the upper quartile (4-star and 5-star ratings) tend to be more accurate predictors than the reviews in the lower quartile.
However, the aforementioned studies are based on the fundamental assumption that the numerical ratings assigned to service providers are an accurate estimate of perceived quality as it relates to customers' sentiments. This practice follows the basic assumption of behavioral theorists that any data set with a wide range of item difficulties, a variety of content or forms, and products that have a significant correlation with the sum of all other scores will end up having a normal (or Gaussian) distribution (Jensen, 1969). Yet, as some recent studies have revealed, this might not be the case. Dellarocas and Narayan's (2006) study of movie reviews on Yahoo.com presents empirical evidence that there exists a U-shaped relationship between the average valence of a movie's reviews and consumers' average inclination to engage in WOM (in this case, write a review). This implies that only the extremely satisfied and extremely dissatisfied customers tend to publicly express their opinions, while customers with moderate opinions are less likely to do so. Similarly, studies using data from Tripadvisor.com (Talwar, Jurca, & Boi, 2007) and Amazon.com (Chevalier & Mayzlin, 2006) found that the star ratings exhibit a truncated distribution, with the majority of reviews being positive. Based on this contemporary evidence, we hypothesize as follows:
H1: The ratings of the reviews will show an asymmetric distribution. The number of reviews with extremely high ratings will be greater than the number of reviews with moderate and low ratings.
Star Ratings and Perceived Service Quality
As discussed in the introduction, researchers are yet to answer the question: Are star ratings a true representation of service providers' perceived quality as reflected by the consumers' sentiments in the reviews? This becomes an important question for two reasons: (a) ratings are the primary evidence that consumers consider while using online review sites for information, and (b) the truncated nature of rating distributions typically observed in online
review sites means that the reviews available may not be a true representative sample of consumer opinion or of the service providers' perceived quality. For instance, if the aforementioned assumption is true, consumers should always look for reviews (and service providers) that have a 5-star rating. On the contrary, if the above assumption is not true, then consumers are better off parsing for reviews with a rating of 3 stars, since these tend to be more balanced and enable better product diagnosticity. Recently, Hu et al. (2009), using data from Amazon.com, showed that the numerical score does not reflect true perceived quality. As Hu et al. (2009, p. 7) suggested:
Rather, the score reflects the balance of diverse opinions. In other words, when a book's overall score is around 3, it does not suggest that consumers generally agree that this is an average book. It rather suggests that roughly equal number of consumers think that the book is either an outstanding book or abysmal book.
H2: The correlation between star ratings and the other attributes on which consumers rate the service providers will be low.
Relationship Between Ratings and Review Attributes
To understand consumers' WOM behavior, it is important to consider various factors such as perceived expectations of costs and benefits, importance of the purchase (involvement), personal characteristics, and situational influences (e.g., Cho, Im, Hiltz, & Fjermestad, 2002; Richins, 1982). This study considers two factors that are known to impact the intensity and valence (positive or negative) of WOM:
Price. It is a well-established fact that the intensity of expectations (and subsequently the extent of fulfillment of these expectations) is directly proportional to consumers' satisfaction levels, and subsequently their propensity to engage in WOM (Yuksel & Yuksel, 2001). One of the factors that contribute to prepurchase expectations is the importance that consumers assign to the purchase price. For instance, the consumer behavior literature suggests that the tendency to complain is directly related to the cost of the service (Bearden & Teel, 1983). The more expensive an item, the greater the potential losses. Therefore, dissatisfaction will more likely result in complaint behavior as both the amount of loss and the importance of the purchase increase. Richins (1983) asserted that complaints vary directly with the magnitude of the loss associated with the problem. Similarly, Bearden and Mason (1984) found that the likelihood of WOM being triggered by the level of expectation-disconfirmation increases with the cost associated with the product. Hence, one can speculate that the higher the price, the higher will be the average negative or positive ratings for a service.
H3: Reviews with a negative rating are generally associated with properties that charge higher daily rates.
Word Count. An online review is essentially an argument made by the reviewer to either encourage or dissuade other consumers from buying a particular product or service. Therefore, the number of words in a review may serve as a proxy measure for the amount of information in the review that helps other consumers in their decision-making process. Some studies (Yoo & Gretzel, 2008; Wang & Fesenmaier, 2003; Wetzer, Zeelenberg, & Pieters, 2007) suggest that the quest for social approval, the need to help others, expectancy, and the desire to control the perceived quality of the product are some of the most important psychological incentives that drive consumers to provide online reviews. These psychological factors, combined with the extent of consumer involvement, determine the amount of time and effort a review writer is willing to spend in providing a detailed account of their experiences with the service provider.
Similarly, Richins (1984) suggested that WOM is closely related to consumers' desire to reduce cognitive dissonance as well as to appear smart and well informed. Further, Sundaram et al. (1998) found that product involvement is another factor that drives the inclination to engage in, and the extent of, WOM activity. Word count can thus be used as a proxy measure of the extent of customers' satisfaction or dissatisfaction. Past research (Schlossberg, 1991; Westbrook, 1987) found ample evidence to suggest that dissatisfied customers engage in between 2 and 3 times as much WOM as satisfied customers. It can be reasonably assumed that since online WOM is essentially text-based communication, the extent to which consumers engage in WOM can be measured both by the number of reviews they write and by the amount of information they provide via the reviews to other consumers. As a result, it is possible that reviews with extremely positive or extremely negative ratings will have a greater word count than more moderate reviews (with 2-, 3-, and 4-star ratings).
H4: Reviews with either extremely negative or extremely positive ratings will have a higher word count when compared to reviews with moderate ratings.
Textual Content in the Reviews
Recent research emphasizes the importance of the actual textual content in consumer reviews. This is even more important for the hospitality and tourism industry, which provides services that are subjective and heterogeneous in nature. Textual comments provide fine-grained information about a service provider's reputation that is likely to engender a buyer's trust in the service provider's competence and credibility (Ghose & Ipeirotis, 2008). Consequently, a whole new industry has been built on topics related to opinion mining and sentiment analysis using the textual content in reviews.
Seminal studies in the recent past have provided evidence of the importance of the textual content. For instance, Pavlou and Dimoka (2006), based on data from eBay, found that reputation information found in the
textual content explains to a high degree (R2 = 50%, compared with an R2 = 20% in studies that use only numerical ratings) the premium rates charged by certain sellers when compared with others selling similar products. Similarly, Archak, Ghose, and Ipeirotis (2011), using data from Amazon.com, found that the actual style of the review plays an important role in determining the impact of the review. Reviews that confirm the information contained in the product description are more important for feature-based products, while reviews that give a more subjective point of view are more important for experiential goods, such as music and movies. They also found that the style of a review reflects consumer sentiment and can influence the sales and pricing power of the listed merchants. While numerical ratings allow reviewers to rate the service provider on various attributes, it is the actual text that provides them with the opportunity to articulate the nuances of their overall experience and convey useful information about a service provider's transactions and service capabilities that cannot be fully captured with crude numerical ratings. Therefore, analysis of the textual content provides us with an opportunity to address an important question: What do consumers talk about in the textual portion of the reviews, and how do these issues relate to the numerical ratings in the reviews?
METHODOLOGY
Research Setting and Data Collection
This study is based on reviews collected from FlipKey.com, a community-driven travel advisory Web site. The site provides reviews of verified rental properties and organizes the vacation properties based on their ratings. The property organization within the site is based not on the revenue-sharing model that is generally practiced by other review sites but on the reputation, trust, and feedback from past customers who have stayed at the vacation rental properties. FlipKey.com is a free service that operates on service provider fees and advertising revenues. Another differentiating factor is the firm's review-soliciting strategy. On many review sites, customers can log in and post reviews without the intervention of the Web site management; FlipKey.com, on the other hand, sends personalized invitations to customers whose stay has been verified by the property management. The management of the firm believes that this strategy, to a large extent, eliminates fake reviews by ensuring that only those customers who have experienced the facilities are given the opportunity to express their opinions.
Sampling Procedure
The firm provided the authors with approximately 3,300 reviews that contain the review text and the overall star rating of the property; ratings given by the
consumers on specific attributes such as cleanliness, value for money, check-in, and location; details about the property such as average daily price and number of bedrooms; as well as characteristics of the reviewer, including age. The reviews were posted between June 2007 and October 2008 by the consumers of a random sample of properties listed on the Web site. All the aforementioned ratings are captured on a 1–5 Likert scale. After initial data clean-up, about 100 reviews with incomplete or unusable information were eliminated, and 3,197 reviews were eligible to be used for the analysis. The following attributes in the reviews were used for the analysis:
● The overall rating assigned to the service provider based on the aggregation of the ratings assigned by individual reviewers (consumers);
● Consumers' ratings on six attributes: value, cleanliness, comfort, service, location, and check-in;
● The review text (semantic processing techniques were applied to the aggregate review text to identify the nuanced opinions and concerns of the travelers who write reviews);
● Age range of the reviewer on a 1–4 ordinal scale (18–34, 35–44, 45–54, and 55 and above);
● Average price per day; and
● Word count (the number of words in each review was calculated to create a separate variable called "word count" that was used in the quantitative analysis).
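The word count variable described in the last bullet can be derived with a simple whitespace tokenization. The sketch below is our own illustration (the sample texts and variable names are hypothetical, not drawn from the FlipKey.com data set):

```python
# Hypothetical sketch of deriving the "word count" variable from review text.
# The sample reviews below are illustrative, not records from the data set.
reviews = [
    "This house was great. So many rooms to go to.",
    "This property was gravely misrepresented.",
]

# Whitespace tokenization is the simplest reasonable word-count proxy.
word_counts = [len(text.split()) for text in reviews]
print(word_counts)  # [10, 5]
```

More elaborate tokenizers would change the counts slightly, but for a proxy measure of review length this simple split suffices.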
Table 1 provides the descriptive statistics of the data. The age groups of the review writers were as follows: 18–34 (32%), 35–44 (36%), 45–54 (19%), and 55 and above (13%). A large percentage of the customers who reviewed the properties were under 44 years of age. This is consistent with previous studies in this realm, which show that Generation X and Generation Y are more active in consumer-to-consumer forums.
TABLE 1 Descriptive statistics
Variable n Minimum Maximum M SD
Star        3185  1     5      4.37    0.91
Value       3185  1     5      4.11    1.03
Check-in    3183  1     5      4.42    0.92
Location    3184  1     5      4.55    0.73
Clean       3183  1     5      4.18    1.09
Comfort     3183  1     5      4.24    1.01
Service     3182  1     5      4.36    0.96
Price       3197  11    955    158.17  142.73
Recency     3185  0.43  51.57  9.45    8.82
Word count  3185  1     746    91.92   65.22
HYPOTHESES TESTING AND RESULTS
Quantitative Analysis
To test Hypothesis 1, a simple frequency chart was run. The distribution of the ratings is shown in Figure 1. As the results indicate, the distribution is heavily skewed towards the positive ratings: 1-star reviews = 32 (1%); 2-star reviews = 95 (3%); 3-star reviews = 458 (14%); 4-star reviews = 688 (22%); 5-star reviews = 1,912 (60%). To test the robustness of this distribution, we applied the Kolmogorov-Smirnov normality test, which confirms the nonnormal distribution of the star ratings (statistic = 0.36; df = 3,294; p < .001). While this result is consistent with some of the studies mentioned previously, the data, as shown in Figure 1, show a J-shaped distribution. This is a strong indication that the distribution of the ratings is nonnormal and heavily skewed towards positive reviews.
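The normality check described above can be sketched as follows. This is not the authors' code: the ratings are simulated from the reported frequencies, so the statistic only approximates the reported value of 0.36.

```python
import numpy as np
from scipy import stats

# Simulate 3,185 star ratings with roughly the reported frequencies:
# 1% one-star, 3% two-star, 14% three-star, 22% four-star, 60% five-star.
rng = np.random.default_rng(0)
ratings = rng.choice([1, 2, 3, 4, 5], size=3185,
                     p=[0.01, 0.03, 0.14, 0.22, 0.60])

# Kolmogorov-Smirnov test against a standard normal after standardizing;
# a large statistic and a tiny p-value indicate a nonnormal distribution.
z = (ratings - ratings.mean()) / ratings.std()
stat, p = stats.kstest(z, "norm")
print(stat, p < 0.001)
```

With these frequencies the largest gap between the empirical and normal CDFs occurs just below the 5-star mass, which is why a J-shaped distribution produces such a large statistic.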
To test Hypothesis 2, a regression equation was applied with the six property attributes as independent variables and the star rating as the dependent variable. This analysis was based on the assumption that, for a given property, if star ratings truly represent the perceived quality of the properties (as well as the consumers' sentiment), then the star ratings and the ratings of the six other property attributes will show a perfect (or at least very high) correlation. For the analysis, we took into consideration the fact that the positive reviews (reviews with 4- and 5-star ratings) dominate the corpus. Therefore, the reviews were divided into two categories: high reviews with a star rating above 3 (n = 2,597) and low reviews with a rating below or equal to 3 (n = 600), and
FIGURE 1 Distribution of the overall star rating (color figure available online).
TABLE 2 Regression analysis of low and high reviews

             Low reviews  t value      High reviews  t value
(Constant)   1.618        18.353***    2.400         34.186***
Value        0.254        13.726***    0.378         8.767***
Check-in     −0.083       −3.701       −0.053        −1.192
Location     0.104        6.132        0.079         2.117
Clean        0.144        7.574***     0.112         2.808***
Comfort      0.267        13.993***    0.129         3.088***
Service      0.098        4.209*       0.107         2.401
R2           0.331                     0.338
F statistic  50.38                     247.14

Notes. Coefficients (t statistics) reported. Dependent variable is the overall rating.
***Significant at the 0.001 level. **Significant at the 0.05 level. *Significant at the 0.10 level.
ran separate analyses on these reviews to identify the correlations. Table 2 shows the results of the regression analysis. We tested for multicollinearity and found that the tolerance for all the variables was significantly greater than the general cut-off point of 0.2.
An interesting finding is the gap between the attribute ratings and the overall star rating for each property. The data reveal that the R-squared of the regression equation with all six attribute variables and the overall star rating is less than 40% (i.e., even if consumers provided higher ratings on individual attributes, their overall rating was often low, and vice versa). This trend was detected in both high and low reviews. The results show that value for money, cleanliness, and comfort were the most important attributes driving the positive or negative ratings. To illustrate this gap, we provide samples of one high and one low review in Table 3. In the case of the high review, the reviewer gave low to moderate ratings on all the attributes but was still willing to give the property a very high star rating. On the other hand, in the case of the low review, the reviewer provided moderate to high ratings on most attributes but, due to one or two issues, provided an overall low rating. The regression results strongly reflect the contentious "factors as components versus antecedents" argument in the service quality literature (Dabholkar et al., 2000). Essentially, these results suggest that the various factors related to quality are mere antecedents to consumers' evaluations of overall quality (Getty & Thompson, 1995).
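The regression setup behind this finding can be reproduced in outline with simulated data. The sketch below is our own illustration, not the study's data or code: the coefficients and noise level are assumptions chosen so that the overall rating depends only weakly on the attribute ratings, which drives R-squared well below 1, as the paper reports.

```python
import numpy as np

# Simulated illustration of the Table 2 setup: overall star rating regressed
# on six 1-5 attribute ratings. All numbers here are assumptions.
rng = np.random.default_rng(1)
n = 600
X = rng.integers(1, 6, size=(n, 6)).astype(float)  # six 1-5 attribute ratings

# Overall rating depends only weakly on two attributes, plus large noise.
y = 1.6 + 0.25 * X[:, 0] + 0.27 * X[:, 4] + rng.normal(0, 0.75, n)

# Ordinary least squares with an intercept column, via numpy's lstsq.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta
r_squared = 1 - residuals.var() / y.var()
print(round(r_squared, 2))
```

Under these assumptions the R-squared lands in the same low range the paper reports, showing how a weak attribute-to-overall link manifests as low explanatory power.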
To test Hypotheses 3 and 4, a series of ANOVAs was run with price and word count as dependent variables and the review rating as the factor. The results are shown in Table 4 and illustrated in Figures 2 and 3. However, the ANOVA results could be an artifact of the large differences in the number of reviews under each star rating (see Table 4). To confirm the tests, the two variables were standardized and simple regressions were run with star rating as the dependent variable.
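A sketch of this two-step check, assuming SciPy is available; the simulated word counts, group sizes, and effect sizes are illustrative only and deliberately echo H3 (lower ratings, longer reviews):

```python
import numpy as np
from scipy import stats

# Hypothetical word counts grouped by star rating (1-5); lower-rated
# reviews are simulated with higher mean word counts.
rng = np.random.default_rng(1)
groups = [rng.normal(loc=250 - 40 * s, scale=50, size=80)  # word counts
          for s in range(1, 6)]                            # ratings 1..5

# One-way ANOVA: does mean word count differ across star ratings?
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")

# Confirmatory check: standardize word count and regress it on rating.
ratings = np.repeat(np.arange(1.0, 6.0), 80)
words = np.concatenate(groups)
z = (words - words.mean()) / words.std()
slope, _ = np.polyfit(ratings, z, 1)
print("standardized slope:", round(slope, 2))  # negative slope expected
```

The regression on standardized values guards against the unequal group sizes that can distort the raw ANOVA, mirroring the confirmation step described above.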
Table 4 shows certain interesting results. Reviews with lower ratingsare typically associated with greater word count and generally represent
TABLE 3 Example of a high review

High review: "This was our 1st time renting a house vs. a condo. This house was great. If you wanted to spend a little time by yourself you could, so many rooms to go to. The house was so comfortable, like being at home. It was our home for the week. Thank you. Hope to rent it next year if we haven't bought one ourselves."
Star rating: 5; Avg. rating: 3; Value: 3; Check-in: 2; Location: 3; Clean: 2; Comfort: 4; Service: 3

Low review: "This property was gravely misrepresented. No fireplace, dirty, trash in the doorway with cleaning equipment. The thermostat was on the heater and would not properly heat the house. We were cold, uncomfortable, and ripped off. The management did credit us for another cabin on a return trip next year."
Star rating: 1; Avg. rating: 3; Value: 1; Check-in: 5; Location: 5; Clean: 1; Comfort: 1; Service: 5
What Determines Consumers’ Ratings of Service Providers? 147
TABLE 4 Results of ANOVA with word count and price

Variable                        df       Mean square    F       Sig.
Word count   Between groups     4        274,789.2      70.21   0.001
             Within groups      3,180      3,914.0
             Total              3,184
Price        Between groups     4         98,652.5       4.88   0.001
             Within groups      2,192     20,230.1
             Total              2,196
FIGURE 2 Differences in average word count across star ratings.
properties that charge high rates. The result pertaining to word count suggests that consumers who are unhappy with a rental property tend to write more to express their dissatisfaction. As shown in Figure 2, lower ratings are associated with a higher number of words in the reviews and vice versa. This result should also be viewed in the context of the star rating distribution discussed in previous sections: the correlation would probably have been much stronger were it not for the overwhelming number of positive reviews in the data set. Similarly, the price of the property seems to be strongly associated with negative reviews. As mentioned earlier, higher cost increases the consumers'
148 P. Racherla et al.
FIGURE 3 Differences in average price/day across ratings.
involvement in the purchase. This, coupled with the intangible and experiential nature of tourism and hospitality services, creates greater potential for expectation-disconfirmation. The data therefore confirm H3 and reject H4.
Analysis of the Review Text
To conduct the text analysis, we extracted a random sample of 600 eachof high and low reviews. The text sample was analyzed using centeringresonance analysis (CRA), a mode of computer-assisted network-based textanalysis that represents the content of large sets of texts by identifying themost important words that link other words in the network (Corman &Dooley, 2006). CRA calculates the words’ influence within texts, using theirposition in the text’s structure of words (Dooley & Corman, 2002). Thisinfluence is based on words’ coefficient of betweenness centrality, definedby Corman, Kuhn, McPhee, and Dooley (2002) as “the extent to which a par-ticular centering word mediates chains of association in the CRA network”(p. 177). The results of aggregating the possible centers or nodes (the mostinfluential words) in a message denote the author’s intentional acts regardingword choice and message meaning.
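A toy approximation of this idea, assuming the networkx library is available; real CRA links noun phrases through linguistic centering, so the naive co-occurrence graph and sample sentences below are only illustrative:

```python
import itertools
import networkx as nx

# Link content words that co-occur within a sentence, then rank words by
# betweenness centrality, echoing Corman et al.'s notion of a word's
# "influence" as the extent to which it mediates chains of association.
reviews = [
    "great house close to the beach",
    "the house was clean and comfortable",
    "beach access was great but the pool was dirty",
    "dirty kitchen and uncomfortable beds near the pool",
]

G = nx.Graph()
for sentence in reviews:
    words = {w for w in sentence.split() if len(w) > 3}  # crude content filter
    G.add_edges_from(itertools.combinations(sorted(words), 2))

# Betweenness centrality: how often a word lies on shortest paths
# between other words in the network.
influence = nx.betweenness_centrality(G)
for word, score in sorted(influence.items(), key=lambda kv: -kv[1])[:5]:
    print(word, round(score, 3))
```

Words that bridge otherwise separate clusters of vocabulary (e.g., a feature mentioned in both positive and negative contexts) surface at the top of this ranking.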
Subsequently, patterns were compared to identify the most significantparts of the text that predict the numerical ratings provided by the reviewers.We focus particularly on the linguistic properties of the reviews and specificvocabulary that reflects the overall opinion or emotion expressed by thereviewers. These can be classified as attributes or features of the property(e.g., rooms, bed, dishes, furniture, service, etc.) that figure prominently andcontribute to the positive or negative ratings, and the positive or negativewords (e.g., great, amazing, dirty, and wonderful) that reflect the sentiment ofthe reviewer regarding the property as well as the aforementioned features.
In the low reviews, the overall tone can be termed argumentative (i.e., the reviewers tend to lay out a great deal of reasoning and provide specific examples to drive home their argument). This inference is based on the greater share of negative connector words (34%) such as "but," "however," "as if," and "even though." The percentage refers to the share of these words within the overall set of connector words; for example, of all the connector words in the low reviews, 34% are negative connectors such as "but" and "even though." It is interesting to note that reviewers in this category speak directly to other consumers through greater use of words such as "you" (46%) and seek to influence their decision making. Similar results were found in recent studies that analyzed the textual content of online WOM related to the tourism and hospitality industry (Sparks & Browning, 2010).
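The connector-word tally described above can be sketched as follows; the word lists and sample reviews are illustrative stand-ins, not the study's actual lexicon:

```python
# Illustrative (not the study's) connector lexicons.
NEGATIVE_CONNECTORS = {"but", "however", "although", "though"}
POSITIVE_CONNECTORS = {"and", "also", "moreover", "plus"}

def connector_shares(texts):
    """Return the share of negative connectors among all connector words."""
    neg = pos = 0
    for text in texts:
        for token in text.lower().split():
            word = token.strip(".,!?\"'")
            if word in NEGATIVE_CONNECTORS:
                neg += 1
            elif word in POSITIVE_CONNECTORS:
                pos += 1
    total = neg + pos
    return neg / total if total else 0.0

low_reviews = ["Nice location, but the pool was dirty and cold.",
               "The house looked fine; however, the heater failed."]
print(round(connector_shares(low_reviews), 2))  # -> 0.67
```

The same counting scheme applies to any word class of interest (personal pronouns, positive adjectives), yielding the within-category percentages reported in the text.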
In the high reviews, the overall tone tends to be enunciative, and the reviewers use the text to reinforce their numerical ratings. As shown earlier, the average number of words in these reviews tends to be lower than in the negative reviews. In these reviews, the consumers focus on their personal experiences (personal pronouns such as "we" and "I"; 54%) and appeal to the audience through the use of highly positive adjectives such as great, fantastic, and awesome (45%).
Tables 5 and 6 show the main issues/aspects that are prominently discussed in the low and high reviews, respectively. The order of these issues is based on the statistically significant frequency of their occurrence in the 1,200 reviews.
The use of various sentiment words (as described previously) was fur-ther investigated. Interestingly, the prominent issues discussed in the positivereviews sometimes do not even figure in the rating scales provided tothe reviewers. Overall, the results of the qualitative text analysis can besummarized as follows:
● Extremely positive sentiment words such as “great,” “wonderful,”“amazing,” etc. are more likely to be associated with the property andits features in the high reviews when compared to the low reviews.The properties and their features in the low reviews are more likelyto be characterized with moderately positive words such as “nice” and“okay.”
TABLE 5 Top 5 prominent issues in low reviews

Prominently mentioned features (Freq.) with sample sentences:

Bedroom, beds, linen, sheets (383)
• "Also the same bedroom/bathroom did not have any door separating the bedroom from the bathroom!! No privacy!!"
• "Sleeping arrangements were very awkward. The listing sheet described 2 full beds. We brought our own linens, the beds were queen sized. Had to purchase sheet."

Kitchen, kitchenware, dishes, dishwasher (297)
• "You must not forget to step up when you're walking out of the kitchen into the dining room, otherwise you'll trip over the 3' step up."
• "No potholders in the kitchen and it didn't seem like there were as many kitchen supplies as before. We just had to make due without."

Beach, beaches, beachfront (296)
• "The week was the best vacation we have ever had as a family. The beach house was a class act!!!!!"
• "Disappointment #1: It looks like this house is very close to beach, but to get there you have to go around through public path."

Pool, poolside (218)
• "The pool was too dirty to use. We checked in on Thursday and the pool was not cleaned until Saturday."
• "Also it is inconvenient to have to call to get someone to open the pool. Didn't understand why Jacuzzi was open and not the pool?"

Bathroom(s), bath, bathtub, shower (202)
• "The next problem was in another bathroom (main floor)—the wall heater did not work so showering in here was a bit chilly. The next problem was in the 'master' bathroom (connected to the bedroom with the king bed). The sliding shower doors need to be totally replaced."
• "Bathtub drains clogged in three second floor bathrooms."
TABLE 6 Top 5 prominent issues in high reviews

Prominently mentioned features (Freq.) with sample sentences:

Beach, beaches, beachfront (506)
• "Perfect rental property for those who enjoy the activity of the beach and the serenity of the sound! The house is just down the road from ocean"
• "The large porch offered a beautiful view of the beach and comfortable new rocking chairs"

Pool, poolside (218)
• "The outdoor shower, dipping pool and screened-in porch were an added bonus. Perfect house for a low key weekend."
• "House was great for kids. Pool was terrific and the playroom was perfect."

Location (192)
• "We had a wonderful time. My wife loves fishing and the cottage was in a perfect location to go fishing on the pier."
• "The home is somewhat dated but the location was great. We have vacationed in the Chatham area for many summers and this time it was great too."

Bedroom, beds (186)
• "The family suite is huge with a king size and twin bed and still room for a pack-n-play. The kitchen and dining room were huge, clean, and nice!"
• "The only comments that I would say to the negative is the front bedroom by the front door always seemed damp which seem to attract more bugs."

Kitchen, kitchenware, dishes, dishwasher (178)
• "All of the amenities in the house are upscale—loved the kitchen and plasma TVs!"
• "The kitchen had good basic utensils, dishes and cooking pans."
● Negative reviews reflect the extent of expectation–disconfirmation that consumers face during their stay in the rental property. These reviews have a greater proportion of phrases such as "disappointment" and "not as expected." Further, extremely negative sentiment words (e.g., "disappointed," "discomfort," "filthy") and adjectives are more likely to be associated with features in low reviews than in high reviews.
● Almost 40% of the consumers in the high reviews express their willing-ness to return to the property as opposed to only 9% of the consumersin the negative reviews. Further, more than 20% of the consumers inthe high reviews express their strong desire to recommend the prop-erty to their friends and family members as opposed to only 5% in thelow reviews.
DISCUSSION
In this section, we reiterate our research questions and discuss how theresults address individual questions.
RQ1: Do overall ratings reflect the perceived quality of the service providers (as derived from the consumers' expressed sentiment in the reviews)? What are the underlying patterns and the distribution of the overall ratings in online reviews?
The overall distribution of the reviews on FlipKey.com tends to be heavily skewed toward positive ratings. However, this type of distribution is at odds with the U-shaped distribution of ratings usually seen on product-oriented sites such as Amazon.com and TripAdvisor.com. At the same time, these findings reflect the general observations in previous studies regarding the distribution of online consumer reviews. Four possible explanations can be given for this distribution:
● In general, consumers are highly satisfied with the rental propertiesand this satisfaction is reflected in greater number of high reviews.
● The majority of the properties listed on the Web site have outstanding perceived quality and provide satisfactory services to customers. While this may be true for some properties, it is a weak argument when applied to all the properties listed on the Web site (Kadet, 2007).
● The reviews available on the Web site may not be a truly representative sample of consumer opinion. This can happen due to purchasing bias (Admati & Pfleiderer, 2004): because customers went through the trouble of purchasing and experiencing the facilities, they tend to be positively biased toward their purchase. Consequently, consumers may exaggerate, inflating their polarized views against the instinct to provide moderate opinions (Admati &
Pfleiderer, 2004). Further, this could also happen due to underreporting bias: customers do not have strong enough incentives to take the trouble of reporting their opinions, and the few self-motivated individuals who do provide reviews become a self-selected sample (Banerjee & Fudenberg, 2004). Because consumers lack sufficient incentives to report their views in an online forum, only those with polarized views are likely to invest the time and effort required to write an online review (Talwar et al., 2007).
● The truncated distribution could also be a function of the review solic-iting strategy adopted by FlipKey.com. The firm sends personalizedinvitations to only those consumers whose stay has been verified byproperty managers. Recent studies (Gregoire & Fisher, 2008) suggestthat customers engage in socially desirable behavior even in imper-sonal interactions with the service providers (in this case, provide areasonably positive review even if the experience has not been up tothe mark).
On the whole, the truncated distribution suggests a lack of moderate and balanced opinions, which could be perpetuated by the very nature of the review systems. For instance, it is to be expected that over a longer time span the number of negative reviews will decrease even further. As these reviews influence customer choices, customers will avoid negatively reviewed accommodations, which will either go out of business or receive so few guests that few remain to write negative reviews. At that point, underreporting is no longer a bias but simply an unintended consequence of the entire review system.
Further, the overall star rating typically used on traveler review sites is probably not the most accurate indicator of service providers' perceived quality. This conclusion stems from the results of this study, which show that the variance in the overall star rating explained by the ratings on other property attributes is generally less than 40%.
RQ2: What are the linkages among the various attributes of theservice providers and reviews?
The word count of the reviews is negatively associated with the overall rating (i.e., consumers who are extremely dissatisfied tend to expend more time and energy to extensively critique the property, its amenities, and services). The text analysis also suggests that negative reviews are typically argumentative and are justified with a greater number of reasons. This is in itself both a problem and an opportunity for service providers. It is a problem because these verbose comments will stay in the cyber-world forever. Given that more and more consumers are using search engines, these comments will be more visible during the information search process. It is well known that in the online world, given the paucity of diagnostic
information, consumers pay more attention to negative comments than to positive comments (Ba & Pavlou, 2002). It is also an opportunity, since service providers can use these verbose reviews as a basis for improving their services. Similarly, the price of the property seems to correlate positively with extreme negative reviews (i.e., when the price is high, traveler expectations are heightened). If the properties do not match these expectations, extreme reactions follow. These findings reflect the well-established finding in services marketing of a positive correlation between price and service expectations. Upscale properties charging high prices need to be aware that they may attract greater scrutiny, and subsequently negative reviews, even for small lapses in service provision when compared to properties that charge less.
RQ3: What are the issues that consumers mostly talk about in thetextual portion of online reviews? Do these issues differ from theratings that the consumers have provided via the standard variablesprovided by the review sites?
Numerical ratings often do not capture the actual sentiment and the variety of dimensions on which travelers evaluate properties. Sometimes these numerical ratings can be deceiving and may not reveal the true quality of the property. For instance, the text analysis reveals that location (e.g., closeness to the beach) and cleanliness (e.g., swimming pool areas) are valued more by consumers than features such as check-in and service. Hence, the traveler may assign more weight to some attributes than to others. Similarly, in the negative reviews, issues such as a lack of bedroom paraphernalia or kitchen items seem to be ranked higher in importance than issues such as location and check-in. Therefore, even when a property scores high on the latter dimensions, the overall rating tends to remain low. Another interesting aspect revealed in the text analysis is that even when consumers have given very low ratings to a property, they have not completely given up on it (9% of the low reviews still express a willingness to revisit the property). While this is not a huge number, it suggests that consumers are willing to give a second chance if service providers take the negative feedback into consideration and ensure that service delivery is significantly enhanced. Simply put, as previous literature strongly suggests, service recovery still plays an important role, and service providers now have a chance because they have explicit written feedback from consumers.
IMPLICATIONS
This study has important implications for both research and practice.
Managerial Implications
It is important for travel advisory sites to consider some of the aforementioned biases that exist in review systems. While online review sites are generally quite useful, some recent studies have found that consumers trust government-sponsored tourism Web sites more than online recommendations because of some of the biases inherent in these sites (Cox, Burgess, Sellitto, & Buultjens, 2009). If consumers consistently detect these biases in the review sites, it can lead to a loss of credibility and trust (an important currency for online service providers). Consumers are generally smart and rational, and work around these biases. However, doing so adds to the cognitive overload that results from overwhelming choices and information. In fact, recent evidence (Harteveldt, Johnson, Stark, & Geldern, 2009) suggests that consumers are gradually defecting from online sites for some of the above-mentioned reasons. In this regard, site administrators should take steps to attract those consumers who are willing to provide a balanced analysis of the amenities and services. Site administrators should appreciate the fact that consumer reviews are probably one of the key features of their sites. Therefore, greater effort should go into improving their perceived trustworthiness and usefulness. Steps should be taken to understand the review-writing behavior of consumers, and strong incentives should be developed to attract more balanced reviews. For instance, previous research on traditional WOM (Voorhees, Brady, & Horowitz, 2006) as well as eWOM (Wang & Fesenmaier, 2003) finds that time and effort, and service providers' responsiveness, are the most important reasons cited by consumers for not engaging in any formal WOM. Review Web sites may need to develop more efficient interfaces that ease consumers' time and effort constraints.
In addition, managers must develop strategies that improve consumers' perceptions of responsiveness (a perception that service providers care about the reviews, both positive and negative). For instance, frontline employees should be trained to encourage consumers to provide online feedback.
Review sites can develop better methods to aggregate, synthesize, and publish review content, including the numerical ratings. First, managers should take into consideration the low correlation (as well as the results from the textual analysis) between the overall ratings and consumer ratings on various property attributes. It indicates that review sites need to expand the list of variables so that it accurately reflects consumers' needs and expectations regarding service providers. In addition to expanding the list of variables on which reviewers rate the properties, review sites should also provide the individual consumer with tools to filter properties based on the attributes that they feel are most important for a comfortable stay. Many sites (e.g., Buy.com) that sell electronics allow consumers to filter products based on numerous variables. For instance, a consumer who wants to buy a digital camera can sort the brands based on image quality, memory size, and price, in addition to other variables. Similar strategies could be used by travel
advisory sites, especially with respect to property attributes. For instance,for some travelers, cleanliness and speed of check-in could be attributes ofgreater importance than closeness to the beach and price. By averaging theoverall scores, such nuances are lost.
Further, managers may have to rethink the current practice of averaging all the ratings for a given property, since such simple methods do not take into account the biases inherent in the rating systems. For instance, this study finds a possible and significant underreporting bias in online review sites. A customer's decision not to post an online review may carry important information that can help the users of these sites make more reliable inferences. The majority of today's feedback mechanisms do not publicly disclose the number of silent transactions (i.e., transactions for which no feedback was posted by customers). Such information should become a part of a service provider's online profile on rating sites and other feedback mechanisms. It is important to provide more information about response rates (if not about the randomness of the sample) to help consumers navigate through the clutter and find the information they seek. Given that most consumers are reasonable, it is possible that they will consider response rates while assessing negative and positive ratings. For instance, a property with an overall negative rating will be viewed more favorably if consumers feel the response rates are too low to reach an informed conclusion. Such strategies will engender greater consumer trust over the longer term.
It is important to apply modern semantic analysis and sentiment classification techniques to study the textual content of the reviews and identify the various dimensions on which consumers evaluate vacation homes. Consumers not only consider numerical ratings but also read selected reviews for their textual content. The details provided in the review text are in many ways a better reflection of customer satisfaction as embodied in perceived service quality. Service providers can better position themselves using the dimensions uncovered through textual analysis and target the needs and preferences of their customers.
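A minimal lexicon-based sentiment scorer of the kind this recommendation points to; the word lists below are illustrative stand-ins for a real sentiment lexicon, not the study's method:

```python
# Illustrative positive/negative lexicons drawn from the review vocabulary
# discussed in the text; a production system would use a full lexicon or a
# trained classifier.
POSITIVE = {"great", "wonderful", "amazing", "perfect", "clean", "comfortable"}
NEGATIVE = {"dirty", "filthy", "disappointed", "cold", "broken", "misrepresented"}

def sentiment_score(review: str) -> float:
    """Score in [-1, 1]: +1 if all sentiment words are positive, -1 if all negative."""
    words = [w.strip(".,!?\"'").lower() for w in review.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("The house was great and the beach was wonderful."))  # 1.0
print(sentiment_score("Dirty pool, cold rooms, we were disappointed."))     # -1.0
```

Aggregating such scores per attribute (pool, kitchen, beach) would yield exactly the kind of dimension-level sentiment profile suggested above.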
The results of this study show that negative reviews are typically associated with a greater word count than positive reviews. It is useful to focus specifically on these negative reviews, since they can help service providers uncover patterns of deficiencies in service standards and delivery. These reviews should be treated the same as customer comment cards or letters (Milan, 2007). Further, customers who write such lengthy negative reviews are the most likely to spread negative WOM about the property. Lengthy reviews are excellent sources of information and should be used as a basis for understanding the complaining behavior of the consumer. Word count may be used as a segmentation variable to identify those consumers who are more involved and willing to provide balanced feedback to both the review site and the property management. Various
other research methods, such as telephone interviews or focus groups, should be used to elicit detailed feedback from such consumers. These actions also help in service recovery, post-purchase engagement, and the relationship-building process.
Research Implications
Online review sites (and eWOM) in services are increasingly becoming an important topic of interest. However, extant academic understanding of this phenomenon in the tourism and hospitality domain is relatively weak when compared to other domains, and the literature is sparse. Based on this study, we propose three potential areas for future research.
The truncated rating distribution generally seen in online review sites is an important facet that entails further probing. Despite recent efforts (Yoo & Gretzel, 2008; Wang & Fesenmaier, 2003), not much is known about consumers' motivation to write online reviews on services-oriented sites. These studies do uncover various antecedents of review-writing behavior but suffer from a sampling bias, since they survey a self-selected sample of consumers who have already provided reviews. As the results of this study indicate, underreporting bias (a lack of moderate and low-rated reviews) is still prevalent in online review sites and can be a problem going forward. Future research should investigate this interesting phenomenon and identify the sociopsychological factors that increase consumers' propensity to write online reviews.
Future research should explore the impact of the presence (or lack thereof) of extremely negative reviews within a large corpus of positive reviews. The evidence on the impact of review valence (negative or positive) is equivocal. Proponents of confirmatory bias (Dellarocas, Fan, & Wood, 2005) suggest that consumers look for affirmative evidence supporting a product choice already made. If that is the case, positive reviews are more likely to have a greater effect on consumer actions. On the other hand, the notion of negativity bias (Ba & Pavlou, 2002; Mizerski, 1982) suggests that when consumers are neutral, negative reviews tend to become more salient than positive reviews. In the presence of a large number of positive reviews, consumers specifically seek negative reviews that they feel will help them identify specific problems with a service. This becomes an important factor given the intangible nature of services. Understanding how consumers reconcile the information provided by both negative and positive reviews has implications for the design and management of review sites.
The results in this study show that the amount of information in a review(word count) varies significantly with the ratings of the review. Word counthas important implications for consumers’ trust in the reviews. When con-sumers are willing to read and compare open-ended comments from otherconsumers, the amount of information in a review can matter. Research in
consumer behavior (e.g., Racherla, 2008; Kempf & Palan, 2006) suggeststhat WOM communication of high-involvement products is more persuasiveand increases the decision maker’s confidence when the message senderprovides greater number of reasons. Even when an increased amount ofinformation does not increase the accuracy of a prediction or diagnosis, itcan still lead to increased confidence in the decision. Longer reviews pro-vide more information and are likely to be perceived as more helpful andpersuasive than shorter reviews.
LIMITATIONS
This study relies on data from a single Web site. Given that the unit of analysis is the review, as opposed to the entire Web site, this lends a certain validity to the findings. Nevertheless, to enhance the generalizability and validity of the findings, similar techniques should be applied to reviews from a cross-section of Web sites and various services. This would provide greater insight into the patterns of reviews in the tourism and hospitality industry.
The management of the property has some influence on which customers are invited by FlipKey to contribute a review. It is quite possible that a verification process such as the one used by FlipKey could create a selection bias. However, FlipKey management clarified that the invitations are sent by the Web site and that the only role played by the property managers is the verification; any consumer who provides a valid e-mail address and has stayed at a property receives an invitation to review. Consequently, there is no reason to question the fidelity of the data, although such biases should be kept in mind while collecting objective data such as that used in this study.
REFERENCES
Admati, A. R., & Pfleiderer, P. (1986). A monopolistic market for information. Journal of Economic Theory, 39, 400–438.
Archak, N., Ghose, A., & Ipeirotis, P. (2011). Deriving the pricing power of product features by mining consumer reviews. Management Science, 57(8), 1485–1509.
Ba, S., & Pavlou, P. A. (2002). Evidence of the effect of trust building technology in electronic markets: Price premiums and buyer behavior. MIS Quarterly, 26, 243–268.
Banerjee, A., & Fudenberg, D. (2004). Word-of-mouth learning. Games and Economic Behavior, 46(1), 1–22.
Bansal, H. S., & Voyer, P. A. (2000). Word-of-mouth processes within a services purchase decision context. Journal of Service Research, 3(2), 166–177.
Bearden, W. O., & Mason, J. B. (1984). An investigation of influences on consumer complaint reports. Advances in Consumer Research, 11, 490–495.
Bearden, W. O., & Teel, J. E. (1983). Selected determinants of consumer satisfaction and complaint reports. Journal of Marketing Research, 20(1), 21–28.
Bronner, F., & de Hoog, R. (2011). Vacationers and eWOM: Who posts, and why, where, and what? Journal of Travel Research, 50(1), 15.
Chen, P. Y., Wu, S.-Y., & Yoon, J. (2004, December). The impact of online recommendations and consumer feedback on sales. Paper presented at the International Conference on Information Systems, Washington, DC.
Chevalier, J., & Mayzlin, D. (2006). The effect of word of mouth online: Online book reviews. Journal of Marketing Research, 43, 345–354.
Cho, Y., Im, I., Hiltz, R., & Fjermestad, J. (2002). The effects of post-purchase evaluation factors on online vs. offline customer complaining behavior: Implications for customer loyalty. Advances in Consumer Research, 29(1), 318–326.
Corman, S., & Dooley, K. (2006). Crawdad Text Analysis System 1.2. Chandler, AZ: Crawdad Technologies.
Corman, S., Kuhn, T., McPhee, R., & Dooley, K. (2002). Studying complex discursive systems: Centering resonance analysis of organizational communication. Human Communication Research, 28(2), 157–206.
Court, D., Elzinga, D., Mulder, S., & Vetvik, O. J. (2009). The consumer decision journey. McKinsey Quarterly, 3. New York, NY: McKinsey Inc.
Cox, C., Burgess, S., Sellitto, C., & Buultjens, J. (2009). The role of user-generated content in tourists’ travel planning behavior. Journal of Hospitality Marketing & Management, 18, 743–764.
Dabholkar, P. A. (2006). Factors influencing consumer choice of a “rating web site”: An experimental investigation of an online interactive decision aid. Journal of Marketing Theory and Practice, 14(4), 259–273.
Dellarocas, C., Fan, M., & Wood, C. A. (2003, December). Self-interest, reciprocity, and participation in online reputation systems. 2003 Workshop on Information Systems and Economics (WISE), Seattle, WA.
Dellarocas, C., & Narayan, R. (2006). A statistical measure of a population’s propensity to engage in post-purchase online word-of-mouth. Statistical Science, 21, 277.
Dooley, K. J., & Corman, S. R. (2002). The dynamics of electronic media coverage. In B. S. Greenberg (Ed.), Communication and terrorism: Public and media responses to 9/11 (pp. 121–135). Cresskill, NJ: Hampton Press.
Duan, W., Gu, B., & Whinston, A. B. (2008). Do online reviews matter? An empirical investigation of panel data. Decision Support Systems, 45, 1007–1016.
eMarketer. (2010). The role of customer product reviews. Retrieved from http://www.emarketer.com/(S(wc00ee55fkfmb5mi3ukvex45))/Article.aspx?R=1008019
Forrester. (2006). Forty facts about US online shoppers. Cambridge, MA: Author.
Getty, J., & Thompson, K. (1995). The relationship between quality, satisfaction, and recommending behavior in lodging decisions. Journal of Hospitality & Leisure Marketing, 2(3), 3–22.
Ghose, A., & Ipeirotis, P. G. (2008). Estimating the socio-economic impact of product reviews: Mining text and reviewer characteristics. New York, NY: New York University.
Grégoire, Y., & Fisher, R. J. (2008). Customer betrayal and retaliation: When your best customers become your worst enemies. Journal of the Academy of Marketing Science, 36, 247–261.
Harteveldt, H. H., Johnson, C., Stark, E., & Geldern, K. V. (2009). Using digital channels to calm the angry traveler. Cambridge, MA: Forrester Research.
Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144–147.
Jensen, A. R. (1969). How much can we boost IQ and scholastic achievement? Harvard Educational Review, 39, 1–123.
Kadet, A. (2007). Rah-rah ratings online. SmartMoney Magazine. Retrieved from http://www.smartmoney.com/spend/rip-offs/rah-rah-ratings-online-20837/
Kempf, D. A. S., & Palan, K. M. (2006). The effects of gender and argument strength on the processing of word-of-mouth communication. Academy of Marketing Studies Journal, 10(1), 1–18.
Klayman, J., & Ha, Y. W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228.
Li, X., & Hitt, L. M. (2008). Self-selection and information role of online product reviews. Information Systems Research, 19, 456–474.
Lim, B. C., & Chung, C. M. Y. (2011). The impact of word-of-mouth communication on attribute evaluation. Journal of Business Research, 64(1), 18–23.
Litvin, S. W., Goldsmith, R. E., & Pan, B. (2008). Electronic word-of-mouth in hospitality and tourism management. Tourism Management, 29, 458–468.
Milan, R. (2007). 10 things you can do in response to traveler reviews. Retrieved from http://www.hotelmarketing.com/index.php/content/article/070920_10_things_you_can_do_in_response_to_traveler_reviews/
Mizerski, R. W. (1982). An attribution explanation of the disproportionate influence of unfavorable information. Journal of Consumer Research, 9(3), 301–310.
Mudambi, S. M., & Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1), 185–200.
Murray, K. B., & Schlacter, J. L. (1990). The impact of services versus goods on consumers’ assessment of perceived risk and variability. Journal of the Academy of Marketing Science, 18(1), 51–65.
Nielsen. (2009). Global advertising: Consumers trust real friends and virtual strangers more. Retrieved from http://blog.nielsen.com/nielsenwire/consumer/global-advertising-consumers-trust-real-friends-and-virtual-strangers-the-most/
Pavlou, P. A., & Dimoka, A. (2006). The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Information Systems Research, 17, 392–414.
Racherla, P. (2008). Factors influencing consumers’ trust perceptions of online product reviews: A study of the tourism and hospitality online product review systems. Philadelphia, PA: Temple University.
Richins, M. L. (1982). An investigation of consumers’ attitudes toward complaining. Advances in Consumer Research, 9, 502–506.
Schlossberg, H. (1991). Customer satisfaction: Not a fad but a way of life. Marketing News, 25(20), 22.
Sparks, B. A., & Browning, V. (2010). Complaining in cyberspace: The motives and forms of hotel guests’ complaints online. Journal of Hospitality Marketing & Management, 19, 797–818.
Stringam, B. B., & Gerdes, J., Jr. (2010). An analysis of word-of-mouse ratings and guest comments of online hotel distribution sites. Journal of Hospitality Marketing & Management, 19, 773–796.
Sundaram, D. S., Mitra, K., & Webster, C. (1998). Word-of-mouth communications: A motivational analysis. Advances in Consumer Research, 25(1), 527–531.
Talwar, A., Jurca, R. F., & Faltings, B. (2007, June). Understanding user behavior in online feedback reporting. Paper presented at the Electronic Commerce Conference, New York, NY.
Vermeulen, I. E., & Seegers, D. (2009). Tried and tested: The impact of online hotel reviews on consumer consideration. Tourism Management, 30(1), 123–127.
Voorhees, C. M., Brady, M. K., & Horowitz, D. M. (2006). A voice from the silent masses: An exploratory and comparative analysis of noncomplainers. Journal of the Academy of Marketing Science, 34, 514.
Wang, Y., & Fesenmaier, D. R. (2003). Assessing motivation of contribution in online communities: An empirical investigation of an online travel community. Electronic Markets, 13(1), 33–45.
Westbrook, R. (1987). Product/consumption-based affective responses and post-purchase processes. Journal of Marketing Research, 24, 258–270.
Wetzer, I. M., Zeelenberg, M., & Pieters, R. (2007). “Never eat in that restaurant, I did!”: Exploring why people engage in negative word-of-mouth communication. Psychology and Marketing, 24, 661–680.
Xiang, Z., & Gretzel, U. (2009). Role of social media in online travel information search. Tourism Management, 31(2), 179–188.
Yoo, K. H., & Gretzel, U. (2008). What motivates consumers to write online travel reviews? Information Technology and Tourism, 10, 283–295.
Yoo, K. H., & Gretzel, U. (2010). Influence of personality on travel-related consumer-generated media creation. Computers in Human Behavior, 27, 609–621.
Yuksel, A., & Yuksel, F. (2001). Measurement and management issues in customer satisfaction research: Review, critique and research agenda: Part one. Journal of Travel and Tourism Marketing, 10(4), 47–80.
Zeithaml, V. A. (1981). How consumer evaluation processes differ between goods and services. In J. H. Donnelly & W. R. George (Eds.), Marketing of services (pp. 191–199). Chicago, IL: American Marketing Association.
Zhu, F., & Zhang, X. (2010). Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics. Journal of Marketing, 74(2), 133–148. doi:10.1509/jmkg.74.2.133