
RESPONSE VARIATION IN E-MAIL SURVEYS: AN EXPLORATION

by Kim Bartel Sheehan

Assistant Professor, University of Oregon

and Sally J. McMillan

Assistant Professor, University of Tennessee

Direct correspondence to the first author at:

Kim Sheehan
1275 University of Oregon
Eugene, OR 97403

[email protected]
Phone: 541-346-2088
Fax: 541-346-3462

The authors wish to thank Mariea Hoy and Charles Frazer for their guidance

in conceptualizing and conducting the research projects reported in this study.


Abstract

As e-mail and other related technologies have diffused rapidly into a large and

heterogeneous population, researchers have begun to explore the possibility of using e-mail as a

tool for survey research. However, studies of the technique have primarily compared response rates of e-mail surveys with those of postal mail surveys. Research into e-mail as a

survey method needs to develop the kind of richness that is found in the literature on traditional

postal mail survey methods. This article takes a first step in developing that literature with an in-depth comparison of three studies that used e-mail surveys for data collection. Details are

provided on methods for sampling and survey techniques. Hypothesized relationships between

issue salience and response rate, and between pre-survey notification and response time, were

generally supported.


Researchers’ investigation of computer-mediated communications as a tool for

conducting research and collecting consumer data has been on the increase as Internet usage

among people around the world continues to grow. Today, as many as 100 million people

worldwide have access to e-mail (DOC, 1998), 80 percent of all users log on to the Internet on a

daily basis, and the demographic profile of Internet users in the United States is beginning to

mirror that of the general population (Kehoe, Pitkow and Morton, 1997). Data collected via

home page surveys on the World Wide Web (such as the Georgia Tech studies) are the most

publicized efforts for collecting information via the Internet. However, researchers (e.g.

Bachmann, Elfrink and Vazzana, 1996; Weible and Wallace, 1998; Schaefer and Dillman, 1998)

recently have begun to analyze the use of electronic mail (e-mail) to disseminate surveys and

collect data. Research into the viability of e-mail as a survey method has focused primarily on

comparing response rates and response speeds of e-mail surveys to those of postal mail. Overall,

these studies suggest that e-mail has great potential for survey researchers.

Researchers have reported a wide variation in response rate and speed of response for e-mail surveys (see Table 1). This is not surprising because of the variety of sample populations

and research topics reported in those studies. A researcher planning an e-mail survey today has

minimal information on which to base estimations of response rate and therefore will have

difficulty in determining sample size. The limited published research on e-mail methodology

also provides little information to assist researchers with other basic research design issues such

as questionnaire development and respondent contacts. Researchers have a wealth of information

on response effects for postal mail surveys but the literature addressing such effects for e-mail

surveys is minimal.


The purpose of this study is to provide researchers with information that can assist in the

design and implementation of e-mail survey research. The literature on response effects in postal

mail surveys provides a framework for discussion of key design issues. Sampling and survey

techniques for three studies that used e-mail surveys are described in detail. Finally, we examine

the impact of topic salience and pre-notification, two key predictors of response in postal mail surveys, on the response rate of e-mail surveys.

Review of the Literature

E-Mail Surveys

E-mail has been characterized as a “promising means for conducting future surveys”

(Schaefer and Dillman, 1998), and numerous researchers have recognized the benefits that e-mail provides over postal mail. These benefits include cost savings from elimination or reduction

of paper costs and mailing costs (Parker, 1992) and the rapid speed of response (Bachmann,

Elfrink and Vazzana, 1996; Mehta and Sivadas, 1995). In fact, a consistent finding of the studies

that compare response speeds of surveys delivered via e-mail and postal mail is that e-mail

responses are returned much more quickly than postal mail responses (Bachmann, Elfrink and

Vazzana, 1996; Kiesler and Sproull, 1986; Schaefer and Dillman, 1998; Weible and Wallace,

1998). In these studies, e-mail response speeds ranged from five to ten days, compared to the

response speed of postal mail surveys, which ranged from ten to fifteen days (see Table 1).

Response rates to e-mail surveys, however, do not consistently show benefits over postal

mail, and in some cases fall below what may be seen as acceptable levels of response. Kiesler

and Sproull (1986) and Parker (1992) reported e-mail response rates of over 65 percent, with

both studies showing e-mail response rates significantly higher than the comparable postal mail

method. Schaeffer and Dillman (1998) and Mehta and Sivadas (1996) found no significant


differences in response rates between the two modes. Several other studies (e.g. Schuldt and

Totten, 1994; Tse et al., 1995; Weible and Wallace, 1998) found that e-mail response rates were lower than those of postal mail. Response rates for e-mail surveys (see Table 1) vary from a low of 6 percent (Tse et al., 1995) to a high of 75 percent (Kiesler and Sproull, 1986).

These differences in response rates are not surprising given what is known about

response effects in postal mail surveys. The studies shown in Table 1 have homogeneous

samples, small sample sizes, and diverse survey topics. The types of sample populations are

either employees of a single company (used in two studies) or university professors and deans

(used in five studies), with only one study consisting of a sample of Internet users (Mehta and

Sivadas, 1995). Survey topics ranged from corporate and Internet communication to business

ethics and TQM. Given the lack of consistency in numerous variables in these studies, the range

of response rates and speeds is understandable.

What is missing from the current body of research is a comparison of e-mail survey

responses beyond the simple comparison to response rate of postal mail surveys. The body of

knowledge about postal mail survey methodology suggests a number of issues that must be

considered during the design and implementation of a postal survey and that have the potential

to affect response rate and speed. These effects may also be relevant for e-mail surveys.

Postal Mail Surveys

A review of the relevant literature regarding postal mail methodology suggests that

numerous design and implementation issues may affect both response rate and speed in this

mode. The literature is rich in meta-analyses that provide indications of such effects, and many

of these issues will also be relevant for e-mail studies. The literature has reported the following

effects in postal mail surveys:


Personalization of cover letter. Personalizing letters addressed to specific individuals has

been shown to increase response rates in some postal mail surveys (Dillman, 1978; 1991), while

others (Duncan, 1979) found no effect on response rate due to cover letter personalization. In e-mail surveys, the issue of personalization is complex. A certain degree of personalization will be

automatic in e-mail because the individual’s e-mail address will appear on a header that is often

visible throughout the reading of a message (Schaefer and Dillman, 1998). Beyond this,

however, e-mail can be personalized with a greeting or some other type of relevant information

that relates specifically to the recipient.

Postage. The consensus among researchers appears to be that including a stamped

envelope (versus a business reply envelope) produces higher response rates in postal mail

surveys (Armstrong and Lusk, 1987; Fox, Crask and Kim, 1988; Yammarino, Skinner and

Childers, 1991). This effect is not directly relevant to e-mail because no postage is needed. However, individuals who pay for e-mail usage, either by the message or by the amount of time spent online, may feel that the researcher should provide some small reimbursement for that

cost. These costs may also limit the response potential, as e-mail recipients may automatically

delete the message in order to avoid such costs. Finding a way to address these issues may

challenge researchers.

Incentives. Small cash incentives sent with the mailed survey can increase response rate

(Fox, Crask and Kim, 1988; Goyder, 1982; Yu and Cooper, 1983). However, diminishing

returns on the size of the incentive are evident, indicating that increasing the size of the incentive

does not necessarily increase the response rate. It is not currently possible to provide monetary

incentives through e-mail, although it is possible to provide other types of incentives (such as the

offer of sharing research results). Researchers should consider ways to develop possible


incentives that might be “attached” to e-mail. For example, discount coupons from an online

vendor might be promised to individuals who complete the survey.

Sponsorship. Meta-analyses (Fox, Crask and Kim, 1988; Goyder, 1982; Heberlein and

Baumgartner, 1978) suggest that sponsorship of a study by a university can result in higher

response rates for postal mail surveys than can sponsorship from a corporation. However,

Yammarino, Skinner and Childers (1991) did not find that university sponsorship affected response rate. Sponsorship of e-mail surveys cannot be as explicit as with

postal mail surveys (i.e. the use of a sponsoring organization’s letterhead is not available), but

sponsorship can be made implicitly through statements in the survey instrument and through the

sender’s e-mail address (i.e., an “.edu” suffix on an address would indicate association with an

educational institution).

Questionnaire design. Design issues, such as the length of the questionnaire, can affect

response. The longer the questionnaire, the less likely people are to respond (Heberlein and

Baumgartner, 1978; Steele, Schwendig and Kilpatrick 1992; Yammarino, Skinner and Childers,

1991). This effect is highly relevant to e-mail surveys, where survey length may be measured not

only in the number of printed pages but also in terms of screen length (the number of screens

containing the survey). Because an average printed page can take up two or three computer

screens, respondents may be faced with presumably lengthy surveys of a dozen screens or more.

Anonymity. While some researchers have found that anonymity increases response rates

to postal mail surveys (Yammarino, Skinner and Childers, 1991), other studies have indicated

that this is not necessarily true (Duncan, 1979; Kanuk and Berenson, 1975). This is a key issue

for e-mail surveys because it is difficult to achieve true anonymity in that mode. To do so

requires respondents to access anonymous remailers to respond, and this may be beyond the


technical competence of some Internet users. However, researchers can assure e-mail survey

respondents of confidentiality by informing them that their e-mail addresses will not be recorded

with their survey responses and that the survey data will be considered only in the aggregate.

Issue Salience. In postal mail surveys, the salience of an issue to the sampled population

has been found to have a strong positive correlation with response rate. Salience was defined as

a topic that dealt with an important issue and was also current or timely (Martin, 1994). Heberlein and Baumgartner (1978) found that issue salience had a stronger impact on response rate than did any other research design decision, including advance notice, follow-up contacts, or monetary incentives. Roberson and Sundstrom (1990) and Martin (1994) also found that salience

was a key predictor of response rate for postal mail surveys. Understanding the population to be

sampled is an important first step in determining issue salience. Researchers who use e-mail

surveys may be able to begin to predict response rate on the basis of how salient an issue is to

the individuals who will be solicited to participate in the e-mail survey.

Respondent Contacts. Fox, Crask and Kim (1988) found that pre-notification by letter led

to increases in response rates for postal mail surveys. However, Heberlein and Baumgartner

(1978) found little or no effect associated with pre-notification. Several studies of postal mail

surveys (Kanuk and Berenson, 1975; Murphy, Daley and Dalenberg, 1991; Taylor and Lynn,

1998) found response speed was faster for pre-notified respondents than for those who were not

pre-notified. Yammarino, Skinner and Childers (1991) suggested that follow-up mailings and

repeated contacts seemed to have a greater effect on response rates among those who receive

the survey because of an institutional affiliation than among those who receive a general

consumer survey. Little consensus was found on the value of multiple pre- and post-survey


contacts in postal mail-based surveys. Researchers using postal mail for delivery of messages

must weigh the potential benefit on response rate against the cost of multiple mailings.

Because speed of response has been seen as a key benefit to e-mail surveys, enhancing

response speed is important for researchers who wish to maximize the potential of the mode.

And because most researchers can send multiple e-mail messages for little or no cost, the impact

of multiple contacts on response becomes a highly relevant subject for e-mail surveys.

Hypotheses

This study represents a first step in examining e-mail on the basis of methodological

factors that grow from the rich literature on postal surveys. Two hypotheses regarding response

effects have been developed based on the final two factors reviewed above.

H1: Rate of response to e-mail surveys will increase as issue salience increases.

H2: Speed of response to e-mail surveys will be faster from individuals who received a pre-notification of the survey than from those who did not receive pre-notification.

Methodology

Three separate studies that used e-mail for collection of data were examined for this

article. Study 1 was conducted in early 1997; individuals invited to respond to the survey were

all developers of health-related Web sites. Study 2 was conducted in the summer of 1996;

respondents were faculty, staff, and students at a major southeastern research university. Data for Study 3

were collected from November 1997 through January 1998 from Internet users with a personal

e-mail account in the United States. Despite the different sampling frames, the studies were

similar in several ways (see Table 2). The survey instruments were comparable in terms of

length of the survey and types of scales used to answer the surveys. All studies mentioned a

university affiliation and used a follow-up reminder message. Research results were offered in


all studies as an incentive, and all studies promised confidentiality of responses. However, the

studies differ in two key ways.

First, the studies differ in issue salience. The topic of Study 1 was highly salient to the

subject population. Study 1 asked creators of health-related Web sites to provide information

about the site they had created (e.g. when it began, purpose of the site, etc.) as well as more

general information about the individual’s perception of the Web. Thus, the individuals had a

direct personal interest in most of the questions. The topic of Study 2 was moderately salient to

the population. The topic was introduced to respondents as a study of Internet usage habits, and

such salience as the topic had for this group derived from the fact that the individual collecting the information was a student at the university with which the respondents were affiliated. The topic of Study 3 was not

salient to the subject population. It was presented as a “doctoral student survey” of Internet

usage habits.

Second, the studies differ in terms of pre-notification. Study 1 did not pre-notify

respondents; Studies 2 and 3 did send a pre-notification message. This pre-notification e-mail

message explained the purpose of the research and notified subjects that they would receive a

survey within a designated time period. Subjects were told that they could decline to participate

by replying to this first message and asking that the survey not be sent. This technique is similar

to direct marketing practices used by organizations such as book and record clubs that default to

sending an item unless the consumer declines. Less than two percent of the university faculty,

staff and students of Study 2 declined to participate. Slightly more than 11 percent of the

individuals with personal e-mail accounts in Study 3 declined to participate in this survey.

Because the procedures for conducting e-mail surveys are relatively new, additional

methodological information is provided to guide researchers who wish to use this technique. In


particular, we provide information on sampling and survey techniques used for these three

studies.

Sampling

Study 1 (Web-Site Developers). The Yahoo directory of health-related Web sites was

used as the universe for Study 1. Yahoo does not sequentially number sites within its categories.

Therefore, a strategy was needed for identifying the number of sites in the universe, randomly

selecting sites, and determining how to apply random numbers to specific sites. The size of the

universe was determined by adding totals that Yahoo reported for each health-related category

on the “opening page” (or index) of the Yahoo category listings. At the time of the study, the

numbers following each of these categories were added to obtain the universe size of 14,794.

A list of 1,050 unique random numbers ranging from 1 to 14,794 was drawn. These

numbers were applied by starting at the top of the first on-line page of Yahoo listings and

working through to the bottom of that page. For example, the first item on the first page of the

health category was a topic listing for Alternative Medicine. When that listing was selected,

another screen appeared which included additional sub-categories (e.g., institutes, magazines,

etc.) as well as direct links to sites. Sub-categories were followed until they resulted in a site

listing.
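
The random draw itself is simple to reproduce in code. Here is a minimal sketch in Python; the original selection was performed by hand against the Yahoo listings, so this is an illustration rather than the authors' procedure:

    import random

    UNIVERSE_SIZE = 14794  # health-related sites counted from Yahoo's index page totals
    SAMPLE_SIZE = 1050     # unique random positions to draw

    # random.sample draws without replacement, so every position is unique.
    positions = sorted(random.sample(range(1, UNIVERSE_SIZE + 1), SAMPLE_SIZE))
    # Each position was then matched to a site by counting through the Yahoo
    # category listings from the top of the first page, following
    # sub-categories until a concrete site listing was reached.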

Four factors led to reduction of the sample size. First, some sites had to be eliminated

because they duplicated either the URL (uniform resource locator – the address of the site) or the

e-mail address of another site. In each case, the first site with the duplicate URL and/or e-mail

address was kept and the second was discarded. Second, some URLs listed by Yahoo were not

functioning. Third, some sites did not have an e-mail address. Finally, in some cases an e-mail

address was found but e-mail was undeliverable; 18.6 percent of the selected sites could not be


contacted because of missing or non-functioning e-mail addresses. After adjusting for the above

factors, the total valid sample size for Study 1 was 834.

Study 2 (University Faculty, Staff and Students). The university directories of faculty,

staff and students provided the sample for Study 2. These directories provide names, addresses,

phone numbers, and, where available, e-mail and Web page addresses for individuals listed. A

total of 580 names were selected; two-thirds (386) were from the faculty/staff directory and one-third (194) were from the student directory. A random number was used to select the first page

from which to draw names. Beginning on that page, every eighth name was selected until the

desired number of names was reached. If the eighth name did not have an e-mail address, it was

skipped and replaced with the next entry that included an e-mail address.
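
This kind of systematic selection is easy to express in code. A sketch in Python, assuming the directory has been transcribed into an ordered list of records with an optional e-mail field (the record layout is an assumption for illustration):

    import random

    def systematic_sample(entries, interval=8, target=580):
        # entries: directory records in listed order, e.g. {"name": ..., "email": ...}
        start = random.randrange(interval)  # random starting point in the directory
        selected = []
        i = start
        while len(selected) < target and i < len(entries):
            if entries[i].get("email"):     # keep only entries that list an e-mail address
                selected.append(entries[i])
                i += interval
            else:
                i += 1                      # skip rule: use the next entry with an address
        return selected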

Study 3 (Individuals with Personal E-mail Accounts). A two-stage sampling technique

was used for Study 3. First, the researcher randomly selected Internet Service Providers from an

on-line list. During this selection, the researcher checked to ensure that the ISP provided service

to individuals rather than businesses. A total of 55 ISPs were selected in this stage.

In the second stage, the ISP domain name (e.g. earthlink.com) was entered in the Four11

search engine. This search engine is designed to find individuals’ e-mail addresses. The search

engine could search using any combination of first name, last name, domain name, state, and

country. (Since this study was undertaken, the Four11 search engine has been sold to Yahoo, and the search parameters allowed by the service have changed.)

For Study 3, the only search parameters were that individuals must be in the United

States and that they must have had an account with a specific ISP. With these parameters,

Four11 produced a list of persons who had an e-mail address with the specified ISP. If 200 or

fewer individuals used that ISP for e-mail, all e-mail addresses were provided. If more than 200


individuals had e-mail accounts with the ISP, a random sample of 200 e-mail addresses was

generated. In this case, the total number of individuals with e-mail addresses was also reported.

E-mail lists provided by the Four11 search engine were sampled as needed to ensure that

for each ISP the survey recipients would be drawn at random. Three types of sampling were

used. First, if the Four11 search engine reported that more than 1,000 e-mail addresses were

provided by the ISP, then the 200 random e-mail addresses returned by Four11 were used.

Second, if the Four11 search engine returned 200 random e-mail addresses but reported that

1,000 or fewer individuals had an e-mail account with the ISP, we felt it necessary to sample. In

this case, one-fourth of the 200 e-mail addresses was selected at random. Finally, if fewer than

200 e-mail addresses were returned, we sampled from the list provided by Four11. One-fourth of

these addresses were selected at random. Using these sampling techniques, 5,000 e-mail

addresses were selected to receive the survey. Pre-tests indicated that a substantial percentage of

e-mail addresses would no longer be active. In the event, 1,276 of the 5,000 addresses were not deliverable, leaving a valid sample of 3,724 (see Table 3).
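
The three-branch rule can be summarized compactly. A sketch, assuming each ISP lookup yields the addresses Four11 displayed (at most 200) plus the account total it reported (the function and parameter names are illustrative):

    import random

    def sample_isp(addresses, reported_total):
        # addresses: e-mail addresses Four11 returned for this ISP (at most 200)
        # reported_total: total number of accounts Four11 reported for the ISP
        if reported_total > 1000:
            return addresses                 # Four11's random 200 is already a random sample
        if reported_total > 200:
            return random.sample(addresses, len(addresses) // 4)   # one-fourth of the 200
        return random.sample(addresses, max(1, len(addresses) // 4))  # one-fourth of the full list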

Survey Techniques

As previously mentioned, many standard survey techniques were used in all three studies.

All subjects were given the option not to respond. Confidentiality of respondents was assured.

The introduction to the survey included an estimate of how much time would be required for

completing the survey. However, these e-mail surveys differed from postal survey techniques in

other ways. One major difference between these studies and postal survey research is cost.

Primary costs for the on-line surveys were paper and printer cartridges for printing results and

time for uploading and downloading messages. We estimate that postal mail surveys cost at least


10 times as much as e-mail surveys. Three additional techniques are different in the e-mail

survey format: sending e-mail, reminders, and responding.

Sending E-mail. For Study 1, individual messages were sent to each subject. E-mail

addresses were copied to the e-mail message from a previously created database. While this was

time consuming, it enabled the researcher to keep records of when messages were sent and

received in the same database that was used to record responses. Using this technique, about 200

surveys were mailed each day. Limiting outbound messages to 200 per day reduced the

possibility of overloading either the e-mail system or the researcher’s e-mail in-box.

The technology of sending surveys by e-mail tempts the researcher to “batch” e-mail

addresses. Study 2 used this technique. The researcher created a list of subjects and sent the

message to all of them in one message. While this technique saves a great deal of time on the

front end of the project, it can create problems when subjects respond. Despite instructions,

some respondents used the “reply to all” function of their e-mail reader. As a result, all survey subjects received these extraneous messages, and some of the resulting ill will was directed at the researcher.

For Study 3, a program was written to merge a list of e-mail addresses with the survey

and to send those surveys via e-mail. This technique eliminated the problem of multiple

recipients for the survey message, and it was more efficient than the technique used for Study 1.

Using this technique, about 400 surveys were mailed each day. However, there is a potential

downside to this type of mass mailing. Both the sending and receiving computer systems may

register such mass mailings as “spam” or junk mail. In fact, this did happen. The researcher’s

university account was temporarily suspended until she could assure the system manager that she

was conducting legitimate research. Furthermore, one of the ISPs set up a filter that prohibited


her from communicating with anyone who had an e-mail account through that ISP. These

addresses were then replaced in the sampling pool.
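
The paper does not reproduce the merge program itself. A minimal sketch of the same idea using Python's standard smtplib and email modules follows; the sender address, mail host, and file names are illustrative assumptions, not details from the study:

    import smtplib
    from email.message import EmailMessage

    SURVEY_TEXT = open("survey.txt").read()   # the survey instrument as plain text
    addresses = [line.strip() for line in open("sample.txt") if line.strip()]

    with smtplib.SMTP("mail.example.edu") as server:      # hypothetical university mail host
        for addr in addresses[:400]:                      # cap daily volume, as in Study 3
            msg = EmailMessage()
            msg["From"] = "[email protected]"       # hypothetical sender address
            msg["To"] = addr                              # one recipient per message avoids
            msg["Subject"] = "University research survey" # the "reply to all" problem
            msg.set_content(SURVEY_TEXT)
            server.send_message(msg)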

Reminders. As with postal surveys, a reminder was sent if a completed survey had not

been received in a specified time. This aspect of the methodology reveals a significant relative

advantage of e-mail surveys. With postal surveys, the researcher must allow time for the survey

to be received and returned via postal service mail before sending a reminder. Often, a month or

more elapses between first and second mailings. Furthermore, responses and reminders might

“pass” in the mail. During the time it takes for these transactions to occur, the subject may forget

whether he or she has responded to the survey.

The reminder mailing was sent from five to seven days after the original e-mail survey

was sent. Because of the speed with which messages could be sent, there was little danger of

messages “passing” in e-mail. While some respondents indicated that they had not received the

first message, many simply apologized for being busy and quickly responded to the reminder.

The reminder messages sent for these studies generated from 23 to 48 percent of the total

responses.

Responding. Some individuals reported having difficulty in responding to the survey.

After e-mail exchanges with these individuals, it seemed that the primary problem was that they

did not select the “reply” function of their e-mail program. They seemed to be unaware that they

could not type responses to the survey while they were still in the “read” mode of their e-mail

program. Others did not realize that they could insert their responses into the body of the survey.

This may seem odd for people who probably use e-mail with some regularity. We suspect that

one reason for this problem may have been the novelty of an e-mail survey. Somehow,

individuals did not make the connection that this was just like any other e-mail – one must select


the reply function before responding and responses can be interspersed with the original

message.

Respondents to all three studies were offered the option of printing out the e-mail survey

and completing and returning it in this hard copy version. This option was offered because some

subjects might not feel comfortable with the process of completing the survey via e-mail, or

might appreciate the anonymity of sending the survey through postal mail. For all surveys, less

than three percent of respondents chose this option; most of these respondents included their

names on the paper copy. This indicated to us that anonymity is not much of an issue for these

on-line research subjects.

Finally, respondents seemed to be much more willing to reply to open-ended questions in

the e-mail format than in traditional paper surveys. The reason may be that most of these

respondents find it faster and easier to type responses than to hand write them. Also, in some

cases, respondents used the cut-and-paste functionality of digital communication to provide

answers to open-ended questions. For example, Study 1 asked respondents the purpose of their

Web sites. Many copied their mission statements from their Web sites and pasted them into the

e-mail survey. The novelty of the e-mail format may also have encouraged individuals to annotate scaled responses; for example, many respondents added comments explaining why they selected a certain response on a question.

Findings

In comparing these three studies, our primary interest was in how issue salience and pre-notification might affect response rate and response speed. We expected that groups receiving

the more salient surveys would have higher response rates and that individuals who had received

a pre-notification message might be “primed” for the survey and thus respond more quickly.


Hypothesis 1

Hypothesis 1 predicted that the rate of response to e-mail surveys would increase as issue salience increased. Therefore, it was expected that the response rate for Study 1 would be higher

than Study 2, and the response rate for Study 2 would be higher than Study 3. In fact, response

rates for the three studies were (in order from Study 1 to Study 3): 47.4, 47.2, and 24.0 percent.

Table 3 summarizes information about sample size and response rate for the three studies.

Analysis of Variance revealed significant differences between Study 1 and Study 3 and between

Study 2 and Study 3 (F=138.13, p < .001). These differences were in the expected direction.

Hypothesis 1 is therefore supported.
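
Readers who want to re-check the response-rate comparison from the counts in Table 3 can run an equivalent test as a chi-square test of independence. This is a sketch assuming scipy is available; it reproduces the comparison, not the authors' exact F statistic:

    from scipy.stats import chi2_contingency

    # Rows are studies; columns are [responded, did not respond], from Table 3.
    counts = [[395, 834 - 395],
              [274, 580 - 274],
              [895, 3724 - 895]]
    chi2, p, dof, expected = chi2_contingency(counts)
    # The gap between Studies 1-2 (about 47%) and Study 3 (24%) is
    # overwhelmingly significant, consistent with the reported result.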

Issue salience appears to have a positive effect on response rates for e-mail surveys. As e-mail technology begins to diffuse throughout the population, researchers must ensure that the

universe from which they select e-mail addresses is one that represents individuals for whom the

research topic will be salient. As the base of e-mail users becomes more heterogeneous,

researchers have both more opportunities and more challenges as they attempt to match potential

respondents with salient issues. Particular attention should be paid to the subject line of e-mail

messages as well as to the description of the survey that is provided in pre-notification messages

and in the introduction to the survey.

Hypothesis 2

Hypothesis 2 predicted that speed of response to e-mail surveys would be faster from

individuals who received a pre-notification of the survey than from those who did not receive

pre-notification. Therefore, it was expected that participants in Study 1 would respond more

slowly than respondents in Study 2 and Study 3. Response speeds for these studies were (in order

from Study 1 to Study 3) 4.99 days, 4.57 days, and 3.57 days. Analysis of Variance revealed


significant differences between Study 1 and Study 3 (F=10.84, p<.001). However, there was also

a significant difference between Study 2 and Study 3, and no significant difference between

Study 1 and Study 2. Hypothesis 2 is therefore partially supported.

Pre-notification appeared to increase response speeds for Study 2 and Study 3, although

only Study 3 was significantly different from Study 1, which did not pre-notify respondents.

However, the two studies with pre-notification were also significantly different from each other.

One reason for the somewhat slower response for Study 2 could be that the survey was sent to

the university community during the summer. Although accounts were checked prior to sending

the survey to make sure recipients had recently used their e-mail, the summer schedule may have

resulted in a decrease in the frequency of checking e-mail for members of this sample.

Discussion

E-mail surveys have arrived. It is time for researchers to go beyond simple comparison of

e-mail and postal mail response rate and response speed. Researchers must begin to focus on why

response rate, response speed, and other methodological issues vary among e-mail surveys. This

article provides an important first step in this methodological development by examining factors that may affect response rate and response speed.

Another methodological consideration highlighted in this article is the importance and

challenge of identifying appropriate universes from which to sample. Studies reported in this

article used both on-line and off-line directories to create a sampling pool. On-line sources of e-mail addresses may offer some unique challenges in terms of drawing a random sample. For

example, selecting e-mail addresses from the list of Web sites provided by Yahoo required

multiple steps and tedious hand counting. But, in some cases, the on-line sources can streamline

the researcher’s job. For example, the random lists of e-mail addresses provided by the Four11


search engine automated the process of selecting a random sample. As e-mail diffuses,

researchers should experience little difficulty in finding sources from which to draw samples.

The key challenge will be to ensure that those lists are representative of the universe from which

the researchers wish to sample. Another challenge will be to anticipate the percentage of

unusable addresses that may be generated from those lists.

The studies reported in this article illustrate the fact that many of the standard research

techniques used for postal survey research can be easily applied to e-mail surveys. For example,

reminder mailings can be used to boost the overall response rate. Similarly, the announcement

message sent prior to the survey appears to speed the response time for the survey.

Although topic salience appeared to affect response rates, it is important to note that

all three studies addressed Internet-related topics; thus sampling the Internet population was

appropriate in order to assess attitudes and opinions of those individuals directly involved with

and impacted by the technology. What is not known, however, is how respondents would react

to non-Internet related research. As the Internet population begins to mirror the total population

in the United States, it may be tempting to survey current users on all types of subjects of

interest to advertising researchers. We would caution those studying broader issues that the

results reported in this article may not necessarily apply to their specific field of interest.

Some technological problems still need to be addressed. For example, researchers need to

make sure they are neither accused of nor guilty of the kind of “spamming” that can cripple mail

servers. Despite assurances of academic propriety in conducting Study 3, many individuals

indicated they did not believe the researcher was actually an academic researcher. Instead they

believed that the message had been sent by someone who was engaged in the practice of


“sugging” (selling under the guise of research). Until more legitimate researchers use this

method and it gains credibility among the Internet public, similar situations could arise.

Additionally, wording needs to be developed that will clarify the technical process of

responding to an e-mail survey. While postal mail surveys are filled out using a writing

instrument, e-mail surveys allow for multiple response formats, such as inserting the cursor at

the desired point and typing in an answer, answering via a “form,” and providing answers in a

separate message. All formats may not work on all systems (for example, some AOL members

could not respond by inserting the cursor at a specific point in the text).

A potential limitation to this study is that the degree of personalization varied somewhat

among the studies even though all surveys were personalized to some degree. In Study 1,

respondents were requested to answer questions regarding a specific Web site that they had

developed, and the specific URL was provided to each potential respondent. This may have

increased both personalization and issue salience. Studies 2 and 3 did not have this type of

personalization. Additionally, Study 2 was mailed using a batch mail program. With this

method, the recipient’s name is included in the message, yet the mail program may indicate that he or she was part of a mailing “list.” Whether this type of mailing technique affects response rates, as either a primary or moderating effect, should be researched further.

Findings related to e-mail surveys should encourage researchers. Using e-mail,

researchers can quickly assess consumer opinion. Further research should focus on ways to

increase response rates and capitalize on one of the most evident benefits of e-mail surveys: the

fast response speed. Even among “slower” respondent groups, most surveys are answered in less

than one work week.


In conclusion, e-mail surveys offer an exciting opportunity for researchers: costs are low,

response rates are good, and response times are quick. As researchers apply these techniques to

many types of advertising questions, the research community can work together to refine e-mail

survey methodology.


Table 1. Summary of Survey Research Methods Using E-mail

Kiesler & Sproull (1986). Sample: employees of a Fortune 500 company. Topic: corporate communication.
  Mail:    sample 115, responses 77, rate 67%, response time 10.8 days
  E-mail:  sample 115, responses 86, rate 75%, response time 9.6 days

Parker (1992). Sample: employees of AT&T. Topic: internal communication.
  Mail:    sample 70, responses 27, rate 38%, response time NA
  E-mail:  sample 70, responses 48, rate 68%, response time NA

Schuldt & Totten (1994). Sample: marketing and MIS professors (US). Topic: shareware copying.
  Mail:    sample 200, responses 113, rate 56.5%, response time NA
  E-mail:  sample 218, responses 42, rate 19.3%, response time NA

Mehta & Sivadas (1995). Sample: Usenet users. Topic: Internet communication.
  Mail:    sample 309, responses 173, rate 56.5%*, response time NA
  E-mail:  sample 182, responses 99, rate 54.3%*, response time NA

Tse et al. (1995). Sample: university population (Hong Kong). Topic: business ethics.
  Mail:    sample 200, responses 54, rate 27%, response time 9.79 days
  E-mail:  sample 200, responses 12, rate 6%, response time 8.09 days

Bachmann, Elfrink & Vazzana (1996). Sample: business school deans. Topic: TQM.
  Mail:    sample 224, responses 147, rate 65.6%, response time 11.18 days
  E-mail:  sample 224, responses 117, rate 52.5%, response time 4.68 days

Weible and Wallace (1998). Sample: MIS professors (US). Topic: Internet use.
  Mail:     sample 200, responses 70, rate 35.7%, response time 12.9 days
  Fax:      sample 200, responses 50, rate 20.9%, response time 8.8 days
  E-mail:   sample 200, responses 48, rate 29.8%, response time 6.1 days
  Web form: sample 200, responses 52, rate 32.7%, response time 7.4 days

Schaefer and Dillman (1998). Sample: university faculty. Topic: unknown.
  Mail:    sample 226, responses 130, rate 57.5%, response time 14.39 days
  E-mail:  sample 226, responses 131, rate 58.0%, response time 9.16 days

* Differences not significant


Table 2. Summary of the Three Studies

Study 1
  Sampling frame: creators of health-related Web sites
  Study topic: values of site creators; site purpose and funding
  Time frame: January-February 1997
  Number of questions: 35 (16 computer screens in Pine)
  Mailing method: individual messages
  Pre-survey solicitation: no
  Reminder sent to non-responders: after 7 days
  Responses arriving after the reminder: 48%
  Responses returned by postal mail rather than e-mail: 2.7%

Study 2
  Sampling frame: faculty, staff, and students at a major research university
  Study topic: attitudes toward on-line privacy
  Time frame: May-July 1996
  Number of questions: 45 (20 computer screens in Pine)
  Mailing method: batch
  Pre-survey solicitation: yes
  Reminder sent to non-responders: after 6 days
  Responses arriving after the reminder: 25%
  Responses returned by postal mail rather than e-mail: 1.8%

Study 3
  Sampling frame: individuals with personal e-mail accounts
  Study topic: attitudes and behaviors associated with on-line privacy
  Time frame: November 1997-January 1998
  Number of questions: 36 (18 computer screens in Pine)
  Mailing method: merge program
  Pre-survey solicitation: yes
  Reminder sent to non-responders: after 5 days
  Responses arriving after the reminder: 23%
  Responses returned by postal mail rather than e-mail: 1.7%


Table 3. Survey Response Rate

                                                      Valid Sample   Total Responses   Response Rate
  Study 1 - Web developers                                     834               395           47.4%
  Study 2 - University                                         580               274           47.2%
  Study 3 - Individuals with personal e-mail accounts        3,724               895           24.0%


References

Armstrong, J. S. and E. J. Lusk. “Return Postage in Mail Surveys: A Meta-analysis.” Public Opinion Quarterly 51 (1987): 233-248.

Bachmann, D., J. Elfrink and G. Vazzana. “Tracking the Progress of E-mail Versus Snail Mail.” Marketing Research 8 (1996): 31-35.

Department of Commerce (DOC). “The Emerging Digital Economy.” 1998. Available: http://www.ecommerce.gov/danintro.htm

Dillman, D. A. “The Design and Administration of Mail Surveys.” Annual Review of Sociology 17 (1991): 225-249.

Dillman, D. A. Mail and Telephone Surveys: The Total Design Method. New York: John Wiley and Sons, 1978.

Duncan, W. J. “Mail Questionnaires in Survey Research: A Review of Response Inducement Techniques.” Journal of Management 5, 1 (1979): 39-55.

Fox, R., M. R. Crask, and J. Kim. “Mail Survey Response Rates: A Meta-Analysis of Selected Techniques for Inducing Response.” Public Opinion Quarterly 52 (1988): 467-491.

Goyder, J. C. “Further Evidence on Factors Affecting Response Rates to Mailed Questionnaires.” American Sociological Review 47 (1982): 550-553.

Heberlein, T. A. and R. Baumgartner. “Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature.” American Sociological Review 43 (1978): 447-462.

Kanuk, L. and C. Berenson. “Mail Surveys and Response Rates: A Literature Review.” Journal of Marketing Research 12 (1975): 440-453.

Kehoe, C., J. Pitkow, and K. Morton. “Eighth WWW User Survey.” (1997). Available: http://www.cc.gatech.edu/gvu/user_surveys/survey-04-1997/.

Kiesler, S. and L.S. Sproull. “Response Effects in the Electronic Survey.” Public Opinion Quarterly 50 (1986): 402-413.

Martin, C. L. “The Impact of Topic Interest on Mail Survey Response Behavior.” Journal of the Market Research Society 36, 4 (1994): 327-337.

Mehta, R. and E. Sivadas. “Comparing Response Rates and Response Content in Mail Versus Electronic Surveys.” Journal of the Market Research Society 37, 4 (1995): 429-440.


Murphy, P. R., J. Daley and D. R. Dalenberg. “Exploring the Effects of Postcard Prenotification on Industrial Firms’ Response to Mail Surveys.” Journal of the Market Research Society 33, 4 (1991): 335-345.

Parker, L. “Collecting Data the E-mail Way.” Training and Development (July 1992): 52-54.

Roberson, M. T. and E. Sundstrom. “Questionnaire Design, Return Rates, and Response Favorableness in an Employee Attitude Questionnaire.” Journal of Applied Psychology 75 (1990): 354-357.

Schaefer, D. R. and D. A. Dillman. “Development of a Standard E-Mail Methodology: Results of an Experiment.” Public Opinion Quarterly 62, 3 (1998): 378-390.

Schuldt, B.A. and J. Totten. “Electronic Mail vs. Mail Survey Response Rates.” Marketing Research (1994): 1-7.

Steele, T. J., W. L. Schwendig and J. A. Kilpatrick. “Duplicate Responses to Multiple Survey Mailings: A Problem?” Journal of Advertising Research (March/April 1992): 26-34.

Taylor, S. and P. Lynn. “The Effect of a Preliminary Notification Letter on Response to a Postal Survey of Young People.” Journal of the Market Research Society 40, 2 (1998): 165-178.

Tse, A., K. C. Tse, C. H. Yin, C. B. Ting, K. W. Yi, K. P. Yee, and W. C. Hong. “Comparing Two Methods of Sending Out Questionnaires: E-mail versus Mail.” Journal of the Market Research Society 37, 4 (1995): 441-445.

Weible, R. and J. Wallace. “The Impact of the Internet on Data Collection.” Marketing Research 10, 3 (1998): 19-23.

Yammarino, F.J., S. Skinner and T. L. Childers. “Understanding Mail Survey Response Behavior.” Public Opinion Quarterly 55 (1991): 613-639.

Yu, J. and H. Cooper. “A Quantitative Review of Research Design Effects on Response Rates to Questionnaires.” Journal of Marketing Research 20 (1983): 36-44.
