
Pharmacoepidemiology and Drug Safety 2005; 14: 571–577. Published online 3 June 2005 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/pds.1126

ORIGINAL REPORT

Response rates and representativeness: a lottery incentive improves physician survey return rates

Jane Robertson BPharm, PhD1*, Emily J. Walkom BA (Psyc), PhD1 and Patricia McGettigan MD, MRCPI, FRACP2

1 Discipline of Clinical Pharmacology, School of Medical Practice and Population Health, Faculty of Health, The University of Newcastle, NSW, Australia
2 Division of Medicine, Newcastle Mater Hospital, Waratah, NSW, Australia

SUMMARY

Purpose To test the effect of a $AU2 scratch lottery ticket on response rates to a national mailed questionnaire of Australian general practitioners (GPs) and medical specialists.

Methods A randomized controlled trial was conducted, and the incentive was sent to half of the participants with the first mailing. A single follow-up mailing without the incentive was sent to all non-respondents. Survey respondents were then informed of the research question regarding incentives and allowed to withdraw their study data. Differences in response rates between doctors receiving and not receiving the incentive, and between respondents and non-respondents, were examined.

Results The overall response rate was 47% (443 respondents). Twenty-two respondents (5%) withdrew their data after being informed of the research question. Of the remaining 421 respondents, 233 had received the incentive (response rate 49.7%) and 188 had not (40.1%, p = 0.0032). The absolute increase in response rate with the incentive (9.6%, 95%CI 3.2, 15.9) was quantitatively similar in effect to the reminder mailing (11.8%). The incentive had a larger effect among the GP sample compared with specialists (13.4 vs. 5.9%), although the difference was not statistically significant (p = 0.20). There were no systematic differences in demographic characteristics between respondents and non-respondents.

Conclusions Increased response rates associated with a small incentive may reduce the need for a second mailed reminder, but strong views about the use of incentives may negatively influence the participation of some practitioners. While the overall response rate was low, there was no evidence of bias in our sample. Copyright © 2005 John Wiley & Sons, Ltd.

Key words: response rates; questionnaire; incentive; medical practitioners; health services research; methodology

INTRODUCTION

Surveys and questionnaires are frequently used for health services research. However, there is evidence that the volume of unsolicited requests to participate in research, particularly in primary health care, is adversely affecting participation rates.1,2 To be useful, surveys need to be returned in sufficient numbers, so maximizing response rates is an important aspect of study design.3 While three to four reminders are often recommended,4,5 increasingly ethics committees are viewing this number of contacts as possible harassment of research subjects.5

There is no agreed standard for an acceptable minimum response rate, but survey response rates of 75% or more are considered good.6 Surveys with response rates less than 50% may be difficult to publish. However, it is often difficult to achieve high response rates in surveys of medical practitioners.

Received 23 March 2005; Revised 30 March 2005; Accepted 25 April 2005.
Copyright © 2005 John Wiley & Sons, Ltd.

*Correspondence to: Dr J. Robertson, Clinical Pharmacology, Level 5, Clinical Sciences Building, Newcastle Mater Hospital, Waratah, NSW 2298, Australia. E-mail: [email protected]

Contract grant sponsor: The University of Newcastle.

Mean response rates of 54% for physicians and 68% for non-physicians were found in surveys published in US medical journals more than a decade ago.7 A sample of mailed physician questionnaires published between 1986 and 1995 showed a mean response rate of 61%, with only 52% for larger studies (>1000 observations).8 Response rates have become a conventional and convenient proxy for assessments of bias; however, there is no necessary relationship between response rates and bias, and surveys with low response rates may provide a representative sample of the population of interest.7

Humans take notice of rewards and incentives: witness the success of national lotteries and purchase-related bonus schemes. Incentives may also return benefits for researchers. A recent systematic review suggested that the use of a monetary incentive doubled the odds of response to a postal questionnaire (OR 2.02, 95%CI 1.79, 2.27), and that inclusion of an incentive with the questionnaire was more effective than providing incentives subject to return (OR 1.71, 95%CI 1.29, 2.26).9 A positive relationship was shown between the size of the incentive and the odds of response, the response odds with a $US1 incentive being twice those of no incentive. There was a diminishing marginal benefit, however, with a $US15 incentive increasing the response odds only 2.5 times compared with no incentive. A number of studies in the review included doctors and other health professionals.10-15 More recent studies in the U.S.16,17 and Hong Kong18 support a positive influence of financial incentives on physicians' response rates to postal surveys.

Given the often low response rates of medical practitioners to unsolicited mail questionnaires and the resistance of ethics committees to the use of multiple reminders to study subjects, efforts to maximize response rates are worthwhile. The practice of offering a small financial inducement to take part in research is more common in the U.S. than in Europe and elsewhere.6 Apart from one study using the possibility of a prize as an incentive,19 the effect of monetary incentives on physician survey response rates has been little explored in Australia. We conducted a randomized controlled trial to test the effect of a small lottery-type incentive ($AU2, equivalent to $US1.59, €1.19, £0.83 in March 2005) on response rates to a national survey we were conducting exploring the uptake of 19 new drugs into the clinical practices of Australian general practitioners (GPs) and specialists. The $AU2 incentive was chosen because it fitted within the budget constraints of the study and there is evidence that the size of the incentive is less important than the fact that an incentive is provided.20 Respondents and non-respondents were compared to try to identify any systematic differences that might compromise the generalizability of our study conclusions.

Our first hurdle was securing Ethics Committee approval for the study. We could not inform study subjects of this aspect of our research in the covering letter inviting study participation, as this would have compromised the results. Members of the Ethics Committee felt that this constituted deception under the National Statement on Ethical Conduct in Research Involving Humans (June 1999, Section 17): not everyone was being offered the incentive, doctors were not informed of the randomization and why, and there were concerns that the study on incentives was being 'piggy-backed' onto the main study. After several months of correspondence with the Human Research Ethics Committees of the University of Newcastle and Hunter Area Health Service, approval was granted subject to our sending a 'debriefing' letter to all study respondents to disclose the deception. The letter informed participants of the additional arm to the study and offered them the opportunity to withdraw their data from the study. We report here our experience with the use of a $AU2 scratch lottery incentive and its impact on response rates.

METHODS

A random sample of 527 GPs and 527 specialists was identified manually from the Medical Directory of Australia (MDA), a publicly available list of GPs and specialists. Specialists were drawn from 13 specialties likely to write or initiate a reasonable number of prescriptions (see Figure 1), in numbers proportional to their representation in the MDA and across states. A random sample of 50 psychiatrists was included in our list of 527 specialists.

Our questionnaire was based on instruments used and themes identified in recent United Kingdom studies exploring new drug use,21-24 adapted for use in the Australian setting, and pilot tested among both GPs and specialists. GPs and specialists were randomized to receive/not receive a $AU2 scratch lottery ticket, which was attached to the individually addressed covering letter inviting study participation. A reply-paid envelope was provided for the return of the questionnaire. A single reminder letter with a second copy of the questionnaire was sent to non-respondents 3-4 weeks after the first mailing. The incentive was provided only with the first mailing. As we needed to send 'debriefing' letters to respondents, the questionnaires were numbered and therefore not anonymous.


STATISTICAL ANALYSIS

To compare respondents and non-respondents, we collected basic demographic data (gender, years since graduation with primary medical qualification, place of graduation) from the MDA on all study subjects. Chi-square analysis or Fisher's exact test was used to examine demographic differences between GPs and specialists, respondents and non-respondents, and those receiving and not receiving the incentive. Differences in response rates were examined using a comparison of independent proportions. Odds ratios and 95%CIs were calculated. Tests of significance were two-tailed. StatsDirect statistical software (Version 1.9.8) was used for all calculations.
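The headline comparison of independent proportions and the odds ratio can be reproduced directly from the published counts (233/469 responses with the incentive vs. 188/469 without). The sketch below uses plain Python with standard Wald/log-odds formulas rather than the StatsDirect software the authors used, so the confidence limits agree with the reported values only up to rounding and method differences:

```python
import math

# Published counts: 469 doctors per arm; 233 vs. 188 responses
n1 = n2 = 469
r1, r2 = 233, 188  # incentive, no incentive

p1, p2 = r1 / n1, r2 / n2

# Absolute difference in response rates with a Wald 95% CI
diff = p1 - p2
se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci_diff = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

# Two-tailed z-test on the pooled proportion
p_pool = (r1 + r2) / (n1 + n2)
se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = diff / se_pool
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p

# Odds ratio with a 95% CI on the log-odds scale
odds_ratio = (r1 * (n2 - r2)) / (r2 * (n1 - r1))
se_log_or = math.sqrt(1 / r1 + 1 / (n1 - r1) + 1 / r2 + 1 / (n2 - r2))
ci_or = tuple(math.exp(math.log(odds_ratio) + s * 1.96 * se_log_or)
              for s in (-1, 1))

print(f"diff = {diff:.3f}, 95%CI ({ci_diff[0]:.3f}, {ci_diff[1]:.3f})")
print(f"p = {p_value:.4f}")
print(f"OR = {odds_ratio:.2f}, 95%CI ({ci_or[0]:.2f}, {ci_or[1]:.2f})")
```

This recovers a difference of 9.6% with a CI of roughly (3.3%, 15.9%), p ≈ 0.003, and OR 1.48 with a CI of roughly (1.14, 1.91), consistent with the reported 9.6% (3.2, 15.9), p = 0.0032, and OR 1.48 (1.13, 1.93).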

RESULTS

Of 1054 questionnaires distributed, 938 (89%) were eligible for inclusion after exclusion of undeliverable envelopes and subjects no longer in active clinical practice (Figure 1). The eligible sample included 464 GPs and 474 specialists, with half of each group receiving/not receiving the incentive. The first mailing resulted in 330 responses (response rate 35.2%); the follow-up mailing increased total responses to 443 (47%). The 'debriefing' letters prompted 22 responses from participants requesting their data be withdrawn from the study (5%; 11 GPs, 11 specialists), resulting in a final sample of 421 respondents (44.9% response rate): 186 GPs and 235 specialists.

Figure 1. Flow chart of responses to the questionnaire: 1054 questionnaires distributed*; 116 returned to sender and not eligible for inclusion (58 incentive, 58 no incentive); 938 questionnaires eligible for inclusion (469 incentive, 469 no incentive; 464 GPs, 474 specialists); 330/938 (35.2%) total responses after the first mail-out; 443/938 (47%) after the second mail-out; 22 withdrawals (5% of responses); responses with incentive 233/469 (49.7%), without incentive 188/469 (40.1%); final response rate 421/938 (44.9%; 186 GPs, 235 specialists). *The sample comprised 527 GPs and 527 specialists; specialists were drawn from 13 specialties likely to prescribe drugs (cardiology, clinical pharmacology, endocrinology, gastroenterology, general physician, geriatrics, immunology and allergy, infectious diseases, neurology, renal medicine, respiratory medicine, rehabilitation medicine, and rheumatology) and included a random sample of 50 psychiatrists.


Most of those withdrawing their data did so without further comment. In three cases, respondents expressed strong negative views about the research ('very offensive,' 'a disgraceful time-wasting activity,' or not approving of the tactics and refusing to respond to future questionnaires). There were also negative comments from some of the 16 respondents who did not withdraw their data. While one described the incentive as an insult to doctors, most wrote simply to inform us that the incentive was of no value outside the state of New South Wales, or to comment on the possible impact of the incentive on the results. The effects of the incentive and follow-up mailing are shown graphically in Figure 2.

There was a statistically significant difference in response rates according to the use of the incentive: 49.7% (233/469) for those receiving the incentive versus 40.1% (188/469) for those not receiving it (p = 0.0032). The odds of response increased almost 50% when the incentive was used (OR 1.48, 95%CI 1.13, 1.93); the absolute increase in response rate was 9.6% (95%CI 3.2, 15.9).

The response rates for GPs and specialists were 40.1% (186/464) and 49.6% (235/474), respectively (p = 0.0032). The incentive had a quantitatively larger effect among the GP sample (difference in response rates for incentive versus no incentive was 13.4%, 95%CI 4.5, 22.1) compared with the specialist sample (5.9%, 95%CI -3.1, 14.8). However, the difference in incentive response rates between specialist and GP groups was not statistically significant (p = 0.20). There were no demographic differences between respondent GPs and specialists classified by incentive or no-incentive groups (Table 1). Similarly, there were no differences in the distribution of specialties between respondent and non-respondent groups (data not shown). Regional differences were not detected (data not shown).

There were statistically significant differences in years since graduation between respondents and non-respondents (χ² = 11.52, df = 3, p = 0.0092), although the test for linear trend was not significant (χ² = 0.0309, df = 1, p = 0.86) (Table 2). Significant baseline differences existed between GPs and specialists in gender (85% of specialists and 64% of GPs were male), years since graduation (no specialist had graduated <10 years previously, compared with 4% of GPs) and place of graduation (more Australian/New Zealand graduates and fewer Asian medical school graduates in the specialist group; data not shown).
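The years-since-graduation comparison can be checked directly from the counts in Table 2. A minimal sketch (plain Python rather than the authors' StatsDirect package) computes the Pearson chi-square statistic for the 4 × 2 table and recovers the reported χ² of 11.52 with df = 3:

```python
# Counts from Table 2: years since graduation (<10, 10-19, 20-29, >=30)
respondents     = [3, 94, 195, 129]    # n = 421
non_respondents = [17, 152, 240, 224]  # n = 633

n_resp = sum(respondents)
n_non = sum(non_respondents)
total = n_resp + n_non  # 1054

chi2 = 0.0
for obs_r, obs_n in zip(respondents, non_respondents):
    row_total = obs_r + obs_n
    exp_r = row_total * n_resp / total  # expected count, respondents
    exp_n = row_total * n_non / total   # expected count, non-respondents
    chi2 += (obs_r - exp_r) ** 2 / exp_r + (obs_n - exp_n) ** 2 / exp_n

df = (len(respondents) - 1) * (2 - 1)  # (categories - 1) x (groups - 1) = 3
print(f"chi-square = {chi2:.2f}, df = {df}")  # matches the reported 11.52, df = 3
```

Note that df = 3 (four graduation categories, two groups) supports the value given in the text.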

DISCUSSION

The inclusion of the $AU2 scratch lottery incentive significantly increased response rates among our sample of Australian GPs and specialists. While the incentive was said to be of dubious value in states other than NSW (because of perceived difficulties in claiming any lottery prize for residents in other states), the absolute increase in response rate of 9.6% was similar

Figure 2. Cumulative responses to the questionnaire, by week (weeks 1-15), showing total responses and the incentive and no-incentive groups separately, with the timing of the follow-up mail-out and the sending of the debrief letters marked


in magnitude to the impact of the reminder mailing (11.9%), an effect seen in other studies.10,14 Although the 5% of respondents who withdrew their data after the debriefing letter were not evenly distributed across the incentive and no-incentive groups, exclusion of these data did not alter the study conclusions. Thus the costs of providing the incentive may be weighed favorably against the costs of a second reminder mailing. The incentive approach also reduces the 'harassment' that reminder mailings inflict on research subjects, an issue of concern to human research ethics committees.5

The price of this efficiency may be antagonism of some potential respondents, albeit in small numbers, given the few strongly held views encountered about the use of incentives. Most of those withdrawing their data provided no further comments, so we cannot be sure whether the concern related to the use of the incentive, the 'deception' of being included in an RCT exploring the effect of the intervention, or both. It is possible that we underestimated the extent of doctor disquiet with incentives in this study, with doctors unwilling to 'waste' more time expressing their views. However, we did receive a number of letters expressing disapproval but not to the point of withdrawing study data, so it is difficult to be sure of the true extent of strongly negative reactions. We are unaware of any other published studies that have canvassed the

Table 2. Demographic characteristics of survey respondents and non-respondents

Characteristic            Initial sample (n = 1054)   Respondents (n = 421)   Non-respondents (n = 633)
Gender
  Male                    786 (74.6)                  310 (73.6)              476 (75.2)
  Female                  268 (25.4)                  111 (26.4)              157 (24.8)
Years since graduation#
  <10 years               20 (1.9)                    3 (0.7)                 17 (2.7)
  10–19 years             246 (23.3)                  94 (22.3)               152 (23.9)
  20–29 years             435 (41.3)                  195 (46.3)              240 (37.9)
  ≥30 years               353 (33.5)                  129 (30.6)              224 (35.4)
Place of graduation
  Australia/NZ            871 (82.6)                  348 (82.7)              523 (82.6)
  Asia*                   59 (5.6)                    23 (5.5)                36 (5.7)
  Other                   124 (11.8)                  50 (11.8)               74 (11.7)

#Respondent versus non-respondent: χ² = 11.52, df = 3, p = 0.0092; χ² for linear trend = 0.0309, df = 1, p = 0.86.
*Includes Indian subcontinent.

Table 1. Demographic characteristics of respondent GPs and specialists

CharacteristicNumber (%) of subjects

GPs Specialists

No incentive (n¼ 77) Incentive (n¼ 109) No incentive (n¼ 111) Incentive (n¼ 124)

GenderMale 45 (58.4) 71 (65.1) 92 (82.9) 102 (82.3)Female 32 (41.6) 38 (34.9) 19 (17.1) 22 (17.7)

Years since graduation<10 years 0 (0) 3 (2.8) 0 (0) 0 (0)10–19 years 19 (24.7) 29 (26.6) 22 (19.8) 24 (19.4)20–29 years 44 (57.1) 55 (50.5) 40 (36.0) 56 (45.2)�30 years 14 (18.2) 22 (20.2) 49 (44.2) 44 (35.5)

Place of graduationAustralia/NZ 62 (80.5) 93 (85.3) 87 (78.4) 106 (85.5)Asia* 5 (6.5) 6 (5.5) 7 (6.3) 5 (4.0)Other 10 (13.0) 10 (9.2) 17 (15.3) 13 (10.5)

*Includes Indian sub-continent.


possibility and extent of negative sentiment towards the use of incentives.

It is generally accepted that the size of incentive required to increase survey response rates need not be large.10,12,14,16 The incentive is a modest token of appreciation, and response may be motivated by the respondent's perception that the investigators acknowledge, however nominally, the time and effort taken to respond to the research survey.17 However, there are many influences on response rates, a key issue being the perceived interest and importance of the research topic.25 Perhaps the small monetary incentive 'buys' a few seconds of time during which the practitioner considers whether the topic is of sufficient interest to give up further time to complete the survey.

Despite the positive effects of the incentive, the response rates achieved were well below the 70 to 80% response rates many consider adequate.8 The study employed numerous strategies considered to impact positively on response rates7,9,14: university co-sponsorship, attractive presentation format, personalized letter of invitation, a follow-up reminder with a second copy of the questionnaire, and provision of a reply-paid envelope for return of the questionnaire. We used an instrument that required few written responses, was pilot tested in the target audience, and addressed a topical issue. While the study protocol meant that responses were not anonymous, there is evidence that survey responses are lower when the study is anonymous.7

While there were some differences in years since graduation between respondents and non-respondents, there was no consistent trend towards more or less experienced clinicians (Table 2). The higher response rate from specialists (49.6 vs. 40.1%) has been reported in other studies18 and may suggest that specialists are approached less often than GPs to participate in such surveys, as well as time and interest pressures in general practice that might preclude participation in research projects. There is anecdotal evidence of survey 'fatigue' among Australian doctors, especially GPs. Research projects like ours are competing for doctors' limited time with better remunerated studies conducted by pharmaceutical companies and marketing firms.20

The 116 'undeliverable' questionnaires in this study were in part due to the use of the MDA 2000 and the mobile nature of GPs.26 In 52 cases, an updated address was located. Twenty-two completed questionnaires were returned (45.8%); of these, 12 had received the incentive and 10 had not. These proportions are similar to those in the main study, suggesting that systematic differences between subjects receiving and not receiving the questionnaire were unlikely.

The response rate in this study is consistent with that seen elsewhere for physician (GP and specialist) mailed surveys.7,8 We could find no consistent patterns of differences between respondents and non-respondents that would compromise the generalizability of the conclusions of our survey study. Rather than using response rates as the sole measure of bias, an assessment should be made of the representativeness of the respondents. The degree of bias will depend on several things: the percentage of non-responders, the degree to which they differ from the study population and, importantly, the degree to which biases identified as statistically significant are relevant to the study aim.3,7 Where no obvious biases exist, surveys with low to moderate response rates are likely to provide valid and reliable answers to the research questions of interest.

The $AU1054 ($US636, €625, £434) spent on scratch lottery tickets increased the response rate by almost 10%, the additional 45 responses each 'costing' $AU23.42 ($US18.57, €13.89, £9.65) above the cost of responses without the use of an incentive. However, the additional costs were offset by the reduced number of reminder letters and questionnaires used in the second mailing (608 rather than a repeat mailing to all 938 eligible subjects). The debriefing letter would be avoided where all study subjects received the incentive.

This study adds to the evidence from other settings that the provision of a small incentive significantly increases the odds of doctors responding to a mailed questionnaire survey. The 'debriefing' letter permitted us to assess the 'acceptability' of the method. While the incentive may antagonize a small minority of potential respondents, it represents a simple and effective intervention that may reduce the need for multiple reminder letters, another source of annoyance for some participants. In the absence of demonstrable systematic differences between survey respondents and non-respondents, we conclude that our study sample, although small by conventional standards, is likely to be representative of our target group of GPs and medical specialists.
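The cost arithmetic in the discussion follows directly from the study counts. A small sketch (assuming, as the totals imply, one $AU2 ticket for each of the 527 doctors in the incentive arm of the original 1054-strong sample) reproduces the figures:

```python
tickets = 527       # half of the 1054 doctors sampled received a ticket
ticket_cost = 2.00  # $AU per scratch lottery ticket
total_cost = tickets * ticket_cost  # $AU1054

# Additional responses attributable to the incentive: 233 vs. 188 per arm
extra_responses = 233 - 188  # 45
cost_per_extra = total_cost / extra_responses

# The second mailing went only to non-respondents:
# 938 eligible minus 330 first-round responses
second_mailing = 938 - 330

print(f"total incentive cost: $AU{total_cost:.0f}")
print(f"cost per additional response: $AU{cost_per_extra:.2f}")  # $AU23.42
print(f"reminder packages needed: {second_mailing}")             # 608
```

This confirms the $AU23.42 per additional response and the 608 reminder mailings reported above.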

KEY POINTS

• Small financial incentives can enhance response rates to physician surveys and reduce the need for multiple reminders.

• Incentives may offend a small proportion of potential respondents.

• Survey response rates below 75% may still provide representative study samples.


ACKNOWLEDGEMENTS

We thank Barrie Stokes for statistical advice. The project was funded by an Early Career Researcher Grant from The University of Newcastle.

REFERENCES

1. McAvoy BR, Kaner EFS. General practice postal surveys: a questionnaire too far? Br Med J 1996; 313: 732–734.

2. Moore M, Post K, Smith H. 'Bin bag' study: a survey of the research requests received by general practitioners and the primary health care team. Br J Gen Pract 1999; 49: 905–906.

3. Barclay S, Todd C, Finlay I, Grande G, Wyatt P. Not another questionnaire! Maximizing the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs. Fam Pract 2002; 19: 105–111.

4. Dillman DA. Mail and Telephone Surveys: The Total Design Method. Wiley: New York, 1978.

5. Howell SC, Quine S, Talley NJ. Ethics review and use of reminder letters in postal surveys: are current practices compromising an evidence-based approach? MJA 2003; 178: 43.

6. Bowling A. Data collection methods in quantitative research: questionnaires, interviews and their response rates. In Research Methods in Health: Investigating Health and Health Services, Bowling A (ed.). Open University Press: Buckingham, UK, 1997; 227–270.

7. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol 1997; 50: 1129–1136.

8. Cummings SM, Savitz LA, Konrad TR. Reported response rates to mailed physician questionnaires. Health Serv Res 2001; 35: 1347–1355.

9. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: a systematic review. Br Med J 2002; 324: 1183–1191.

10. Asch DA, Christakis NA, Ubel PA. Conducting physician mail surveys on a limited budget. A randomized trial comparing $2 bill versus $5 bill incentives. Med Care 1998; 36: 95–99.

11. Berk MI, Edwards WS, Gay NL. The use of a pre-paid incentive to convert non-responders on a survey of physicians. Eval Health Prof 1993; 16: 239–245.

12. Deehan A, Templeton L, Taylor C, et al. The effect of cash and other financial inducements on the response rate of general practitioners in a national postal study. Br J Gen Pract 1997; 47: 87–90.

13. Easton AN, Price JH, Telljohann SK, Boehm K. An informational versus monetary incentive in increasing physicians' response rates. Psychol Rep 1997; 81: 968–970.

14. Everett SA, Price JH, Bedell AW, Telljohann SK. The effect of a monetary incentive in increasing the return rate of a survey to family physicians. Eval Health Prof 1997; 20: 207–214.

15. Mullen P, Easling I, Nixon SA, et al. The cost-effectiveness of randomized incentive and follow-up contacts in a national mail survey of family physicians. Eval Health Prof 1987; 10: 232–245.

16. Halpern SD, Ubel PA, Berlin JA, Asch DA. Randomized trial of 5 dollars versus 10 dollars monetary incentives, envelope size, and candy to increase physician response rates to mailed questionnaires. Med Care 2002; 40: 834–839.

17. Donaldson GW, Moinpour CM, Bush NE, et al. Physician participation in research surveys. A randomized study of inducements to return mailed research questionnaires. Eval Health Prof 1999; 22: 427–441.

18. Leung GM, Ho LM, Chan MF, et al. The effects of cash and lottery incentives on mailed surveys to physicians: a randomized trial. J Clin Epidemiol 2002; 55: 801–807.

19. McLaren B, Shelley J. Response rates of Victorian general practitioners to a mailed survey on miscarriage: randomised trial of a prize and two forms of introduction to the research. Aust NZ J Public Health 2000; 24: 360–364.

20. VanGeest JB, Wynia MK, Cummins DS, Wilson IB. Effects of different money incentives on the return rate of a national mail survey of physicians. Med Care 2001; 39: 197–201.

21. Prosser H, Almond S, Walley T. Influences on GPs' decision to prescribe new drugs—the importance of who says what. Fam Pract 2003; 20: 61–68.

22. Jones MI, Greenfield SM, Bradley CP. Prescribing new drugs: qualitative study of influences on consultants and general practitioners. Br Med J 2001; 323: 378–381.

23. Jones MI, Greenfield SM, Bradley CP, Jowett S. Prescribing new drugs: a survey of hospital consultants in the West Midlands. Int J Pharm Pract 2000; 8: 285–290.

24. Ashworth M, Clement S, Wright M. Demand, appropriateness and prescribing of 'lifestyle drugs': a consultation study in general practice. Fam Pract 2002; 19: 236–241.

25. Stocks N, Gunnell D. A chain is as strong as its weakest link, but that link could be the subject matter of the questionnaire! Fam Pract 2002; 19: 704.

26. Veitch C, Hollins J, Worley P, Mitchell G. General practice research. Problems and solutions in participant recruitment and retention. Aust Fam Physician 2001; 30: 399–406.
