Designing Indicators

Survey Design & Data Collection


TRANSCRIPT

Page 1: Designing Indicators

Survey Design & Data Collection

Page 2: Designing Indicators

Lecture Overview

What should you measure?

What makes a good measure?

Measurement

Data Collection

Piloting

Page 3: Designing Indicators

WHAT SHOULD YOU MEASURE?

What do we measure and where does it fit into the whole project?

Page 4: Designing Indicators

What Should You Measure?

Follow the Theory of Change

• Characteristics: Who are the people the program works with, and what is their environment? (Sub-groups, covariates, predictors of compliance)

• Channels: How does the program work, or fail to work?

• Outcomes: What is the purpose of the program?

• Assumptions: What should have happened in order for the program to succeed?

List all indicators you intend to measure

• Use a participatory approach to develop indicators (existing instruments, experts, beneficiaries, stakeholders)

• Assess based on feasibility, time, cost and importance

Page 5: Designing Indicators

Methods of Data Collection

Administrative data

Surveys – household/individual

Logs/diaries

Qualitative – e.g. focus groups, RRA (rapid rural appraisal)

Games and choice problems

Observation

Health/education tests and measures

Page 6: Designing Indicators

INDICATORS

What makes a good measure?

Page 7: Designing Indicators
Page 8: Designing Indicators

The Main Challenge in Measurement: Getting Accuracy and Precision

[Figure: measurement targets plotted along two axes, "More accurate" and "More precise"]

Page 9: Designing Indicators

Terms “Biased” and “Unbiased” Used to Describe Accuracy

[Figure: "Biased" vs. "Unbiased" examples along the "More accurate" axis]

"Biased": on average, we get the wrong answer.

"Unbiased": on average, we get the right answer.

Page 10: Designing Indicators

Terms “Noisy” and “Precise” Used to Describe Precision

[Figure: "Noisy" vs. "Precise" examples along the "More precise" axis]

"Noisy": random error causes the answer to bounce around.

"Precise": measures of the same thing cluster together.

Page 11: Designing Indicators

Choices in Real Measurement Often Harder

[Figure: trade-off cases on the accuracy/precision axes]

"Noisy" but "Unbiased": random error causes the answer to bounce around, but it is right on average.

"Precise" but "Biased": measures of the same thing cluster together, but around the wrong answer.
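These four cases are easy to see in a quick simulation. The sketch below is not from the lecture: the true value of 10, the error settings, and the sample size are illustrative assumptions. It draws repeated measurements from four hypothetical measurement processes and reports each one's bias (distance of the average from the truth) and spread (standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
n = 1000  # repeated measurements of the same quantity

# Four hypothetical measurement processes (mean shift = bias, sd = noise):
processes = {
    "unbiased & precise": rng.normal(true_value, 0.5, n),
    "noisy but unbiased": rng.normal(true_value, 3.0, n),
    "precise but biased": rng.normal(true_value + 2.0, 0.5, n),
    "noisy and biased":   rng.normal(true_value + 2.0, 3.0, n),
}

for name, x in processes.items():
    # Bias: how far the average sits from the truth.
    # Spread: how much individual measurements bounce around.
    print(f"{name:20s} bias={x.mean() - true_value:+.2f}  sd={x.std():.2f}")
```

Averaging more observations shrinks the noise but never removes the bias, which is why a "precise but biased" instrument can be the more dangerous choice.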

Page 12: Designing Indicators

The Main Challenge in Measurement: Getting Accuracy and Precision

[Figure: measurement targets plotted along two axes, "More accurate" and "More precise"]

Page 13: Designing Indicators

Accuracy

In theory:

• How well does the indicator map to the outcome? (e.g. how well do IQ tests capture intelligence?)

In practice: Are you getting unbiased answers?

• Social desirability bias (response bias)

• Anchoring bias (Strack and Mussweiler, 1997)

Did Mahatma Gandhi die before or after age 9?

Did Mahatma Gandhi die before or after age 140?

• Framing effect

Given that violence against women is a problem, should we impose nighttime curfews?

Page 14: Designing Indicators

Precision and Random Error

In theory: the measure is consistent and precise, but not necessarily valid.

In practice, watch for:

• Length, fatigue

• “How much did you spend on broccoli yesterday?” (as a measure of annual broccoli spending)

• Ambiguous wording (definitions, relationships, recall period, units of question)

e.g. definition of terms – ‘household’, ‘income’

• Recall period/units of question

• Type of answer – open/closed

• Choice of options for closed questions

Likert (i.e. Strongly disagree, disagree, neither agree nor disagree, . . .)

Rankings

• Surveyor training/quality

Page 15: Designing Indicators

MEASUREMENT

Challenges of Measurement

Page 16: Designing Indicators

The Basics

Data that should be easy?

• E.g. age, # of rooms in house, # in HH

What is the survey question identifying?

• E.g. Are HH members people who are related to the household head? People who eat in the household? People who sleep in the household?

Pre-test questions in local languages

Page 17: Designing Indicators

The Basics: Units of Observation

Choosing Modules: Units of Observation

Often this is simple: for example, sex and age are clearly attributes of individuals. Roofing material is an attribute of the dwelling.

Not always obvious: to collect information on credit, one could ask the household about:

• All current outstanding loans

• All loans taken and repaid in the last one year

• All "borrowing events" (all the times a household tried to borrow, whether successfully or not)

Choice is determined by expected analytical use and reliability of information

Page 18: Designing Indicators

The Basics: Deciding Who to Ask

“Target respondent”: should be the most informed person for each module. Respondents can vary across modules.

For example: to measure the use of teaching-learning materials, should we survey the headmaster? Teachers? The school management committee (SMC)? Parents? Students?

The choice of modules determines the target respondent, and the target respondent shapes the design of the questions.

Page 19: Designing Indicators

What is hard to measure in a survey?

(1) Things people do not know very well

(2) Things people do not want to talk about

(3) Abstract concepts

(4) Things that are not (always) directly observable

(5) Things that are best directly observed

Page 20: Designing Indicators

How much tea did you consume last month?

A. <2 liters

B. 2-5 liters

C. 6-10 liters

D. >10 liters

Page 21: Designing Indicators

1. Things people do not know very well

What: Anything the respondent must estimate, particularly across time; these answers are prone to recall error and poor estimation

• Examples: distance to health center, profit, consumption, income, plot size

Strategies:

• Consistency checks – How much did you spend in the last week on x? How much did you spend in the last 4 weeks on x?

• Multiple measurements of the same indicator – How many minutes does it take to walk to the health center? How many kilometers away is the health center?
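Both strategies can be turned into automated checks once the data are in. Below is a minimal sketch (the column names, example values, and tolerance thresholds are illustrative assumptions, not from the lecture) that flags respondents whose one-week and four-week spending reports disagree, and whose reported walking time and distance imply an implausible walking speed:

```python
import pandas as pd

# Hypothetical survey extract; column names and values are illustrative.
df = pd.DataFrame({
    "spend_1wk": [10, 25, 5, 40],      # spending on x, last week
    "spend_4wk": [42, 30, 21, 60],     # spending on x, last 4 weeks
    "walk_min":  [15, 90, 10, 30],     # minutes to walk to health center
    "dist_km":   [1.0, 2.0, 4.0, 2.5]  # kilometers to health center
})

# Consistency check: the 1-week report scaled to 4 weeks should be in the
# same ballpark as the 4-week report (here: within a factor of 2).
ratio = (df["spend_1wk"] * 4) / df["spend_4wk"]
df["spend_flag"] = (ratio < 0.5) | (ratio > 2.0)

# Multiple measurements: time and distance imply a walking speed,
# which should be roughly plausible (say 2-7 km/h).
speed_kmh = df["dist_km"] / (df["walk_min"] / 60)
df["walk_flag"] = (speed_kmh < 2.0) | (speed_kmh > 7.0)

print(df[df["spend_flag"] | df["walk_flag"]])  # respondents to revisit
```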

Page 22: Designing Indicators

How many cups of tea did you consume yesterday?

A. 0

B. 1-3

C. 4-6

D. >6

Page 23: Designing Indicators

What is Hard to Measure?

(1) Things people do not know very well

(2) Things people do not want to talk about

(3) Abstract concepts

(4) Things that are not (always) directly observable

(5) Things that are best directly observed

Page 24: Designing Indicators

How frequently do you yell at your partner?

A. Daily

B. Several times per week

C. Once per week

D. Once per month

E. Never

Page 25: Designing Indicators

2. Things people don’t want to talk about

What: Anything socially “risky” or something painful

• Examples: sexual activity, alcohol and drug use, domestic violence, conduct during wartime, mental health

Strategies:

• Don’t start with the hard stuff!

• Consider asking questions in third person

• Always ensure comfort and privacy of respondent

• Think of innovative techniques – vignettes, list randomization
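List randomization (the item-count technique) deserves a quick illustration. In the simulation below, a sketch under assumed numbers (four innocuous items, a true prevalence of 30%) rather than a real study, a control group counts how many of four innocuous statements apply to them, a treatment group counts over the same four plus the sensitive statement, and the difference in mean counts estimates prevalence without any individual revealing their own answer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                  # respondents per arm
true_prevalence = 0.30   # assumed share engaging in the sensitive behavior

# Control: count of 4 innocuous statements that apply (each ~50% likely).
control = rng.binomial(4, 0.5, n)
# Treatment: same 4 innocuous items plus the sensitive item.
treatment = rng.binomial(4, 0.5, n) + rng.binomial(1, true_prevalence, n)

# Prevalence estimate: difference in mean counts between arms.
est = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

The price of the privacy protection is statistical: the innocuous items add noise, so list experiments need considerably larger samples than direct questions.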

Page 26: Designing Indicators

How frequently does your partner yell at you?

A. Daily

B. Several times per week

C. Once per week

D. Once per month

E. Never

Page 27: Designing Indicators

What is Hard to Measure?

(1) Things people do not know very well

(2) Things people do not want to talk about

(3) Abstract concepts

(4) Things that are not (always) directly observable

(5) Things that are best directly observed


Page 28: Designing Indicators

“I feel more empowered now than last year”

A. Strongly disagree

B. Disagree

C. Neither agree nor disagree

D. Agree

E. Strongly agree

Page 29: Designing Indicators

3. Abstract concepts

What: Potentially the most challenging and interesting type of difficult-to-measure indicator

• Examples: empowerment, bargaining power, social cohesion, risk aversion

Strategies: three key steps when measuring "abstract concepts"

• Define what you mean by your abstract concept

• Choose the outcome that you want to serve as the measurement of your concept

• Design a good question to measure that outcome

There is often a choice between a self-reported measure and a behavioral measure – both can add value!

Page 30: Designing Indicators

What is Hard to Measure?

(1) Things people do not know very well

(2) Things people do not want to talk about

(3) Abstract concepts

(4) Things that are not (always) directly observable

(5) Things that are best directly observed

Page 31: Designing Indicators

4. Things that aren’t Directly Observable

What: You may want to measure outcomes that you cannot ask about directly or observe directly

• Examples: corruption, fraud, discrimination

Strategies:

• Audit studies, e.g. CVs and racial discrimination

• Multiple sources of data, e.g. inputs of funds vs. outputs received by recipients, pollution reports by different parties

• Don't worry – plenty of clever people have tackled these problems before you, so do literature reviews!

Page 32: Designing Indicators

5. Things that are Best Directly Observed

What: Behavioral preferences, anything that is more believable when done than said

Strategies:

• Develop detailed protocols

• Ensure data collection of behavioral measures is done under the same circumstances for all individuals

Page 33: Designing Indicators

DATA COLLECTION

Page 34: Designing Indicators

Use of Data

Reporting

• On Inputs and Outputs (Achievement of physical and financial targets)

Monitoring

• Of Processes and Implementation (Doing things right)

Evaluation

• Of Outcomes and Impact (Doing the right thing)

Management and Decision Making

• Using relevant and timely information for decision making (reporting and monitoring for mid-term correction; evaluation for planning and scale-up)

ALL OF THE ABOVE DEPEND ON THE AVAILABILITY OF RELIABLE, ACCURATE AND TIMELY DATA

Page 35: Designing Indicators

Problems in Data Collection

Data reliability (Will we get the same data if we collect it again? See the back-check sketch after this list.)

Data validity (Are we measuring what we say we are measuring?)

Data integrity (Is the data free of manipulation?)

Data accuracy/precision (Is the data measuring the “indicator” accurately?)

Data timeliness (Are we getting the data in time?)

Data security/confidentiality (Loss of data / loss of privacy)
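The reliability question at the top of this list is commonly assessed with back-checks: re-interviewing a random subsample and comparing the two rounds. A minimal sketch (the data and the ±2 tolerance are illustrative assumptions):

```python
import numpy as np

# Hypothetical back-check data: the same indicator collected twice
# from the same 8 respondents (original interview vs. back-check).
round1 = np.array([12, 7, 30, 15, 9, 22, 18, 11], dtype=float)
round2 = np.array([13, 7, 28, 16, 10, 25, 18, 12], dtype=float)

# Test-retest reliability: correlation between the two rounds,
# plus a simple agreement rate within a +/- 2 unit tolerance.
r = np.corrcoef(round1, round2)[0, 1]
agree = np.mean(np.abs(round1 - round2) <= 2)
print(f"test-retest correlation: {r:.2f}, agreement within 2: {agree:.0%}")
```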

Page 36: Designing Indicators

Reliability of Data Collection

Collecting "good" data requires considerable effort and thought.

We need to make sure that the data collected are precise and accurate, to avoid false or misleading conclusions.

The survey process:

• Design of questionnaire → survey printed on paper/electronic → filled in by an enumerator interviewing the respondent → data entry → electronic dataset

Where can this go wrong?
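One classic safeguard at the data-entry step (not named in the lecture, but standard practice) is double entry: two operators key the same paper forms independently and every mismatch is resolved against the original. A minimal sketch, with illustrative field names and values:

```python
import pandas as pd

# Two independent entries of the same paper questionnaires.
entry1 = pd.DataFrame({"id": [1, 2, 3], "age": [34, 27, 41], "rooms": [2, 3, 1]})
entry2 = pd.DataFrame({"id": [1, 2, 3], "age": [34, 72, 41], "rooms": [2, 3, 2]})

# Align on respondent id and list every cell where the entries disagree.
mismatches = entry1.set_index("id").compare(entry2.set_index("id"))
print(mismatches)  # cells to resolve against the paper form
```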

Page 37: Designing Indicators

Reliability of Survey Data

Start with a pilot

Paper vs. electronic survey

Surveyors and supervision

Following up the respondents

Problems with respondents

Neutrality

Page 38: Designing Indicators

PILOTING

Questionnaire is ready – so what’s next?

Page 39: Designing Indicators

Importance of Piloting

Finding the best way to obtain the required information

• choice of respondent

• type and wording of questions

• order of sections

Piloting and fine-tuning different response options and components

Understanding time taken, respondent fatigue, and other constraints

Page 40: Designing Indicators

Steps in Piloting

ALWAYS allow time for piloting and back-and-forth between the field team and the researchers

Two phases of piloting

Phase 1: Early stages of questionnaire development

• Understand the purpose of the questionnaire

• Test and develop new questions; adapt questions to context

• Build options and skips

• Re-work, share, and re-test

• Build familiarity, adapt local terms, get a sense of timing

Page 41: Designing Indicators

Steps in Piloting

Phase 2: Field testing just before surveying

• Final touches to translation, questions, and instructions

• Keep it as close to the final survey as possible

Page 42: Designing Indicators

Things to Look for During the Pilot

Comprehension of questions

Ordering of questions - priming

Variation in responses

Missing answers (see the sketch after this list)

More questions for clarification? Cut questions? Consistency checks?

Is the choice of respondent appropriate?

Respondent fatigue or discomfort

Need to add or correct filters? Need to add clear surveyor instructions?

Is the format (phone or paper) user-friendly? Does it need to be improved?
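Several of these checks, variation in responses and missing answers in particular, can be run mechanically on the pilot data. A minimal sketch, assuming one column per question (the question names, values, and 20% missingness threshold are illustrative):

```python
import pandas as pd

# Hypothetical pilot responses, one column per question.
pilot = pd.DataFrame({
    "q1": [1, 1, 1, 1, 1, 1],           # no variation: uninformative?
    "q2": [3, None, 2, None, None, 4],  # many missing: comprehension issue?
    "q3": [1, 4, 2, 5, 3, 2],
})

summary = pd.DataFrame({
    "missing_share": pilot.isna().mean(),  # share of missing answers
    "n_distinct": pilot.nunique(),         # distinct non-missing values
})
# Flag questions worth revisiting: too much missingness or no variation.
summary["flag"] = (summary["missing_share"] > 0.2) | (summary["n_distinct"] <= 1)
print(summary)
```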

Page 43: Designing Indicators

Discuss Potentially Difficult Questions with the Respondent

Example 1: Simplify/clarify questions

Do you use Student Evaluation Sheets in your school?

• Yes

• No

• Don’t know/Not sure

• No response

They might not know it by this name (show them a sample)

You may need to break it up into several questions to get at what you want

• Do you have them?

• Have you been trained on how to use them?

• Do you use them?

Page 44: Designing Indicators

Discuss Potentially Difficult Questions with the Respondent

Example 2: Ordering questions and priming

1. Yesterday, how much time did you spend cooking, cleaning, playing with your child, and teaching/doing homework with your child?

2. Do you think it’s important for mothers to play with children?

3. Do you think mothers or fathers should be more responsible for a child’s education?

If Questions 2 and 3 had come before Question 1, they could have primed (and thus biased) the answers – the order and wording of questions matter.

Page 45: Designing Indicators

Importance of Language and Translation

The local language is probably not English, which makes the wording of certain questions tricky

• But people may be familiar with “official” words in English rather than the local language

Translate

• Ensures that every surveyor knows the exact wording of the questions, instead of having to translate on the fly

Back-translate

• Helps clarify when local-language words are used that don’t have the same meaning as the original English

Page 46: Designing Indicators

Documentation and Feedback

Notes – time, difficulties, required or suggested changes

Meetings to share inputs

Draft document

Keep different versions of the questionnaire