Client Outcome Assessments: Supplementing Routine Project Data to Better Understand Outcomes of Interest

PRACTITIONER GUIDE




contents

acronyms
acknowledgements
foreword
contact
how to use this guide
introduction
how is this guide intended to be used?
how is this guide set up?
component 1: developing the research question
component 2: assessment design
component 3: ethical considerations
component 4: preparation
component 5: logistics in the field
component 6: analyzing the data
annex: practitioner toolkit

Page 3: Client Outcome Assessmentstheliftproject.org/wp-content/uploads/2017/04/ART... · typical donor-funded project budgets and timelines than experimental ... research designs such as

2 Client Outcome Assessments // Practitioner Guide

acronyms

ART Antiretroviral Therapy

CBO Community Based Organization

DRC Democratic Republic of the Congo

ES/L/FS Economic Strengthening, Livelihoods and Food Security

HHS Household Hunger Scale

HTS HIV Testing Services

IRB Institutional Review Board

LIFT Livelihoods and Food Security Technical Assistance II

M&E Monitoring and Evaluation

NGO Nongovernmental Organization

ODK Open Data Kit

OHA USAID Office of HIV/AIDS

OVC Orphans and Vulnerable Children

PEPFAR President’s Emergency Plan for AIDS Relief

PI Principal Investigator

PLHIV People Living with HIV

PPI Progress out of Poverty Index

QI Quality Improvement

RCTs Randomized Controlled Trials

RN Referral Network

TA Technical Assistance

UNAIDS Joint United Nations Programme on HIV/AIDS

USAID United States Agency for International Development

USG United States Government

VFS Vulnerability and Food Security


acknowledgements

This publication is made possible by the generous support of the American people through the United States Agency for International Development (USAID) under Cooperative Agreement No. AID-OAA-LA-13-00006. The contents are the responsibility of FHI 360 and do not necessarily reflect the views of USAID or the United States Government.

This practitioner guide was prepared by Carlo Abuyuan (Mickey Leland International Hunger Fellow), Mandy Swann (Technical Advisor), Claire Gillum (Technical Officer), Zach Andersson (Monitoring & Evaluation Specialist), Sonja Horne (Knowledge Management Officer) and Clinton Sears (Project Director).

foreword

Clients affected by HIV and AIDS require a holistic system of support that integrates clinical healthcare with other services that mitigate the impact of HIV and address structural barriers to care. A formalized network of clinical and community-based service providers is crucial for increasing access to a range of HIV, nutrition, and economic strengthening, livelihoods and food security (ES/L/FS) services for this population, which can increase their long-term retention in care and overall wellbeing. The Livelihoods and Food Security Technical Assistance II (LIFT) project supports the establishment and effectiveness of such multi-sectoral referral networks.

This guide describes the implementation of lean assessment¹ methods used under LIFT to understand client- and household-level outcomes associated with participation in the referral network, and provides relevant tools to adapt for use in other projects conducting similar assessments. The guide is intended for implementers and operations researchers who seek guidance on layering lean data collection methods onto ongoing programs. The tools included as annexes to this guide were developed for use under the LIFT project and can be modified as necessary. These tools have been tested and refined from experiences in the Democratic Republic of the Congo (DRC), Malawi, Tanzania, Lesotho, and Zambia. This guide will continue to be improved and refined, as needed.

¹ Lean Data is a concept developed by Acumen that focuses on streamlined, low-cost methods for understanding social performance within social enterprises: http://acumen.org/ideas/lean-data/. The concept has been adapted by LIFT and others to describe efficient measurement methods that capitalize on existing activities and project monitoring to assess beneficiary outcomes, similar to implementation science.

contact

Please send any feedback to [email protected].


how to use this guide


Select the tabs to navigate through the different components of this interactive guide. Text in dark blue links to an external page; text in green links to another element within this guide.


introduction

The Livelihoods and Food Security Technical Assistance II (LIFT) project was initiated by the United States Agency for International Development (USAID) Office of HIV/AIDS (OHA) to provide technical assistance and strategic support to United States Government (USG) agencies, their implementing partners, and other public, private and civil society partners to improve the food and livelihood security of vulnerable households, with a particular focus on people living with HIV and AIDS (PLHIV), orphans and vulnerable children (OVC) and their caregivers. As with the LIFT project overall, the focus of this guide is multi-sectoral, with an emphasis on understanding client outcomes resulting from referrals to economic strengthening, livelihoods and food security (ES/L/FS), HIV/AIDS and nutrition interventions.

Bi-directional clinic-to-community referrals work to improve the health and wellbeing of individuals and the communities in which they live. The President's Emergency Plan for AIDS Relief (PEPFAR) 3.0 commitment to support the Joint United Nations Programme on HIV/AIDS (UNAIDS) 90-90-90 global goals² for sustainable control of the HIV epidemic increases the importance of referral systems that enable the health sector to function as part of a larger system of support for PLHIV. This integration has the potential to improve uptake of HIV testing services (HTS), promote linkage to care for those who are positive, and support adherence and retention in long-term HIV care and treatment by addressing common economic and structural barriers to care.

² By 2020, 90% of PLHIV will know their HIV status; 90% of all people with diagnosed HIV infection will receive sustained antiretroviral therapy; and 90% of all people receiving antiretroviral therapy will have viral suppression.

client outcomes assessment summary

• Lean assessments allow projects to measure outcomes at the beneficiary (client or household) level; LIFT has used them to measure differences in antiretroviral therapy (ART) adherence, and changes in food security and economic vulnerability over time.

• They can be layered onto existing projects to supplement the routine data already being collected.

• Lean assessments are more aligned to typical donor-funded project budgets and timelines than experimental studies or true impact evaluations.


LIFT has worked with existing service providers to form referral networks (RNs) in six countries. Strengthening linkages between different kinds of services benefits vulnerable populations, including those living with and affected by HIV and AIDS, who often have multiple interrelated needs. While LIFT is not funded to conduct experimental studies or other highly rigorous research, the project has a mandate to build evidence around the effectiveness of these referral networks in improving clients' health, food security, and economic outcomes. To address this, once RNs are well established, LIFT employs lean assessment methods (efficient research approaches that measure correlations between referrals and specific outcomes of interest) to collect more valid measures of client outcomes than would be possible through routine project reporting. These assessments allow the project to determine whether receiving and acting on a referral to health, economic, food security or other social services is associated with specific health, food security, or economic benefits for these clients or their households.

how is this guide intended to be used?

While rigorous research designs such as randomized controlled trials (RCTs) and quasi-experimental studies provide robust evidence linking interventions to outcomes of interest, most donor-funded projects do not have the resources or mandate to implement them. At the same time, these projects are expected to measurably demonstrate how their work has affected clients. This practitioner guide describes practical alternatives to expensive and time-intensive research methods, which can be layered onto existing programming and routine data collection. These methods will not allow attribution of outcomes to the project through causal inference, but instead aim to demonstrate an association between participation in a program or service and client- or household-level changes. This is intended as a practical guide for program implementers and operations researchers to conduct assessments in order to understand outcomes resulting from their project activities, and it highlights important considerations for conducting these assessments with vulnerable groups such as PLHIV, food insecure populations, and OVC caregivers. The guide includes six components to lead users through critical steps and considerations in the design, planning and implementation of these kinds of assessments: 1) developing the research question, 2) assessment design, 3) ethical considerations, 4) preparation, 5) logistics in the field, and 6) data analysis. It also provides an array of tools that can be used and adapted, highlighting unique considerations for these kinds of assessments in multi-sectoral programming.

LIFT developed this guide based on its experience conducting lean assessments to understand outcomes of referral network clients and their households. The two assessments conducted by LIFT that are referenced throughout this guide are:


• ART Adherence and Retention Check (ART Check): This observational cohort assessment aimed to understand whether receiving and acting on a referral from a health facility to a community-based supportive service such as ES/L/FS was associated with better ART adherence among PLHIV. Clinical ART records of PLHIV referral clients, as well as of a comparison group, were reviewed at multiple time points to determine whether referral clients were more or less adherent, and more or less likely to be retained in care, over time. This assessment was conducted in the DRC, Lesotho, Malawi, Tanzania, and Zambia.

• Vulnerability and Food Security Assessment (VFS Assessment): This observational pre-post assessment without a comparison group used the globally validated Household Hunger Scale (HHS), along with a poverty assessment tool modeled after the Progress out of Poverty Index (PPI), to assess whether referral client households experienced changes in food security or economic vulnerability after receiving and completing a referral. This assessment was conducted in Lesotho.

This guide is not intended to be a rigid instruction manual for lean assessment implementation; rather, it provides tools and guidance that can, and should, be adapted to best fit each program and context. Using the guide as a reference, we hope that practitioners will develop new ideas to inform the field of operations research for client outcomes.

how is this guide set up?

Component 1: provides guidance on developing a research question that is important to your program and its stakeholders, which will guide the assessment.

Component 2: walks users through critical questions and considerations to inform the assessment design, including ways to maximize resources and use research methods that can strengthen the rigor of the assessment.

Component 3: reviews important ethical considerations for these assessments.

Component 4: provides an explanation of how to prepare technically and operationally for lean assessments.

Component 5: draws heavily on LIFT's practical experiences to discuss logistics in the field and best practices for implementation of the assessment.

Component 6: discusses the process for analyzing data from the assessment and the creation of key deliverables.


component 1: developing a research question

This section provides guidance on developing a research question that is important to your program and its stakeholders.

A research question is a specific and empirically testable question that your assessment will aim to answer. Defining a research question is an important initial step, as it should guide both the design and implementation of your assessment. Projects are often tempted to begin an assessment by identifying data points that are accessible or interesting, without first defining how the data will be used by the project. This is not recommended because it can cause confusion among stakeholders about the purpose of the assessment, and in extreme cases it wastes resources on collecting and analyzing data that cannot answer a question relevant to the project or its stakeholders. Thinking through a coherent research question guides the design of an assessment that will yield useful results, and provides an important touchstone throughout the implementation of the assessment that can help keep the activity on track.

In developing your research question it may be helpful to try answering the following questions:

• Is there a specific question your project needs or wants to answer?
• What would you like your project to be able to demonstrate?
• Are there aspects of your project that you are curious about that are not currently being measured?
• Are there ways your project can contribute to closing a gap in evidence related to the work you do?

For programmatic or operations research, which characterizes lean assessments, the scope of your research question should be consistent with the technical and geographic focus of your project, rather than something very broad. To guide the assessment design and implementation, you should already be thinking about practical limitations such as time, cost, ethical considerations, required approvals, and expertise in both technical content and data analysis.³ These issues are covered in more detail under Component 2, but should be considered at this initial stage as well. For example, if your research question focuses on an outcome that will take at least two years to assess, but your project only has one year left, you should reformulate the question to have a more appropriate scope. Below are some examples of research questions that can be used to guide the development of a specific, testable question that is aligned to your project.

³ Neuman (2006), Social Research Methods: Qualitative and Quantitative Approaches

Table 1: Examples of Good and Bad Research Questions

Bad research questions, and what is wrong with them:

• "Savings groups and food security": just a topic, not a research question
• "Should referrals be required for all ART clients in Malawi?": not empirically testable
• "Has food security in DRC improved over the past 5 years?": too broad for programmatic/operations research
• "How does the project influence ART adherence?": too vague
• "Do referrals lead to high adherence among ART clients?": still not specific enough. Which clients are we interested in? What is the threshold for "high adherence"?

Good research questions:

• Is participation in savings groups associated with changes in food security among HIV-positive clients in Mkushi, Zambia?
• Do ART clients in Iringa, Tanzania who got a referral through LIFT have better ART adherence than clients who did not get a referral?

In addition, projects should consider what the existing literature says about your research topic or question. This can be done by conducting a literature review⁴ to understand:

• Whether there is any previous research on the topic you plan to study
• If so, what does it tell you about the program/intervention and outcome(s) you are interested in?

⁴ Resources on conducting a literature review can be found via the following links: http://advice.writing.utoronto.ca/types-of-writing/literature-review/; http://writingcenter.unc.edu/handouts/literature-reviews/

The literature review should focus on your research topic more broadly, rather than the narrow focus of your research question. For example, your literature review could include research conducted within Sub-Saharan Africa that studied savings groups and measured a food security outcome. Once you have an understanding of the existing literature, consider how it will inform your own research question. For example, if the literature shows conclusively that savings groups have a positive effect on food security in similar contexts, you might revise your research question to focus on a different, or more specific, outcome, or include aspects of your project context that make it distinct from the studies in the review (e.g. there was a drought).

Finally, think about how you will use the answers to your research question. Programmatic research can simply add to the body of knowledge in a particular area, but more often it should have a practical application – such as improving program activities, planning for expansion, or building awareness of project successes or learning. The project should also consider the relevance of a research question to other stakeholders, including donors, country and local partners, and beneficiaries. How might the assessment findings be useful to stakeholders, beyond what is planned by the project? If you cannot clearly articulate how the answer to your research question will be useful, you should develop a different research question.


component 2: assessment design

This section walks users through critical questions and considerations to inform the assessment design, including ways to maximize resources and use research methods that can strengthen the rigor of the assessment.

Design Considerations

There are a wide range of research designs that could be employed for lean client outcome assessments. This guide does not advocate one design over another, but walks users through important considerations that must be well thought out before the assessment begins. The final design will ultimately depend on the project's context, resources and mandate but, within that, should aim to maximize the validity and reliability of findings.

key definitions

• Internal validity: The degree to which errors internal to the design of the study have been mitigated such that there are no alternative explanations of the results (i.e. the independent variable alone affects the dependent variable).⁵

• External validity: The extent to which the findings of a study are likely to be replicable across multiple contexts; also referred to as generalizability.

• Measurement validity: The extent to which a specific indicator is well suited to measuring a particular outcome or construct.⁶

• Reliability: Broadly, this is how accurately the findings of a study would be replicated in a second identical piece of research. It also refers to the consistency of a measurement tool when applied to different subjects or by different researchers.⁷

⁵ Neuman (2006), Social Research Methods: Qualitative and Quantitative Approaches

⁶ The United Kingdom's Department for International Development, Assessing the Strength of Evidence (2014). https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/291982/HTN-strength-evidence-march2014.pdf

⁷ Association for Qualitative Research: http://www.aqr.org.uk/glossary

Start the design process by considering several key questions, beginning with the following.

What Kinds of Data Could Help to Answer the Research Question?

There may be a number of data points that could answer your question.


Thinking about the example of ART adherence, you could collect biological measures such as CD4 counts or viral loads, review clinical records to assess medication pick-up, use electronic pill caps, ask clients directly about their adherence, or ask stakeholders working with your project whether they have observed changes in client adherence.

Some of these would require primary data collection, such as through client surveys or biological tests, while for others you would need to review and capture data that have already been collected, such as medication pick-up or biological data from existing medical records. Depending on your research question, multiple sources or types of data may be preferable or even necessary – for instance, medical records used to assess adherence could be coupled with qualitative information from clients to understand why or how changes in adherence are occurring.

Prioritizing which Data Sources to Use

Once you have thought through the possible sources of data that could answer your research question, the project must prioritize and select from these options. Key considerations include:

• Which data are more or less valid? Continuing with the example above of ART adherence, it is possible to ask clients about their own adherence or to review clinical records. While both measures can assess adherence, clinical records are generally considered more accurate and objective than self-reported data. Particularly for a topic like this, where clients have a clear understanding that adherence is good and non-adherence is bad, the potential to introduce social desirability bias – whereby a client provides answers that conform to social norms or expectations – is high. In these cases, it is not uncommon for clients to over-report their adherence to ART and avoid admitting to the "socially undesirable" behavior of non-adherence. Similarly, for outcomes of food security or economic security, social desirability bias could lead respondents to indicate the program has helped them in these areas even when it has not. At the same time, self-reported information can be incredibly valuable, and using existing tools that have been validated for your outcome, such as the HHS and PPI, can increase the accuracy of the data compared to a non-validated questionnaire developed by the project. If you are asking about less sensitive outcomes, such as HIV prevention knowledge or perceptions on educating girls, social desirability bias may be less of a concern. However, many other factors can also influence client responses, such as the recall period (how far back you are asking them to report on their experience), how questions are worded, or the perception that they might get additional services if they answer a certain way. These factors can affect data validity and reliability, but some can be mitigated through other aspects of the assessment design and planning, and are discussed throughout this guide.

o When it comes to clinical records, as noted above, these are a more objective measure than self-reported data, but they are only valid if data are accurately recorded by all clinics in the assessment, for all clients, at each visit. If clinical records are not well kept at the facilities where you are planning the assessment, measurement validity is reduced. Within the clinical records there are also data points that may be more or less valid at measuring adherence. For instance, behavioral data like appointment attendance and picking up medications do not directly indicate whether the client is taking that medication. Biological records such as CD4 count or viral load – if measured and recorded systematically – may provide a more accurate reflection of adherence.

• Which data are more or less reliable? While validity deals with accuracy, reliability deals with consistency. If you measure multiple subjects with the same characteristic (e.g. people who have not picked up their ART medication for two months), will your data provide the same result for these subjects? LIFT used clinical records in its ART Check, though prior to implementation we found that some sites defined non-adherence differently: some indicated a person was non-adherent after only one month of not picking up their medications, while others defined non-adherence as missing three months of medication pick-ups. If we had not designed our tools to account for these differences, we would have had major challenges with the reliability of our data, whereby people who had not come for medication in two months would sometimes be counted as adherent and sometimes as non-adherent, depending on the site. Self-reported data can be particularly subject to reliability issues, whereby interviewers might ask questions slightly differently (especially with less structured questionnaires), and/or respondents might interpret questions differently depending on their experience, both leading to inconsistencies in the data. Carefully designing questions/instruments, as well as data collection and analysis procedures, can minimize risks to reliability from self-reported data. While standardized measurement tools such as scales and biomedical tests tend to be more reliable, inconsistencies can occur if, for instance, multiple scales are used in the study but one of them has a broken spring and is not calibrated correctly, or multiple labs process biological samples differently.

• What is feasible for the project to collect? After considering the validity and reliability of potential data sources, it is important to consider what is feasible for the project to collect. Factors such as time, relationships, ethics and many others come into play. These topics will be covered in more detail throughout this guide, but at this point in the process you need to confirm the feasibility of collecting the data points in which you are most interested. For example, you may feel clinical records are the most appropriate for your assessment, but if your project has not had any relationship with the Ministry of Health or the proposed clinics to date, it may be very difficult and time consuming to build trust and joint agreement on the activity and gain access to their records. Similarly, you may be interested in using data that will only be available in 18 months when your project ends before that time, making this data source unfeasible.


Depending on your research question, a mixed methods design, in which both quantitative and qualitative data are collected, might be the most appropriate. LIFT often used qualitative data in lean assessments to help augment or contextualize the results from the quantitative analyses. Including qualitative data collection can be useful if projects are seeking to understand the causal pathways by which an intervention may, or may not, have contributed to the study outcomes. This can be achieved by collecting qualitative data from beneficiaries or other stakeholders about perceived benefits or their opinions about a project or specific intervention. As with quantitative components, qualitative assessment components should be carefully designed to minimize bias and ensure appropriate rigor.

Once the data source(s) is selected, the next step is to clearly and objectively define your outcome variable(s) based on the selected source. For clinical records, you might define a client as "adherent" if they have not missed any of the last six monthly medication pick-ups. For client interviews, it could be defined based on a 30-day recall and establishing a cut-off point for the maximum number of missed doses. If using existing records or data collection tools, it is important to review and align your definitions with how the information of interest is being captured.
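It can help to express such a definition as an explicit rule that is applied identically at every site. The sketch below is a minimal illustration in Python, assuming a simple record format with one expected pick-up per calendar month; the function name and data layout are hypothetical, not part of the LIFT tools.

from datetime import date

def is_adherent(pickup_dates, as_of, window_months=6):
    # A client is "adherent" if no monthly medication pick-up was
    # missed in the window_months before as_of (one pick-up
    # expected per calendar month).
    expected = set()
    year, month = as_of.year, as_of.month
    for _ in range(window_months):
        month -= 1
        if month == 0:
            year, month = year - 1, 12
        expected.add((year, month))
    observed = {(d.year, d.month) for d in pickup_dates}
    return expected <= observed

pickups = [date(2016, m, 15) for m in range(1, 7)]  # picked up Jan-Jun 2016
print(is_adherent(pickups, as_of=date(2016, 7, 1)))  # True

Writing the threshold as a parameter (window_months) also makes it easier to reconcile sites that define non-adherence over different windows, as described under reliability above.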

Prospective or Retrospective Data Collection

Once you have selected your data source and outcome variables, the project must determine whether to use prospective or retrospective data collection methods, or a combination of both. Prospective data collection follows clients over time to assess changes in the outcome(s) of interest after the start of the study. Conversely, retrospective methods use data from past records to look for changes that occurred prior to the start of the study, and do not follow up with clients into the future. Records from public service providers such as health facilities, schools, and other social services are often the most readily available options, though data from ongoing or past programs implemented by nongovernmental organizations (NGOs) or community-based organizations (CBOs) may also be useful. Retrospective assessments can be much more expedient, as projects will not need to wait for outcomes to occur as they would in prospective studies, and may be more efficient for projects that have been implementing for a while. On the other hand, retrospective methods can be problematic for project-based assessments because you are limited by what data are already available in previously collected records, and often there are no records available that are relevant to your project and research question.

LIFT’s ART Check used a combination of both prospective and retrospective methods, whereby past clinical records were reviewed for up to

18 months prior to the start of the study, then those clients were also followed over time to assess changes for up to 12 months after the

start of the study, to understand adherence patterns before and after their referral. The VFS Assessment only used prospective data collection,

asking a series of questions of all clients at enrollment into the referral system, and then following up 12 months later to assess changes.


Cross-sectional or Longitudinal

Cross-sectional studies collect data from only a single time point. For example, a program providing OVC services might assess the school performance of participants at the end of the project. Longitudinal assessments, on the other hand, collect data at multiple points in time – often over several months or years. This could include collecting data on participant school performance every 6 months over the life of the intervention (e.g. 3-5 years). For a retrospective study, this would look at past school performance among program participants at similar intervals over the same time period, but after the end of the intervention. Cross-sectional studies provide a snapshot which may or may not be representative of school performance throughout the intervention period (e.g. maybe there was a teacher strike and school performance dropped just before the assessment). Longitudinal studies can highlight trends throughout the implementation period and provide a more complete picture of school performance over time.

At this point in the process you should determine how many data collection points you are planning and with what frequency. As with most of the considerations in this guide, the number of data points and frequency of data collection will depend on several logistical factors, such as budget and time. LIFT started the ART Check with the intention of only completing one round of data collection. However, given the interest in this area from our funder, we were able to quickly allocate more resources and expand the assessment to follow the same cohort of clients at regular intervals over time. There may also be other, more technical considerations in determining the timing and frequency of data collection. For LIFT's VFS Assessment, it was important to take into account the seasonality of food security. If the initial assessment was done in the lean season and a follow-up was done post-harvest, the results would likely be quite different but might have very little to do with the project. It was important to repeat the HHS at the same time of year as the original assessment in order to minimize bias in the results. Thinking about how your outcome(s) of interest might change over time, independent of your intervention, can help you to determine when and how often to collect data.
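For reference, the HHS combines three frequency-coded items (0 = never, 1 = rarely or sometimes, 2 = often) into a score from 0 to 6 that falls into standard hunger categories. The sketch below shows this standard scoring; the function name and example responses are illustrative only.

def hhs_score(responses):
    # responses: the three HHS items, each coded 0 (never),
    # 1 (rarely/sometimes) or 2 (often); total score ranges 0-6.
    assert len(responses) == 3 and all(r in (0, 1, 2) for r in responses)
    score = sum(responses)
    if score <= 1:
        return score, "little to no household hunger"
    if score <= 3:
        return score, "moderate household hunger"
    return score, "severe household hunger"

# Comparing the same household at enrollment and one year later,
# interviewed at the same point in the agricultural season:
print(hhs_score([1, 1, 0]))  # (2, 'moderate household hunger')
print(hhs_score([0, 1, 0]))  # (1, 'little to no household hunger')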

Comparison Groups

Another key assessment design factor is whether a comparison group will be included in the study. Lean assessments generally will not have an experimental or quasi-experimental design, whereby participants are assigned to the intervention or control study arms. Nonetheless, the use of a comparison group improves the rigor of the design. For example, in LIFT's ART Check, a comparison group of non-referral ART clients was included in the study to determine whether the probability of adherence among referral clients was greater than for non-referral clients. Without a comparison group, it is very hard to know whether any changes observed in the outcomes are related to the project. If we saw a steady increase in adherence among referral clients, this could have been the result of something outside our project, such as an adherence campaign by another project or the availability of free transport to the facility. With the inclusion of a comparison group from the same facilities, we can better determine whether:

• the improvement in adherence is specific to referral clients, which indicates it may be linked to the project; or

• the improvement applies to all clients at the facility, which indicates it is likely unrelated to the project.

Because participants were not randomized into referral and non-referral arms, there may be fundamental differences between the groups (e.g. clients completing a referral may be more motivated than the general ART client population and therefore also more likely to stay on ART). We therefore cannot claim any kind of causal relationship, but including data from a comparison group is still a stronger design.

You will need to collect the same data, in the same way, for your comparison group as you are planning for your intervention group. Comparison groups should be as similar to your intervention group as possible. For example, it is often less desirable to have your comparison group come from different communities or facilities than your intervention group, as they may have been exposed to different programs and services that affect your outcomes of interest. You also want to avoid selecting a comparison group among people who would not be eligible for your project's intervention, such as comparing OVC to non-OVC.

If you determine that there is an appropriate comparison group for your study, and you have the ability to collect data appropriately for that group, then you will need to outline the specific parameters for the comparison group. Matching comparison to intervention clients on key variables can help to make sure the groups are as similar as possible. For the ART Check, LIFT primarily matched comparison clients on a one-to-one basis according to age and sex. This helped build comparable groups – avoiding a situation where, for example, the majority of referral clients were very young and the majority of controls much older, which could certainly affect adherence. Importantly, LIFT also matched comparison clients based on ART initiation date and adherence at the time the referral client got their referral. This supports baseline equivalence in terms of the outcome of interest, whereby the referral and comparison clients had the same opportunity to adhere or default over time. Projects could choose to match intervention and comparison clients based on other variables that influence both participation in the program and the outcome(s) of interest, though selection of these variables may require further research.
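As an illustration of this kind of one-to-one matching, the sketch below greedily pairs each referral client with the first unused comparison client of the same sex, similar age, and a similar ART initiation date. The field names and tolerances are hypothetical; LIFT's actual matching procedure may have differed.

from datetime import date

def match_comparison(referral_clients, comparison_pool,
                     age_tolerance=5, init_tolerance_days=90):
    # Greedy one-to-one matching on sex, age and ART initiation date.
    used = set()
    pairs = []
    for r in referral_clients:
        match_id = None
        for c in comparison_pool:
            if (c["id"] not in used
                    and c["sex"] == r["sex"]
                    and abs(c["age"] - r["age"]) <= age_tolerance
                    and abs((c["art_start"] - r["art_start"]).days)
                        <= init_tolerance_days):
                match_id = c["id"]
                used.add(match_id)
                break
        pairs.append((r["id"], match_id))  # match_id is None if no match found
    return pairs

referrals = [{"id": "R1", "sex": "F", "age": 34, "art_start": date(2015, 3, 2)}]
pool = [{"id": "C1", "sex": "F", "age": 37, "art_start": date(2015, 4, 20)}]
print(match_comparison(referrals, pool))  # [('R1', 'C1')]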

Sampling

In order to develop your assessment sample, eligibility criteria must be clearly defined. If assessing a food security program, projects may be interested in the entire project population, or only a specific subset (e.g. women household heads, aged 21 to 45). Once eligibility for the assessment is defined, determine the sample size required. LIFT has not typically designed its assessments to yield statistically significant results; therefore, sample sizes have been determined based on the overall population of eligible clients, a general sense of the workload involved in collecting the data, and a determination of the number of observations that would be compelling to our project stakeholders, opting for the largest sample sizes possible to increase the precision of findings. If your project needs to demonstrate statistically significant results, it should work with a biostatistician to determine the sample size required for statistical power, which depends on numerous factors.
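For orientation only, the sketch below shows one common calculation a biostatistician might start from: the approximate per-group sample size for detecting a difference between two proportions (for example, adherence rates in referral and comparison groups). It uses the standard normal-approximation formula and assumes scipy is available; a real power calculation should also account for attrition, clustering and other design effects.

from math import ceil
from scipy.stats import norm

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    # Per-group n for a two-sided test of p1 vs. p2 at the given
    # significance level and power (normal approximation).
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. to detect adherence of 85% among referral clients vs. 70%
# among comparison clients:
print(two_proportion_sample_size(0.70, 0.85))  # 118 clients per group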

Selection of assessment participants can be done in several ways:

• Convenience sampling: This is a non-representative sampling approach whereby researchers or data collectors select the most convenient people that meet the eligibility criteria, until the sample size has been fulfilled. This approach is less desirable because findings cannot be generalized to the rest of the clients that participated in your project/service.

• Purposive sampling: Also called selective sampling, this is another non-representative sampling approach that takes into account the researcher's perspective to select cases that may be particularly informative, or to approximate representativeness on particular variables of interest (e.g. 30% urban and 70% rural). It can be effective in contexts where complete lists of all eligible participants cannot be generated (though this is less common for project-based assessments).

• Randomized sampling: This approach provides a representative sample, and findings can be generalized to the population from which the sample was drawn. It is done by generating a randomized list of all the clients that meet the criteria and developing an algorithm to determine who is included in the assessment (e.g. every nth client). Random sampling can be stratified by characteristics that may be related to the outcome of interest, such as gender and age (see the sketch after this list).
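The sketch below illustrates the randomized approach with simple stratification, drawing a fixed fraction of eligible clients at random within each sex and age-group stratum. The record layout and stratum variables are hypothetical; a fixed seed keeps the selection reproducible and auditable.

import random

def stratified_random_sample(clients, strata_keys, fraction, seed=2017):
    # Group eligible clients into strata, then sample the same
    # fraction at random within each stratum.
    rng = random.Random(seed)
    strata = {}
    for client in clients:
        key = tuple(client[k] for k in strata_keys)
        strata.setdefault(key, []).append(client)
    sample = []
    for members in strata.values():
        n = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, n))
    return sample

eligible = [
    {"id": "A01", "sex": "F", "age_group": "18-34"},
    {"id": "A02", "sex": "F", "age_group": "18-34"},
    {"id": "A03", "sex": "M", "age_group": "35+"},
    {"id": "A04", "sex": "M", "age_group": "35+"},
]
print(stratified_random_sample(eligible, ["sex", "age_group"], fraction=0.5))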

If a comparison group is included in the assessment design, projects also need to determine how these participants will be sampled. In the LIFT ART Check, some sites used randomized sampling to identify referral clients from each facility, then used convenience sampling to select the comparison group, requiring that comparisons be matched one-to-one to the referral client on several variables.


Table 2: Design Consideration Examples from LIFT

ART Adherence and Retention Check
• Research question: Do ART clients in LIFT sites who completed a referral to supportive community services have better ART adherence than clients who did not get a referral?
• What data could answer the research question? Biomedical testing, clinical records, client interviews/surveys on adherence.
• Prioritizing which data sources to use: Biomedical records were beyond the scope and mandate of LIFT. Clinical records were reasonably accurate and complete for the data points of interest, and LIFT had an existing relationship with the MOH, which made this data source feasible. Client interviews were also possible but would have had lower measurement validity than clinical records.
• Prospective or retrospective? Both; looked at adherence and retention history prior to referral and followed participants over time for up to 12 months after their referral.
• Longitudinal or cross-sectional? Longitudinal, to assess historical patterns for each client and changes over time after receiving a referral.
• Comparison group? Yes; each referral client was compared to a non-referral client matched on key demographics.
• Sampling: Convenience sample of ART clients who had completed a referral.

Vulnerability and Food Security Assessment
• Research question: Is completion of a clinic-to-community referral associated with changes in household food security and/or vulnerability among clients in Mohale's Hoek and Thaba-Tseka, Lesotho?
• What data could answer the research question? Referral network records containing client surveys, CBO program records, follow-up surveys.
• Prioritizing which data sources to use: Neither clinical records nor the records of CBO programs in the districts of interest contained data on household food security or vulnerability. Client surveys using relevant measures were included as part of the referral process; therefore, using this existing data and repeating these surveys one year after the referral was feasible.
• Prospective or retrospective? Prospective only; no retrospective data were available for the outcomes of interest.
• Longitudinal or cross-sectional? Longitudinal, to assess differences between the time when the client was referred and one year later.
• Comparison group? No; referral clients were compared to themselves at two different time points.
• Sampling: Convenience sample of clients who completed their referral and could be reached and consented to complete a follow-up survey.

lessons learned: assessment design

• Make sure there is clarity among stakeholders on your research question and that you are consistently referring back to the question as you work through all aspects of the assessment design and planning. As you make critical decisions and trade-offs, make sure these will not prevent you from answering your research question at the end of the assessment.

• Understand the locally applicable definitions of your outcomes of interest before you begin data collection. Be sure to triangulate this by asking multiple stakeholders, and try to get a written copy of any government policies or other authoritative documents. Ensure the outcome definitions you are using for the purposes of the assessment are clear to all relevant stakeholders.

• Decide beforehand how results will be reported and used. If you are expected to report statistically significant findings, it is important to consult with a statistician in the design phase.


component 3: ethical considerations

This section reviews important ethical considerations for lean client outcome assessments.

Protection of Human Subjects

While client outcome assessments may seem like a natural continuation of project activities, ethical considerations are paramount throughout the design and implementation to protect the rights and privacy of assessment participants. In well-designed client outcome assessments, risks to participants tend to be minimal. However, this is highly dependent on your client population and research question. If you are working with children or other vulnerable or marginalized groups, such as sex workers, the risks are greater. Similarly, the more sensitive the outcomes of interest are, the greater the risks to participants.

human subjects research

Human subjects research is defined as "research involving a living individual about whom an investigator obtains either data through interaction or identifiable, private information".⁸ Non-human subjects research is any study that does not meet the criteria of human subjects research.

⁸ National Institutes of Health: https://humansubjects.nih.gov/sites/hs/public_files/hs_infographic.pdf

At the design stage, projects must consider the following:

• Collecting and managing sensitive information: Unlike routine project data, depending on the research question, the data being collected may be moderately or even highly sensitive. Projects should design assessments to minimize risks to clients, such as by not including identifying information on data collection forms (one approach is sketched after this list), and by always following local and international ethical approval requirements. Data collectors must be trained in how to ask about sensitive information and must understand research ethics specific to your assessment. Data collection must take place in locations that are appropriate for sharing of sensitive information. Projects need to develop and follow a protocol to keep client data safe and secure.

• Obtaining informed consent: Assessment tools should be designed with detailed and appropriately communicated consent statements outlining any potential risks and benefits of participation, and proper informed consent must be obtained and documented from all study participants (i.e. individuals from whom personal data will be collected as part of the study).


• Data sharing: Projects should be clear on how and with whom data will be shared, and this information should be clearly

communicated to participants as part of the informed consent process. In determining how data will be shared, projects should

consider any funder requirements related to open data, as well as applicable national, local or organizational policies. For example,

some countries prohibit or discourage sharing of data collected about their citizens and programs.

• Comparison groups: If your assessment includes a comparison group, it is important to frame your questions and collect your data in a way that does not raise the expectation among comparison group members that they will obtain a service or benefit. While some of these clients may end up

receiving the project’s intervention at a later time, you do not want to create an expectation of this, which may result in disappointment

or increased vulnerability if that expectation is not met.

Ethics Review Board Approval

LIFT recommends that all assessment protocols involving human subjects be reviewed by a research ethics review board or Institutional Review

Board (IRB) to ensure that the principles of human subjects research are upheld at all stages of the assessment. An IRB’s goal is to protect

human subjects from physical, psychological, or other harm as a result of research activities, through careful review and approval of research

plans and protocols, and monitoring of the assessment implementation and results. Ethics review boards often exist within the institution

conducting an assessment, as well as within the country where research is being conducted. IRBs all have similar goals, though the structure

and process of the review often differs based on the context and the institution. In some instances, review and approval may be needed from

multiple boards. Projects should be aware that ethics reviews, while very important, can be time-consuming, particularly if multiple levels of

review (e.g. home institution and in-country) are required.


component 4: preparation

This section provides an explanation of how to prepare technically and operationally for lean assessments.

Dedicating time to properly prepare for lean assessments can increase the efficiency and quality of data collection, as well as help to avoid

common challenges, such as going over budget or missing deadlines. Many of the decisions made during the assessment design phase

accounted for logistical constraints such as budget, project staff expertise, and time. However, as you begin preparing for data collection and

analysis, these considerations will become even more important. At this stage, it may be helpful to identify a member of the project team to

serve as the assessment manager, separate from the Principal Investigator (PI), who will lead, and is ultimately accountable for, the study. This

segregation of responsibilities can be useful in ensuring that there is a designated individual responsible for monitoring the budget and activity progress, and for coordinating between data collectors in the field and any remotely based project staff.

The primary steps to prepare operationally for lean assessments include scheduling and budgeting, coordinating with and obtaining approvals from all relevant stakeholders (e.g. funders, local government, project partners, community leaders, etc.), obtaining IRB approval (as discussed above), and developing or adapting assessment tools.

Scheduling and Budgeting

The level of complexity of your lean assessment, and the type of data you are planning to collect, will directly impact how detailed your

schedule and budget will need to be. However, even the most basic assessments should have a schedule and budget that capture both the

direct work needed to complete the assessment, such as data collection and analysis, as well as more indirect work. This indirect work may

include tasks such as writing scopes of work for data collectors, procuring tablets or mobile devices for electronic data collection, and holding

assessment team check-ins.

When planning your schedule, it can be helpful to follow these steps:

• Define the activities: Make a comprehensive list of all activities that will need to take place to complete the assessment. Think not just about the major steps in the process but also about intermediate steps, such as recruiting data collectors, printing paper-based tools or data collection manuals, and the different steps in the analysis process (see component 6).


• Sequence the activities: Consider the order in which activities will need to take place, including which activities can take place

concurrently and which will need to be completed before another activity may begin.

• Estimate resources and activity duration: Often projects will estimate the length of time it will take to complete an activity without

carefully considering potential resource constraints. For example, will data collection take place at the same time as a vaccination

campaign when health facility staff may be less available? Or will the availability of a project vehicle restrict the days on which data

collection can occur? Estimating the resources – time, money, materials, and personnel – that will be available for the assessment will

help you to make more accurate estimates of how long each activity will take.

• Develop the schedule: With the inputs from the previous steps, you will now be able to create a comprehensive schedule for your

assessment. This schedule may take the form of a Gantt chart, or another scheduling tool with which the project is familiar.

A budget is an important planning tool, which helps to align the scope of the activity with available funding. Your schedule should be used

to inform your budget so that you properly account for all of the activities that need to take place during the assessment, and to ensure that

the budget accurately captures the resources and time you have allocated. In the process of developing your budget, you may find that some

assumptions made during the scheduling process are no longer valid, for example your budget cannot afford to have as many data collectors

as you had anticipated. It is for this reason that budgeting and scheduling should be considered integrated processes, whereby changes to

one document should be reflected in the other. Ensuring that the interrelated issues of budget, time and resources are addressed during the

preparation phase will reduce the likelihood that your assessment will experience unexpected budget or time overruns.

When creating a budget for an assessment activity, you will likely have to consult with a range of people to get accurate cost information,

particularly working with field-based staff to understand all in-country costs. Key budget items to be considered include:

• Staff time: Consider who will work on assessment design, training, data collection, and data analysis, as well as those who will provide

management and operational support. The budget should include the salaries and expected level of effort for all individuals involved

in these aspects of the assessment. Some questions to consider are:

o What role will project staff have in planning, data collection, analysis, and/or administrative/logistical support?

o Will the project need to hire data collectors or other professional consultants? If so, how many, for how long, and what is the

standard rate?


o Will non-project personnel, such as project partners or health facility staff/volunteers, help with data collection? If so, will they need to be paid for this work? How, and at what rate, will they be compensated?

• Travel: Include any anticipated travel needed throughout the assessment; this might range from international trips for HQ-based

project staff, to local travel to field sites for data collectors. Travel costs might include air fare, accommodation, vehicle costs, fuel, and

per diem for traveling staff.

• Equipment and materials: Consider what tools or technology are needed for data collection. If you have decided to use electronic

data collection, consider the cost of equipment (mobile phones or tablets) as well as the relevant applications or programs that you

may need to purchase. If you will be using paper-based tools, estimate how much printing will cost. You should also include the cost

of any data analysis software that you will use to analyze your data.

• Training: If training will be required for data collectors, consider what resources are needed for these trainings. This might include

renting a training venue, printing training materials, and stationery. Any trainings that will require travel for project staff, data collectors,

or consultants should also be reflected in the travel budget.

• Ethics review: Investigate the costs of international and in-country IRB/ethics reviews. These can vary substantially by country or site.

• Participant compensation: If your assessment will include collecting data from clients or stakeholders in-person, you should consider

whether these participants will be compensated in any way, such as with a transportation stipend. If so, these costs should also be

reflected in your budget. When considering whether participants will be compensated, be careful to consider the ethical implications

of paying participants for their participation in research, and what would constitute appropriate compensation.

Identifying and Engaging Relevant Stakeholders

Some stakeholders, such as government ministries or prominent implementing partners, may not have a direct role in the assessment; however,

it is both important and beneficial that they be consulted and made aware of the activity. In some cases – particularly with government

ministries – their authorization may be required to conduct the assessment. Therefore, engaging with them as early as possible in the

assessment process is valuable, and their involvement can help strengthen the assessment design. Additionally, early engagement can enhance

their overall relationship with the project and encourage their acceptance and use of assessment findings. Projects are also encouraged to

commit to a data sharing and utilization plan with key stakeholders to ensure that the community in which the assessment is being conducted

can benefit from the research and is afforded some ownership over the data collected from them.


In addition to stakeholders that play an indirect role in the assessments, it is often necessary to work with partners more directly to obtain

the data needed to answer your research question. When planning the assessment design, the project should already have thought through

potential data sources and considered how they will obtain the necessary data, and with which partners they might need to work. For example,

for the ART Check, LIFT was not able to access confidential client medical records directly, so the project worked with the health facility staff

to collect de-identified information from client medical records. For the VFS Assessment, LIFT collected primary data but worked through two

community partners who were involved in the referral network and had clients’ contact information to invite them for an interview. If this kind

of partnering is necessary, projects should consult with stakeholders to determine which existing structures or partners have, or could help to

obtain, the relevant data.

When trying to determine the optimal data collection partner, assessing the quality of relationships, as well as the function and capacity of

any potential partners is important. Once a project identifies possible partners to assist in data collection, it should:

• Assess the project’s existing relationship with the partner(s). Stronger established relationships will support collaboration,

communication and accountability throughout the data collection process. If the project does not have a strong existing relationship

implementation tip: authorizations

While explicit government authorization may not have been a requirement for your routine project monitoring and evaluation (M&E),

for lean assessments, projects often have to request permission or authorization from government ministries, local institutions or other

authorities to collect data. Even if authorization was required for your project’s M&E, the scope of the assessment may go beyond the

approval obtained for routine project activities. Identifying the authorizations required and beginning the process of obtaining those

authorizations should start as early as possible in the preparation phase. Failing to get proper authorization for data collection,

particularly when working with vulnerable populations, can severely damage a project’s reputation and relationship with the community

and other stakeholders, as well as potentially violate local policies or laws. High-level approvals may not always be disseminated in a

timely way to the local/site level, so the project should be prepared to show evidence of the approvals obtained, and may encounter

delays as sites may need to verify that the planned activity has been approved by the appropriate authorities before agreeing to

participate.

Page 28: Client Outcome Assessmentstheliftproject.org/wp-content/uploads/2017/04/ART... · typical donor-funded project budgets and timelines than experimental ... research designs such as

27 Client Outcome Assessments // Practitioner Guide

introductio

n develo

ping

a

research questio

n

assessment

desig

n

ethical co

nsideratio

ns prep

aration

logistics in the

field

data analysis

with a potential partner, you may need to dedicate additional time to educating the partner on the project, the purpose of the

assessment, and laying out clear expectations of roles and responsibilities for data collection.

• Understand their function as it relates to the data needed for the assessment. Does this partner own the data you are interested

in collecting? Will they need to work with other stakeholders to collect what is needed for the assessment?

• Assess their capacity and availability to support the assessment. Do they have the required resources? Do they have staff with the

necessary skills and expertise to conduct data collection? If the project partners with them, is the data collection likely to be completed

accurately and on time?

Considering these factors before data collection begins will help projects identify the most appropriate partner(s), and understand trade-offs

which may have to be made when selecting a partner. For example, in one country where LIFT conducted an ART Check, an implementing partner employed data clerks to manage health facility data. Although the data clerks were externally supervised by the implementing partner, they also worked directly within the Ministry of Health system. Therefore, while the data clerks had the necessary access and capacity to complete the data collection, their complex management structure required LIFT to budget additional time for planning, obtaining approval for, and building consensus on how the data clerks would be involved in data collection. If LIFT had instead worked directly with the health facility staff, with whom the project already had strong existing relationships, this would have reduced the delays and complications.

Once the data collection partner is identified, it is important to bring the appropriate people from that organization into the assessment

planning process to ensure consistent understanding of needs, expectations, roles and responsibilities. These high-level discussions should

include:

• Explaining the purpose of the assessment, highlighting how it is relevant to that partner’s goals and objectives. For example, in one

LIFT country, the implementing partner agreed to participate in the ART Check because their mandate is to improve access to and

retention in HIV-related care, and the assessment results could help to inform their own program activities.

• Jointly determining key roles and responsibilities, such as who will be responsible for training, collecting data, supervising data

collection, and providing quality control. Particularly important decisions include:

o Designating a primary point of contact within the partner organization – ideally someone in a position of seniority with

decision-making authority – to ensure necessary approvals are granted and to minimize the risk of communication breakdowns.


o Deciding which of the partner’s staff will collect the data. This should take into account who has access to the data, who is

available to complete data collection, and who has the necessary technical capacity. For the VFS Assessment, LIFT worked with

the staff of community-based partners who had experience managing the LIFT referral networks locally and conducting mobile

data collection. It is also important to determine whether multiple individuals will be involved in data collection and, if so, to clearly establish expectations for the tasks to be completed by each individual to ensure accountability.

• Deciding what kind of training will be required to ensure that data collectors understand their tasks, and can complete them

accurately and on time. When deciding what kind of training will be necessary, you should also think about what format that training

will take. Will a one-on-one orientation be sufficient, or will a more formal group training be required?

• Establishing compensation or incentives for data collectors, and ensuring that these are included in the assessment budget. If staff

or volunteers from the partner organization will be conducting data collection in addition to their normal work duties, you will need

to consider whether an incentive will be required for them to fulfill this role. In many cases, policies will already be in place specifying

when and if compensation should be paid. For example, for all LIFT assessments, incentives could only be paid to government staff

(including health facility staff) if the data collection work was completed outside of their routine work hours and was not part of their

existing scope of work. If there is not an existing policy in place regarding the payment of incentives, it is important that the project

works with the leadership of the partner organization to determine what would be an adequate and appropriate amount to complete

this additional work.

• Establishing a clear schedule of activities for the partnership is important to keep the assessment on track and avoid delays. An overall assessment schedule should have been developed in the planning phase, and can be used to inform a more detailed schedule of deadlines for all partnership deliverables throughout each phase of the assessment. To support accountability, each activity and deliverable should be clearly linked to the staff roles and responsibilities already established. In developing the schedule, it is important to keep in mind the other activities and priorities of the organizations involved. For example, if the organization collecting the assessment data has a new initiative launching during the assessment timeframe that will involve some of the individuals supporting the assessment, the schedule should realistically account for the constraints this will impose on assessment progress.


Creating Assessment Tools

Before creating assessment tools, a project should research whether there are existing tools that have already been developed to collect the

type of data needed for the assessment. For instance, if the objective is to assess changes in food security, several tools already exist that can

be used or incorporated, such as the HHS, which was used by LIFT in the VFS Assessment, as well as many others that assess food security in

different ways. Using tools that have already been validated for your outcomes of interest can bring rigor to your study and streamline the

tool development process. Projects may also be interested in replicating non-validated tools that have been used to measure similar outcomes

in other assessments. However, when choosing to use an existing tool, it is important to understand the specific purpose of that tool to

confirm that it adequately captures the outcome(s) of interest, ensure the planned use is consistent with the tool’s intended use, and to

evaluate its relative strengths and weaknesses before incorporating it into the assessment.

In some cases, existing data collection tools can be tailored to better fit the assessment objectives, data sources, and context. This is particularly applicable for tools that were initially developed to meet the needs of a specific project or study that is slightly different from the assessment being planned. However, it is important to keep in mind that most validated tools cannot be adapted without losing their validity. This means that omitting or even slightly re-framing some questions undermines the rigor of the tool and invalidates the established scoring framework, which is a significant trade-off. Alternatively, combining multiple related tools may support a better understanding of the outcomes of interest; however, projects should also consider whether completion of multiple tools may impose an excessive time burden on data collectors and/or participants.

incentives and motivation of data collectors

Considerations in determining an appropriate incentive for data collectors include:

• Are there existing incentive policies in place? This might include policies from the project's donor.

• Is a monetary incentive acceptable, or would another type of incentive be more appropriate?

• Will a set sum be offered to all data collectors, or will the incentive be based on hours worked or on achieving certain targets, e.g. number of data collection forms completed? If target-based incentives are offered, the project should consider how this might negatively affect data quality.

• What is viable based on the assessment budget?

It is important to be transparent about any incentives offered – particularly financial/cash incentives – and to ensure that all individuals involved in data collection understand the expected work and agree to the incentive amount. This will reduce misunderstandings and avoid situations where people feel they are not being compensated sufficiently.

In some cases, projects will need to create an entirely new data collection tool, rather than adopt or adapt a tool that already exists. For LIFT’s

ART Check, there was not an existing tool that captured the specific data points and outcomes needed to answer the research question. To

create a tool that would be as intuitive to facility-based data collectors as possible, LIFT first reviewed existing clinical ART forms to understand

how the relevant data points were being collected and tracked. LIFT’s ART Check tool was then developed to directly reflect the existing tools

in terms of format and terminology, and included guidance for clinical staff about where on their own forms to find the data requested in

the assessment tool. This approach of first understanding how existing data are being captured is highly recommended to minimize potential

errors or confusion during data collection.

Table 3. Paper-Based Versus Electronic Data Collection Tools

Paper-based tools – Pros:

• Generally cheaper than using electronic tools, which require the purchase of mobile devices

• Typically require less training for data collectors

Paper-based tools – Cons:

• More labor-intensive, often requiring more time for data collection plus the additional step of data entry

• Increased chance that data might be lost by misplacing paper tools, particularly if large amounts of data are being collected

Electronic tools – Pros:

• Data collection can be completed more quickly

• Incorporation of logical checks, skip patterns, and constraints on responses allows for quality control during data collection

• Data are automatically captured, eliminating the need for data entry

Electronic tools – Cons:

• Mobile devices for electronic data collection require a financial investment

• Additional training is required to ensure data collectors are comfortable with the technology

• Project staff must have the necessary skills to create tools and manage data collection using these systems


tool design considerations

Important questions projects should ask as they design assessment tools:

• How long should the tool be? Balancing the project’s interest in collecting all data points that will thoroughly answer your research

question with the need to ensure that data collection is not unduly onerous for data collectors and participants can be a challenge.

• Do the data points adequately reflect the research question?

• What type of question will capture the required data in the most valid and reliable way? Consider yes/no questions, multiple choice, Likert-scale responses, or open-ended questions.

• Are the questions framed in a way that minimizes potential bias? Always avoid leading questions such as "Did your food security improve as a result of the project?" and instead frame questions more neutrally (e.g. "Did you experience any changes in your food security after getting services from the project?"), then probe for more details, if needed.

• Is the tool sufficiently clear and easy to complete? Are the questions and data points as intuitive as possible for data collectors?

For assessments that require interviews or surveys, will the respondents easily understand the meaning of the questions?

• In what language should the tool be implemented? Does it need to be provided in several local languages? If so, will the responses

then need to be translated into the language used by the project team?


Clarification of Definitions

As described in component 2, it is important to clearly define your outcomes of interest, and other measures, when conducting assessments,

and to ensure that the definitions you intend to use are aligned with how the information of interest is captured in the assessment context.

For example, how terms such as “adherent” or “women of child-bearing age” are interpreted and recorded in clinical records may have

important implications for the assessment. Consensus on these definitions should also be reached with partners involved in the data collection

to ensure that they are applying the terms consistently and as the assessment design intends. If the data collectors are applying key terms

differently than the project team or differently from each other, this can significantly impact the reliability of the data and the value of the

assessment as a whole.

However, if you find that your measures or outcomes are interpreted in different ways in the assessment context, it may be possible to account

for this within your data collection tool. For example, in the course of preparing for ART Checks, LIFT discovered that certain health facilities

defined ART default in different ways. To address this and other potential consistency issues, LIFT added specific data points to the tool that

would allow for triangulation of data.9 LIFT included a yes/no response question for the health facility staff to indicate whether a client had

ever defaulted since starting ART, and also included a calendar within the tool to allow data collectors to mark: each month a client came to

the facility for care, each month the client picked up medication, and each month a client had defaulted (based on their definition) from care.

Cross-checking these data points allowed LIFT to assess whether clients marked as “never defaulted” based on the yes/no question had also

attended all of their scheduled clinical visits and picked up their medications each month, thereby meeting our intended adherence criteria.

These types of cross-checks or triangulation of data points can provide projects with a simple tool to verify data quality and consistency,

which is particularly beneficial if access to the original data source, such as medical records, is not possible.

9 Triangulation is a technique that supports data validation by cross-checking between two or more data points.


Figure 1. Snapshot of LIFT's ART Check tool, highlighting the calendar component, which was used for triangulation with the question "Has the client defaulted at any time in the past (Y/N)?"

[Figure: image of the ART Check tool showing the "Has Client Defaulted?" field alongside the calendar component.]
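To make this kind of cross-check concrete, the following is a minimal sketch in Python (pandas) of how the triangulation described above could be automated once the data are in electronic form. The column names (ever_defaulted, months_attended, months_picked_up) and the 12-month window are invented for illustration; LIFT's actual tool structure may differ.

import pandas as pd

# Hypothetical extract of ART Check data: one row per client.
df = pd.DataFrame({
    "client_id": ["A01", "A02", "A03"],
    "ever_defaulted": ["N", "N", "Y"],   # yes/no question
    "months_attended": [12, 10, 12],     # calendar: months the client attended
    "months_picked_up": [12, 12, 9],     # calendar: months medication was picked up
})

FOLLOW_UP_MONTHS = 12  # assumed assessment window

# A client marked "never defaulted" should have attended every scheduled
# visit and picked up medication every month; flag disagreements between
# the yes/no answer and the calendar so they can be queried with the site.
calendar_adherent = (
    (df["months_attended"] == FOLLOW_UP_MONTHS)
    & (df["months_picked_up"] == FOLLOW_UP_MONTHS)
)
df["inconsistent"] = (df["ever_defaulted"] == "N") & ~calendar_adherent

print(df[df["inconsistent"]])  # here, client A02 attended only 10 of 12 months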


component 5: logistics in the field

This section draws heavily on LIFT’s practical experiences to discuss logistics in the field and best practices for implementation of the

assessment.

Data Collection Training

Training Format

The complexity of the assessment, type of data collection, and the skill level of the data collectors will primarily determine the most appropriate

type of training a project will need to conduct prior to data collection. For example, if project staff or professional data collectors are

performing the data collection, and the tool is straightforward, the training required might be minimal. Regardless, for all assessments, LIFT

ensured that everyone involved in data collection – including those supervising – was provided with an orientation to the assessment and the

data collection process to clarify expectations, minimize potential delays, and improve data quality.

When deciding between conducting one-on-one orientations or group trainings with data collectors, projects need to balance a range of

considerations. In LIFT’s experience, group training is preferable; however, in some contexts individual orientations may be more practical or

necessary.

Table 4. One-On-One Orientation Versus Group Training

Cost

• One-on-one: Varies depending on the number of data collectors and how geographically dispersed they are. If on-site trainings can be combined with other project activities near each site, the added costs for training may be minimal.

• Group: Often includes costs associated with venue hire, catering, participant transport, and possibly participant accommodation/per diem. Cost-effectiveness may be proportional to the number of individuals being trained.

Time

• One-on-one: More time-consuming, particularly if there are long travel distances between data collectors.

• Group: More efficient, but requires more logistical planning.

Consistency

• One-on-one: Difficult to maintain consistent training standards, particularly if orientations are led by multiple people. Questions raised by one data collector may not be answered/clarified for all other data collectors.

• Group: Training is uniform, ensuring that all data collectors receive the same information, presented in the same way. Group dynamics can encourage more questions, and answers are provided consistently to all data collectors.

Accountability

• One-on-one: Expectations and timelines are established individually with data collectors at each site, and are more variable depending on relationships at each site.

• Group: Establishing expectations in a group setting encourages broader accountability to assessment standards and timelines among the whole group.

Peer-to-peer engagement

• One-on-one: Does not allow data collectors from different sites to engage with each other or share ideas.

• Group: Allows for peer-to-peer learning, mentoring, and the sharing of advice and knowledge between data collectors to overcome potential challenges.

Application of learning

• One-on-one: Depends on the data source. For the ART Check, on-site trainings allowed LIFT staff to support hands-on practice with the tool, since the clinical records were on-site.

• Group: Depends on the data source. For surveys, group trainings allow data collectors to practice the survey tool through mock interviews with one another.

Training Materials

Data collection training materials need to be tailored to the purpose and format of the assessment, as well as to the skill level of the data collectors. Projects should consider language barriers, existing comfort with data handling, and opportunities for hands-on learning when developing the training agenda and materials. Components LIFT recommends including in any data collector training or orientation are:

• Basic introduction that presents the project, clarifies the purpose of the assessment, and outlines the important details for the data

collection (these should be used as touch-points to be referred back to throughout the training);

• Step by step explanation of the data collection process, which highlights the role of the data collectors and expectations for their

involvement. Providing a visual explanation of the data collection process (e.g. in the form of a flow chart) can be useful to clarify how

a data collector’s role fits into the overall data collection activity, promoting increased engagement and accountability.


• Orientation to the data collection tool with sufficient time for hands-on practice using the tool and an opportunity for questions

and answers, and possibly minor revisions to the tool, if appropriate. Hands-on practice should be tailored to the training context.

• Explanation of operational issues and next steps, which should include details about when data collection will begin, how and when

to transmit data to the project team, deadlines for completion of activities, and how supervision and trouble-shooting will be provided.

Depending on the complexity of the assessment, projects might consider having someone unfamiliar with the activity review any training

materials prior to the training to ensure that they are clear and easy to follow for an external audience. Input from field-based staff can also

be valuable as they are likely to have more insight into what local data collectors will, or will not, understand.

LIFT also recommends that projects plan to follow up with data collectors immediately before, or shortly after, data collection begins to address questions that might have arisen following the training, reinforce key concepts, and remind data collectors of targets

and deadlines. If a project is collecting longitudinal data, staff should also judge whether refresher training(s) will be beneficial between data

collection rounds; in LIFT’s experience, this is likely to be the case if more than 3 months elapse between periods of data collection.

Data Collection Procedures

Scheduling

You should already have prepared a general schedule for the assessment, as well as a more specific schedule for the data collection phase.

However, it can be helpful to review this with the data collectors (and revise, if needed) prior to the start of data collection. Getting consensus

on the complete data collection process and schedule with your data collectors and those in a supervisory role can also be helpful in promoting

accountability and confirming that the project team’s expectations are realistic. The schedule should take account of:

• Availability of the data collectors and the level of work that is reasonable to expect of them;

• Project staff availability for check-ins and supervision;

• Transport considerations, including the distance to health facilities or other data collection sites, and the mode and availability of

transport for data collectors;

• Safety and security concerns, which might include restricted accessibility of certain regions due to conflict, seasonality or weather-

related issues;

• National holidays or holiday seasons, which can affect staff availability and overall assessment progress; and


• In-country procurement and administrative processes, and how these might affect the timing of activities. For example, failing to pay data collectors in a timely manner can cause demotivation and should be avoided.

Accountability and Supervision

Effective training will form the basis of quality data collection. However,

projects should also think about what additional supports, tools, or

processes to employ to ensure that proper data collection procedures

are followed. Practices LIFT has incorporated into lean assessments

include:

• Creation of instruction guides and “cheat sheets” for data

collectors that provide a summary of key information and

reinforce correct data collection processes;

• Establishing clear timelines and reinforcing them at regular

check-in points over the course of data collection;

• Having a system of supervision or oversight in place. This can

include supervision from project staff, or from the data

collectors’ own supervisors. Supervisors can be used as leverage points to ensure deadlines are met and data collectors follow agreed-

upon processes. The point of contact identified during the planning phase should also be familiar with the data collection process

and timeline to provide support should issues arise. The data collection progress tracker can be used to manage and document the

data collection process; and

• Planning for project staff to review data immediately as they are received, to allow an opportunity for errors or inconsistencies to be identified and addressed with data collectors.

items to consider during initial data review:

• If matching was used, check whether pairs were correctly matched on the appropriate data points.

• Review triangulated fields for consistency.

• Check whether responses match the question type, e.g. do single-response questions have multiple responses recorded? Do yes/no questions have answers other than yes or no?

• Assess whether there are any logical errors in the data, e.g. a date of ART initiation listed as earlier than the date of HIV testing.

• If unique IDs were given to participants as part of the sampling design, check that the IDs recorded during data collection match those on the sampling list.

• If projects are permitted access to the original data source, consider conducting spot checks of a sample of clients.
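Several of these review items lend themselves to simple scripted checks. The sketch below, in Python (pandas), illustrates a response-code check, a logical date check, and an ID match against the sampling list; the file and column names (collected_data.csv, ever_defaulted, hiv_test_date, art_start_date, participant_id) are invented for illustration, not LIFT's actual review tool.

import pandas as pd

df = pd.read_csv("collected_data.csv",
                 parse_dates=["hiv_test_date", "art_start_date"])
sampling_ids = set(pd.read_csv("sampling_list.csv")["participant_id"])

# Yes/no questions should contain only the expected response codes.
bad_codes = df[~df["ever_defaulted"].isin(["Y", "N"])]

# Logical error: ART initiation recorded before the HIV test date.
bad_dates = df[df["art_start_date"] < df["hiv_test_date"]]

# IDs recorded in the field should appear on the sampling list.
unknown_ids = df[~df["participant_id"].isin(sampling_ids)]

for label, problems in [("response codes", bad_codes),
                        ("date logic", bad_dates),
                        ("unknown IDs", unknown_ids)]:
    print(f"{label}: {len(problems)} record(s) need follow-up")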


Addressing Challenges

Working with and through partners can be an effective use of resources when conducting client outcome assessments, but it often reduces a project's direct control over the assessment timeline and the completion of key milestones. Despite careful planning and implementation, projects

may still encounter challenges and delays in data collection. The challenges described below reflect some of the issues LIFT experienced over

the course of the ART Checks and VFS Assessment. This list is not all-encompassing and projects are encouraged to assess potential risks and

identify appropriate responses based on their own knowledge of the assessment context.

Table 5. Data Collection Challenges and Solutions

Challenge: A data collector fails to submit data according to the agreed-upon schedule.

Potential solution: The supervisor and/or study PI should check in with the data collector to understand the problem. If necessary, they should then inform the data collector's supervisor and request that they reinforce the importance of adhering to the schedule. If deadlines are still not met, this should be escalated to the primary point of contact at the partner organization – identified during the planning phase – for remediation. Where possible, establish a specific and realistic revised timeline for submission that takes into account the data collector's workload or other prohibitive factors; contact them as often as possible to remind them of the new deadline and assess progress.

Challenge: Submitted data are incomplete or of poor quality.

Potential solution: Provide additional training and mentoring to address gaps in understanding and improve data quality. Withhold payment of incentives until issues are resolved. Projects should make the requirements for payment clear during the recruitment and training of data collectors; this should include non-payment for incomplete or inaccurate data.

Challenge: A data collector is absent, sick, or otherwise unavailable to complete the data collection.

Potential solution: If the budget allows, projects should consider training at least two data collectors per site so that the workload can be divided and there is continuity if one person becomes unavailable.

Challenge: Connectivity issues delay data being submitted to the project staff electronically.

Potential solution: Require data to be submitted to the project staff on a daily basis – this can reduce the need to transmit very large files, which is often not possible in low-connectivity environments. A system of regularly backing up data to a thumb drive or secondary device can also safeguard against data loss until connectivity improves sufficiently for data to be shared with project staff.


component 6: analyzing the data

This section discusses the process for analyzing data from the assessment and the creation of key deliverables.

Data Transmission

During the logistics planning, you will already have decided on the method

and frequency with which data will be transmitted to project staff for

analysis, and will have communicated this to your data collectors. The

frequency and mode of transmission will largely be determined by the

format in which the data are collected – e.g. a paper-based system or

electronically – and by the connectivity available to the data collectors. It is

recommended that data be transmitted from the field to project staff at the

end of each day. This may involve emailing electronic data to project staff, scanning and emailing paper forms, or uploading electronic data or scanned forms to a cloud-based data-sharing platform. If it is not possible to transmit data to project staff at the end of each day, the data should at least be saved to a back-up device, such as a flash drive, to prevent accidental data loss.

data security

The level of security necessary for data transmission and storage will be related to the type of data being captured, and particularly to the sensitivity of those data. If you are collecting personally identifiable information (e.g. client names or addresses), or data on sensitive topics such as HIV status, it is important to have a data security plan in place – this will likely be required as part of an IRB submission – detailing who will have access to the data and how they will be protected from unauthorized access. This might include password-protecting files, encrypting files that will be emailed, or storing hardcopy forms in locked filing cabinets.
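As one minimal illustration of these practices, the sketch below encrypts a dataset before it is emailed and decrypts it on receipt, using the Python cryptography library. The filenames are invented, the key must be shared through a separate, secure channel, and this is a sketch of the general idea rather than a substitute for an approved data security plan.

from cryptography.fernet import Fernet

# Generate a key once and share it with authorized staff through a
# separate, secure channel (never in the same email as the data).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the dataset before transmission...
with open("art_check_data.csv", "rb") as f:
    encrypted = fernet.encrypt(f.read())
with open("art_check_data.csv.enc", "wb") as f:
    f.write(encrypted)

# ...and decrypt it on the receiving end with the same key.
with open("art_check_data.csv.enc", "rb") as f:
    print(fernet.decrypt(f.read())[:80])  # first bytes of the recovered file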

Data Entry

Once project staff receive data, the next step in the analysis process is to compile what has been received into a dataset, or datasets. If the data were collected electronically, for example using a mobile data collection platform such as Open Data Kit (ODK) or CommCare,10 minimal labor will be needed for data entry, as these platforms allow users to export data directly into easily-usable electronic formats, such as .csv. Although this approach involves less manual effort for data entry, it does require staff to have competency with these systems to ensure that data are exported correctly. There may also be costs associated with some mobile applications that limit their accessibility to partners.

10 There are many mobile data collection platforms available; ODK and CommCare are two with which LIFT has experience. ODK is an open-source set of tools for mobile data collection, while CommCare is a platform that was developed and is licensed by Dimagi. More information on these tools can be found at https://opendatakit.org/ and http://www.dimagi.com/products/

If data have been collected using a paper-based system, this will require a more time-intensive process to compile the data into an

electronic dataset. The essential steps in this process are:

• Create a tool(s) into which the data will be entered. For each country in which LIFT conducted an ART Check, an Excel-based

tool was developed. These tools were structured to capture all the data points from the paper-based form in a systematic way.

When developing such a tool, it is also important to think ahead to the types of analyses you intend to conduct and the program

in which you intend to conduct these analyses, to ensure that the tool is structured to record the data in a way that makes those

analyses possible. For example, considerations might include:

o How to format key demographic fields, for example should you record clients’ date of birth or their age in months/years;

o Whether a field is needed to connect matched pairs, such as intervention clients and matched comparison clients;

o Whether formula-based fields are needed, for example to calculate scores on measures such as the PPI;

o Addition of a field recording the name of the person completing data entry, which can assist with subsequent error-

checking.

• Establish a protocol for data entry. Depending on the type of data collected, this protocol may be relatively simple. However, it is important to establish a set of clear procedures in advance of data entry to ensure that all staff involved in entering data follow consistent processes. This should include instructions for date and number formatting, for example, whether dates should be entered in the MM/DD/YYYY format commonly used in the US, or the DD/MM/YYYY format common in most other contexts. Guidance on how to record missing data points should be included, as well as a clear system for coding responses. Creating a formal codebook may be advantageous for more complex datasets.

• Complete data entry. It is important to set realistic expectations for the amount of time it will take for data entry to be completed.

The process of entering data manually can be laborious and setting unrealistic targets increases the risk that entry will be rushed,

increasing the likelihood of error. Consider how many staff have the time and skills needed to complete data entry and balance

this with the overall time constraints of the activity when setting a timeline for data entry.


• Conduct error-checking of data. Even if staff involved in data entry exercise extreme care, errors are almost unavoidable with manual data entry. Error-checking is recommended prior to proceeding with analysis (a minimal sketch of the first strategy follows this list). Strategies for error-checking include:

o Have two people enter the same data and then compare the two versions to look for discrepancies;

o Randomly sample data points to be checked for accuracy. If errors are routinely connected with a particular staff member, this would suggest that all data entered by that individual should be reviewed;

o If data are entered in Excel, the data validation tool can be employed to set restrictions on the data that can be entered in a particular cell (e.g., an age field will only accept numeric values between 0 and 100).
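A minimal sketch of the double-entry strategy in Python (pandas), assuming both staff saved their entries as CSV files with identical columns and the same set of clients; the file names and the client_id key are hypothetical:

import pandas as pd

# Load the two independently entered versions of the same forms; both
# files are assumed to contain the same clients and the same columns.
entry_a = pd.read_csv("entry_clerk_a.csv").set_index("client_id").sort_index()
entry_b = pd.read_csv("entry_clerk_b.csv").set_index("client_id").sort_index()

# compare() returns only the cells where the two entries disagree,
# labeled "self" (clerk A) and "other" (clerk B); each discrepancy
# should be resolved against the original paper form.
discrepancies = entry_a.compare(entry_b)
print(f"{len(discrepancies)} record(s) with discrepancies to resolve")
print(discrepancies)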

Data Cleaning

It can be tempting to assume that your carefully implemented processes for collecting

and entering data will have created a dataset that is immediately ready for analysis.

However, data cleaning is an important next step in the data analysis process. Cleaning

involves identifying and correcting, or removing, inaccurate or unusable records from a

dataset. Even data collected electronically may contain errors so it is important that all

datasets undergo cleaning prior to analysis taking place.

This guide is not intended to provide in-depth instruction on data cleaning. All statistical software packages have particular commands that can be used in data cleaning, and many existing resources that provide guidance on this topic are readily available online.11 However, LIFT learned these important lessons during data cleaning:

11 Resources on cleaning data can be found at the following links: http://data.library.utoronto.ca/cleaning-data-stata;

https://www.sas.com/storefront/aux/en/spcodydata/61703_excerpt.pdf;

http://www.betterevaluation.org/en/resource/guide/12steps_data_cleaning

why create a data cleaning protocol?

• Data need to be complete, accurate, reliable, and valid.

• Uncleaned data can lead to inaccurate or biased conclusions.

• Cleaning reduces the likelihood that time will be wasted repeating analyses if errors are discovered after analysis has begun.


• Ensure that your data cleaning protocol reflects your established inclusion/exclusion criteria. For example, if your research protocol dictated that data would only be collected from adult (age 18 and above) health facility clients, mistakes may have been made in the field whereby data were collected from an individual under age 18. When cleaning your data, be sure to check for cases that do not meet your inclusion criteria and exclude them from further analysis (see the sketch after this list).

• If your research involves matched pairs, the cleaning process should include assessing whether pairs are correctly matched

according to all of the criteria set in your research protocol. If the pair was not correctly matched, it should be excluded from

analysis.

• Establish how you will handle missing data. For some data points it may be acceptable to record data as missing, or you may

consider employing statistical methods for handling missing data (if your project staff has the necessary statistical expertise for

this). It is particularly important at the cleaning stage to determine which data points are essential, and for which missing data

would result in exclusion of that record.

• If you are collecting time series or longitudinal data, be sure that your data cleaning process remains consistent over time.
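A minimal Python (pandas) sketch of the inclusion-criteria and essential-fields checks follows; the file name and the column names (age, client_id, hhs_score) are hypothetical stand-ins for a project's own variables.

    import pandas as pd

    df = pd.read_csv("assessment_data.csv")  # hypothetical file name

    # 1. Enforce the inclusion criterion from the research protocol:
    #    only adult (age 18 and above) clients may remain in the analysis set.
    underage = df[df["age"] < 18]
    print(f"Excluding {len(underage)} records that violate the age criterion")
    df = df[df["age"] >= 18]

    # 2. Records missing an essential field are excluded entirely;
    #    other fields may simply be recorded as missing.
    essential = ["client_id", "hhs_score"]
    incomplete = df[df[essential].isna().any(axis=1)]
    print(f"Excluding {len(incomplete)} records missing essential fields")
    df = df.dropna(subset=essential)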

Data Analysis

Your data analysis plan should be directly related to the research question you want your data to answer, and it should be firmly established before data collection begins. Having an analysis plan prevents projects from conducting analyses at random until they find a useful or interesting result; that approach often yields biased, invalid, or inaccurate results, and should always be avoided.

The type of data collected, the complexity of the dataset, and the planned analyses will largely dictate the type of software a project will need to use to analyze its data. The two main options to consider are Excel and a statistical software package, such as SAS, Stata, or R.


Table 6. Considerations for Data Analysis Software

Excel

Pros:
• More widely used and accessible to those without a statistical background
• Generally sufficient for analyzing small or simple datasets
• Readily available at low or no cost

Cons:
• Limited capacity to conduct complex analyses
• Difficult to identify errors resulting from typing mistakes in formulas used for calculations
• Time-consuming to conduct the same analyses with multiple datasets
• Lacks analysis code that would allow others to repeat analyses with your dataset, which can be helpful to ensure calculations and analyses were accurate

Statistical software (e.g., SAS, Stata, R)

Pros:
• Allows for more complex analyses
• Less likely to generate calculation errors, as the analysis code is easy to review (although human error in selecting incorrect analysis functions is still possible)
• Once your program has been developed, it can be used to consistently analyze multiple datasets
• Rapidly produces tables and visualizations of summary statistics, greatly expediting data cleaning and analysis

Cons:
• The cost can be prohibitive, as most programs are available only with purchase of a license
• Though help, such as guides and tutorials, is easily available online, the software requires at least some statistical programming knowledge to understand and use
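To illustrate the reusability that scripted analysis offers, here is a minimal sketch; the file names and the group/defaulted columns are hypothetical, not drawn from LIFT's actual tools.

    import pandas as pd

    def summarize(path: str) -> pd.DataFrame:
        """One reviewable analysis, reusable across data collection rounds."""
        df = pd.read_csv(path)
        # Hypothetical columns: 'group' (referral vs. comparison) and
        # 'defaulted' (1 = ever defaulted from ART, 0 = never).
        return df.groupby("group")["defaulted"].agg(["count", "mean"])

    # The same code runs unchanged on each round's dataset.
    for path in ["round_1.csv", "round_2.csv"]:
        print(path)
        print(summarize(path))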

When conducting lean assessments, it is important to be realistic about what your analyses can tell you, and what conclusions you can draw from the data. You often will not be able to generalize your results beyond the study population, draw direct or causal connections between your project and the outcomes measured, or generate statistically significant results. However, lean assessments can still yield important and useful results if you plan and conduct your analysis well.


Some of the analytical approaches used by LIFT during ART Checks and the VFS Assessment involved (a minimal sketch of the pre-post approach follows this list):

• Pre-post analysis: As noted above, the VFS Assessment used a pre-post design in which data on food security and economic vulnerability were collected from referral client households prior to receipt of a referral (pre-test) and again one year following completion of the referral (post-test). Analyses were then conducted to compare each individual household's pre-test scores on these validated measures of food security (HHS) and vulnerability (modified PPI) to their post-test scores on the same measures.

• Changes over time: In select countries, LIFT collected ART Check data on the same individuals at multiple time points to create a longitudinal dataset. In these cases, LIFT calculated the proportion of clients who were reported to have ever defaulted from ART at each time point, and compared those proportions over time for referral clients vs. the whole health facility population. This method allowed LIFT to assess whether there was a difference between the risk of defaulting among the sampled clients who had received a referral as part of a LIFT-supported referral network and the overall risk of default among all ART clients in that health facility.

• Matched pairs comparisons: Where possible, LIFT collected ART Check data on referral clients and matched comparison clients. In these cases, LIFT compared the proportion of referral clients who defaulted during the study window to the proportion of matched comparison clients who defaulted over the same period. This approach allowed LIFT to understand whether the sampled referral clients had a different risk of default than demographically similar clients who had not received a referral.

• Qualitative analysis: If qualitative data are collected as part of the assessment, such as through client or stakeholder interviews or focus group discussions, they can be coded and analyzed based on themes relevant to the research question and objectives.
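A minimal Python (pandas) sketch of the pre-post comparison follows; the file names, the household_id key, and the hhs_score column are hypothetical stand-ins for the assessment's actual variables.

    import pandas as pd

    # Hypothetical files: one record per household at each time point.
    pre = pd.read_csv("pre_test.csv").set_index("household_id")
    post = pd.read_csv("post_test.csv").set_index("household_id")

    # Keep only households observed at both time points, then compute
    # each household's change in score (post minus pre).
    paired = pre[["hhs_score"]].join(post[["hhs_score"]], how="inner",
                                     lsuffix="_pre", rsuffix="_post")
    paired["change"] = paired["hhs_score_post"] - paired["hhs_score_pre"]

    print(paired["change"].describe())    # distribution of pre-post changes
    # Lower HHS means less hunger, so a negative change is an improvement.
    print((paired["change"] < 0).mean())  # share of households that improved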

importance of context

Pre-post analysis can be particularly useful if projects are looking to measure potential changes in beneficiaries' outcomes of interest after exposure to a particular intervention. However, with this type of analysis, it is especially important to be aware of other, non-intervention-related changes between the pre-test and post-test, and how these contextual differences might impact results. For example, post-test data collection for the VFS Assessment was designed to take place one year after the pre-test to control for routine changes in food security across the farming calendar; however, the El Niño drought severely impacted harvests in many areas of Lesotho in the post-test year. This change in the agricultural context was something LIFT then had to document and take into account when interpreting the results.


Sharing Results

The final consideration in the data analysis process is determining how results may be shared. As was discussed at the beginning of this guide, developing a strong research question is the key to conducting good lean data collection. Your research question will also guide how your results can and should be used. For example, if your research question was solely directed at understanding a particular component of your project in one site, the results may be less relevant to other settings. Having a strong and specific research question will also help you to be clear about what your results can and cannot show, which is particularly important when it comes to sharing those results. Generally, the results of lean data collection activities may be used to:

• Inform project design: This might include informing adaptations to the project itself, helping to determine which elements of a project should or could be scaled up, or providing evidence to guide the design of similar projects in the future.

• Provide knowledge: While it is unlikely that the results of lean data collection methods will yield generalizable knowledge, this does not mean that the results are not informative. When layered with routine project data, the results of lean data collection methods can help to explain why a project succeeded or failed to achieve anticipated results, or can identify unexpected outcomes of an intervention. In addition, they may add to existing research on the same topic, contributing to an overall evidence base linking your project interventions with the outcomes of the study.

Many projects will be familiar with writing a report describing the results of a data collection activity; however, these reports are often not seen outside of an immediate group of project stakeholders. While such reports are important, with the growth of fields such as implementation science and operations research, there is increasing interest in the evidence yielded by lean data collection methods. Projects should therefore also consider sharing results in the form of conference posters and presentations, webinars for practitioners in their field, or written and published case studies. This allows the project to share lessons from the assessment process, as well as the assessment results, more widely. Additionally, projects should always share assessment results and findings with the community and local stakeholders, for whom the results are likely to be particularly interesting, relevant, and valuable.


annex: practitioner toolkit

• tool a: assessment design considerations checklist

• tool b: scheduling template

• tool c: budget template

• tool d: data collector training agenda

• tool e: data collection process diagram

• tool f: job aid/cheat sheet example

• tool g: data collection progress tracker

• tool h: data analysis template

• tool i: ART adherence and retention check tool

• tool j: VFS tool


tool a: assessment design considerations checklist

This is a list of questions and considerations to guide projects in thinking through critical aspects of the assessment purpose and design. It can be used throughout the design phase to document deliberations and important decisions on a range of issues; for each issue below, record your notes and responses alongside it.

Developing Research Question Ideas:

• Is there a question your project needs or wants to answer?

• What would you like your project to be able to demonstrate?

• Are there aspects of your project that you are curious about that are not currently being measured?

• Are there ways your project can contribute to closing a gap in evidence related to the work you do?

• What is your working research question? This should be a focused and clear question that you expect your assessment to answer.

Refining the Research Question (see Table 1 for examples):

• Is it an empirically testable question (i.e., answerable with data)? Yes / No. If No, revise to make this a question your project can answer with data.

• Is the scope related to your project operations and activities? Yes / No. If No, revise to make the question specific to your project activities.


• Are all aspects of the research question specific? Yes / No. Check each of the following:
o the population you are interested in;
o the independent variable (i.e., the intervention or program component you are testing);
o the outcomes of interest;
o the geography.
If No to any, refine until each of these is specific and clear.

• What is your final research question?

What Data Could Answer the Research Question?

• Are you aware of existing datasets that could answer your question? Yes / No. If Yes, what are they?

• Do you have access to them? Yes / No. If you do not have access to the data, who does?

• Can you get them to share the data with you? Yes / No. What would this require?

• Is primary data collection required or preferred? Yes / No. If Yes, from whom would you collect data (i.e., project clients, other stakeholders, etc.)?

• What are the main data sources you are considering for this assessment?

Which Data Sources are the Best Fit?

• Which data are more or less valid?

• Which data are more or less reliable?

• What is most feasible for your project to collect?

• What is/are your selected data source(s)?


• What type of data will be collected? Qualitative / Quantitative / Both

Will the Assessment be Prospective or Retrospective?

• Are there past/existing datasets that already include the data you need? Yes / No. If No, a retrospective assessment is not possible.

• How far into the future could your project collect data? Is that long enough to answer your research question? Yes / No. If No, rethink the feasibility of this data source.

• Will you have a prospective or retrospective design, or a mix of both? Prospective / Retrospective / Both

Will the Assessment be Longitudinal or Cross-Sectional?

• How many data collection points will your study include?

• If only one data collection point, what is the planned timing?

• If multiple data collection points, what is the planned frequency of data collection (e.g., every X weeks/months)? For how long/over what period?

• Will your assessment be longitudinal or cross-sectional? Longitudinal / Cross-sectional

Will You Use a Comparison Group?

• Is there at least one appropriate comparison group that will contribute to the findings of your assessment? Yes / No. If Yes, who are they (i.e., what population would you draw them from)?


• Will collecting data from this group be feasible and ethical? Yes / No

• Will your assessment include a comparison group? Yes / No. If Yes, what are the characteristics of the comparison group?

How Will You Sample Participants?

• What is the total sample size needed for your study? Is that feasible for the project to collect? Yes / No. If No, consider whether a smaller sample can meet the requirements of the assessment.

• Will you use convenience, purposive, or random sampling of participants? Convenience / Purposive / Random

If using a comparison group:

• How will they be sampled? Convenience / Purposive / Random

• Will you use matching? Yes / No. If Yes, by what criteria/characteristics (sex, age, etc.)?

• How many comparison clients will be included for each program client?


tool b: scheduling template

This is an illustrative Gantt chart that can be adapted to develop a schedule for an assessment. It can be used to think through the processes and timing of each of the phases and specific activities that are required.

The chart tracks each activity by week (W1-W4) across Months 1-6:

• Develop research question
• Assessment design: select data source(s); make design decisions
• Preparation: develop schedule and budget; obtain local approvals; identify and orient data collection partner; select/develop data collection tool(s); IRB submission and processing
• Logistics: develop training materials; training of data collectors; printing of data collection tools (if required)
• Data collection: supervision/check-ins
• Data analysis: data entry/data verification; data cleaning; data analysis
• Reporting/sharing results


tool c: budget template

This is a budget template to guide projects in preparing a budget for assessments. Not all line items included in this template will be necessary for all assessments, and some assessments may require that additional line items be added. Therefore, users are encouraged to tailor this template, or use existing organizational budget templates, in order to create the most comprehensive and accurate budget for their assessment.

Costs

Cost Element | Rate/Day | Days | Total

I. Salaries and Wages (Subject to Benefits)
Staff 1
Staff 2
Staff 3
CPA at [insert rate]
Total Estimated Salaries/Wages

II. Benefits | Percent | Based On | Total
Fringe Benefits Rate
Total Fringe Benefits


III. Consultants/Stipends (Staff without Benefits) | Rate/Day | Days | Total
Consultant 1
Consultant 2
Consultant 3
Total Consultant/Stipends

IV. Travel and Transportation | Rate | Units | Total
Flights (Origination city - Destination city)
Hotels
Meals and Incidentals (M&IE)
Ground Transportation
Visas
Per diem (project staff)
Per diem (consultants)


Vehicle costs (rental and fuel)
Transportation reimbursement
Total Travel and Transportation

VI. Other Direct Costs | Rate | Units | Total
Printing & copying
Stationery
Communications (phone, internet)
Tablets
Mobile phones
Thumb drives
Room rental (for training)
Food (for training)
Total Other Direct Costs

VII. TOTAL DIRECT COSTS (I. - VI.)


VIII. Indirect Costs | Percent | Based On | Total
Indirect Cost Rate (specify)
Total Estimated Indirect Costs

IX. TOTAL PROJECT COSTS (VII. - VIII.)


tool d: data collector training agenda

This is an example training agenda for assessment data collectors. The agenda assumes that one tool and mobile data collection will be utilized in the assessment. It is recommended that users adapt this tool to their assessment design, extending or condensing the timeframe as appropriate; however, LIFT recommends that each of the agenda items reflected in Day One of this agenda be included in all data collector trainings.

Training Objectives:
- List the primary objectives or learning outcomes of the training here

TRAINING DAY ONE

Time | Agenda Item | Objective | Materials

9:00am | Introductions & Overview of the Assessment | Understand each person's role and review the purpose and timeframe of assessment activities | Introduction Slides

9:30am | Project Overview | Understand the project history and activities | Project Overview Slides

10:30am | BREAK

10:45am | Ethics Review | Understand the rights of participants and best practices for data collection | Ethics Slides

11:30am | Overview of Data Collection and Role of Data Collectors | Understand the purpose of each tool; discuss roles and responsibilities of data collectors | Paper copies of Tools; Assessment Process Slides


1:00pm | BREAK

2:00pm | Review Tool A[12][13] | Review each question on the English version of Tool A | Paper copy of Tool A; Excel projection

3:00pm | Tool A Practice | Work in pairs to practice using Tool A | Paper copy of Tool A

3:30pm | Tool A Discussion | Discuss questions about Tool A that arose during practice | Paper copy of Tool A

4:00pm | Review | Review topics discussed over the course of the day and provide the agenda for Day Two of training | Summary Slides

[12] If mobile data collection is going to be conducted, LIFT recommends that data collectors first be trained to use the tools in paper form. This allows data collectors to become comfortable with the tool itself before addressing the additional skills required to use mobile devices and mobile data collection apps.

[13] This agenda assumes that only one tool is utilized throughout the assessment. If a project plans to conduct an assessment with multiple different tools, the training timeframe and agenda should be expanded accordingly.


TRAINING DAY TWO

Time | Agenda Item | Objective | Materials

9:00am | Day One Review | Review topics discussed during training Day One | Summary Slides

9:30am | Introduction to Tablets and Mobile Data Collection[14] | Gain basic familiarity with devices and mobile data collection app | Data collection devices, loaded with appropriate app

10:30am | BREAK

10:45am | Introduction to Tablets and Mobile Data Collection (cont'd) | Gain basic familiarity with devices and mobile data collection app | Data collection devices, loaded with appropriate app

12:30pm | BREAK

1:30pm | Review Tool A on data collection devices | Review the English version of Tool A using the data collection app, including how to select/type responses | Devices loaded with Tool A

2:30pm | Practice Tool A on data collection devices | Work in pairs to practice using Tool A on the data collection app | Devices loaded with Tool A

4:00pm | Tool A Discussion | Discuss questions that arose during practice using Tool A in the data collection app | Devices loaded with Tool A

4:30pm | Next Steps | Discuss when data collection will begin, review processes, and provide information on how to obtain support for troubleshooting | Summary Slides

[14] This assumes data collectors are new to mobile data collection. If more experienced data collectors have been recruited, this component may be condensed so that Day Two of the training is completed in half a day.


tool e: data collection process diagram

This is a diagram of high-level steps in the data collection process, which projects can use as a reference for data collectors. Projects should adapt this process diagram to their particular data collection process.


tool f: job aid/cheat sheet example

This is the outline of a data collector "cheat sheet," which projects can tailor to their assessment and provide to data collectors to serve as a simple reference during data collection.

Assessment Name: Insert the name of the assessment here.

Name and Contact Details for Assessment Lead(s): Insert the name and contact details of the assessment leads and/or the people to be contacted in case of questions and issues during data collection.

Purpose of Assessment: Provide a 2-3 sentence summary of the purpose of the assessment.

Tools and Materials: Provide a summary of the tools and materials that will be provided to each data collector, and a short explanation of how each should be used. This is particularly important if there is more than one data collection tool to be used. Example: Tool A is a 6-question survey which asks health facility staff self-report questions about how the project has impacted their clients. This tool should be used to interview the project volunteer at each health facility where you are conducting data collection. Tool B is a 10-question record check tool, which should be completed through a review of the clinical records of all clients on your sampling list.

Assessment Schedule: Provide a timeline with key dates, including the number of each tool to be completed at each location (e.g., "20 clinical record checks should be completed at each health facility, per the sampling list you have been provided"), the deadline for completion of data collection, and the dates of any check-ins.

Ethics: Depending on the assessment design, it might be beneficial to add short reminders about important data collection ethics considerations. These could include: a prompt to obtain informed consent, reminders about data security best practices and who should be accessing sensitive information, or instructions about how to deal with identifiable information (e.g., "Do not record client names, phone numbers, addresses, or other identifiable information on any of the tools.").

Instructions: Insert simple, clear instructions for how to complete the data collection tool(s). Depending on the tool(s) you will be using in your assessment, these instructions may require more or less detail. For example, a client questionnaire may require a questionnaire guide that provides an explanation/clarification for each question, whereas a tool to collect data from a record review may require less in-depth instructions (see below for an example of this from LIFT's ART Check).


You will use a separate copy of Tool A to record the data for each client. (Note that LIFT staff and data collectors cannot do this because we should not have access to any information that can be used to identify clients.)

At the top of each copy of Tool A, fill in:

• The date when the form is filled in,
• Which staff member(s) are completing the form, and
• Circle the name of the facility where you are checking the client records.

Complete Columns A-L as follows:

A. Copy the client ID # from your sample list to box A. (This should be the same ID # as recorded on the client's referral registration form and does not contain any identifiable information.)

Columns B-F require the client's registration/referral record.

B. Write the client's sex
C. Write "Yes" or "No" for whether the client reported knowing their HIV status at the time of registration/referral
D. Write "Yes" or "No" for whether they reported being HIV+
E. Write "Yes" or "No" for whether they started ART
F. Write "Yes" or "No" for whether they reported taking all of their ART

Columns G-L require the client's ART yellow card, MasterCard, or computer record. Depending on your facility, some information may come from different sources.

G. Write the client's age in years
H. Indicate whether you can find the client's yellow ART card(s) and/or MasterCard. Write "Yes" if you can find the card (or cards) and "No" if you cannot find any ART record for the client
i. Please write a note at the bottom of Tool A if the client has multiple yellow ART cards
I. Indicate if the client is HIV+. Write "Yes" if the client is HIV+ and "No" if the client is HIV-
J. Indicate if the client has ever defaulted on their ART treatment in the past. Write "Yes" if they did, and write "No" if the client has always visited the NCST site to pick up their ART


K. Indicate the date the client initially tested positive for HIV, in day, month, year format
L. Indicate the date the client initiated ART, in day, month, year format
ii. If the client has multiple yellow ART cards indicating multiple ART initiation dates, make a note at the bottom of Tool A and write the EARLIEST ART initiation date in box L. For example, this could be the case if the client's yellow ART card is a DUPLICATE.

Using the calendar at the bottom of Tool A, indicate the following by marking an "x" in the appropriate boxes:

• Each month when the client visited the ART Unit at your health facility, as indicated on their yellow ART card, MasterCard, or computer system
• Each month when the client visited the health facility TO PICK UP ART MEDICINE. If a month was a skip month (when the client was given multiple months' ART and therefore did not travel back to the facility), please write this in the notes section.
• The ONE month when the client received their referral to/from the health facility

Frequently Asked Questions: If there were key questions that came up during the data collector training, or that you think are likely to be recurring issues, it may be helpful to list them here.


tool g: data collection progress tracker

This is an adaptable tool that projects can provide to data collection teams to track the progress of data collection taking place during assessments. The tool can be edited to include multiple tools and data collection locations. Depending on the scale of the assessment, projects could format this tool to be used at the individual or data collection team level.

Instructions

1. Project staff should complete the information in rows A-C before sharing this tool with the data collection team. The numbers in row C should sum to the total provided in row B;

2. The lead data collector should complete row D at the end of each day of data collection, recording the total number of tools completed at each facility over the course of data collection to date;

3. The lead data collector should complete row E at the end of each day of data collection, recording the total number of tools that cannot be completed at each facility over the course of data collection to date;

4. Row F has been programmed to calculate automatically from the numbers input into rows C through E (row F = row C - row D - row E);

5. For each form recorded as "unable to be completed" in row E, the lead data collector should provide an explanation in row G for why that form could not be completed, along with the ID number for that client.


A. Assessment Title: Insert assessment title here

B. Forms to be completed: List the names of the forms to be completed here

Progress Tracking

The figures below are for the Example Facility; the same Tool 1/Tool 2 columns repeat for Facilities A-E.

Row | Tool 1 | Tool 2
C. Total number of forms to be completed, per tool per facility | 32 | 20
D. Number of forms completed to date, per tool per facility | 8 | 13
E. Number of forms that cannot be completed, per tool per facility (explain in Notes) | 3 | 2
F. Number of forms remaining to complete, per tool per facility | 21 | 5

G. Notes: Provide a space here for data collectors to explain why individual forms could not be completed, e.g., Client 001 at Facility A had transferred to another facility and their records could not be found for Tool 1 or Tool 2.


tool h: data analysis template

This is an Excel-based data entry template that the LIFT project utilized when completing data entry for its ART Check assessments. If projects are considering creating a similar tool for their own assessment, they are encouraged to map each question or data point on the assessment tool(s) to a column in the Excel workbook. Additionally, if projects intend to analyze assessment data using statistical software, they should determine how data points or variables must be recorded and formatted for that software, and adapt this tool accordingly. Projects are also recommended to use Excel's 'Data Validation' feature, which allows the data entry template creator to restrict the type of data or values that can be entered into a cell, reducing the potential for data entry errors.

Due to its size, the tool is housed on the LIFT II website and can be found HERE, or by clicking on the title of the tool.
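To illustrate the kind of formatting and validation check this implies, a minimal Python (pandas) sketch is shown below; the file name, the column names, and the permitted values are hypothetical stand-ins for a project's own entry template.

    import pandas as pd

    # Hypothetical export of the Excel entry template.
    df = pd.read_excel("art_check_entry.xlsx")

    # Each check mirrors a Data Validation rule in the entry template,
    # catching values that slipped through or were edited after entry.
    bad_sex = ~df["sex"].isin(["M", "F"])                    # choice list
    bad_age = (df["age"] < 0) | (df["age"] > 100)            # numeric range
    bad_default = ~df["ever_defaulted"].isin(["Yes", "No"])  # Yes/No field

    flagged = df[bad_sex | bad_age | bad_default]
    print(f"{len(flagged)} rows need review before analysis")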


tool i: ART adherence and retention check tool

This is an English-language version of the clinical record check tool utilized in LIFT's ART Check assessments. In each country, this tool was adapted based on the alignment of data points with existing tools and records, local terminology and definitions, and lessons learned from previous assessments.


tool j: VFS tool

This is the pre-post survey utilized in LIFT's Vulnerability and Food Security Study in Lesotho. It comprises Household Hunger Scale and adapted Progress out of Poverty Index questions, in addition to project-specific questions, to gather information on referral client households' vulnerability and food security.

When you have successfully contacted a client, before asking them the survey questions, first read the following so that they understand why you are contacting them:

READ: Hello, I [name of data collector] work for [name of organization] in support of the LIFT Project. I am contacting you now because you were given a referral from one service provider to another in [District] at some point in the past. If you recall, the LIFT project helped organizations create the referral systems in 2014 and 2015. We are now following up to understand what you liked and did not like about the referral, as well as what things have changed in your lives since the referral. The questions we ask may sound familiar, which is alright. The same questions were asked of you a year ago, but now we are asking them again to see what has changed.

Key information you should know:

• The only risk you have from participating in this interview is that some questions may make you feel uncomfortable.
• You will not receive any direct benefits from the interview, but we hope that your honest feedback can help us improve services in your community.
• You are free to decide if you want to be in this interview, and you may stop it at any time.
• We will not collect your name, and we will protect the information you provide.
• You will not receive any payment for this study.
• You may refuse to answer any question.


If you choose to participate in the interview, our conversation should last no longer than 30 minutes.

1. Do you agree to participate in this short interview?
a. Yes
b. No

2. How many members does your household have?
a. Ten or more
b. Six to nine
c. Three to five
d. Two
e. One

3. The roof of your main dwelling is predominantly made of what material?
a. Grass
b. Anything besides grass

4. What is your main source of lighting fuel?
a. Collected firewood, grass, or other
b. Paraffin
c. Purchased firewood, electricity, gas, battery/dry cell (torch), or candles

5. What kind of toilet facility does your household use?
a. Flush toilet
b. Ventilated, improved latrine
c. Traditional latrine with roof
d. Traditional latrine without roof
e. None


6. What is your main source of cooking fuel?
a. Collected firewood from forest reserve, crop residue, sawdust, animal waste, or other
b. Collected firewood from unfarmed areas of community
c. Collected firewood from own woodlot, community woodlot, or other places
d. Purchased firewood
e. Paraffin, charcoal, gas, or electricity

7. Does the household own a tape player, CD player, radio or mp3 player?
a. No
b. Yes

8. How many separate rooms do the members of your household occupy, not including bathrooms, toilets, storerooms or garages?
a. One
b. Two
c. Three
d. Four
e. Five or more

9. Can the female head of household/spouse read a one-page letter (in any language)?
a. No
b. Yes

10. How many household members worked (as their main activity) in the past seven days as a farmer?
a. None
b. One
c. Two
d. Three
e. Four or more


11. Does the household own any irons (for pressing clothes)?
a. No
b. Yes

LIFT Score (will be calculated based on responses to 2-11 above)

12. In the past [4 weeks/30 days], was there ever no food to eat of any kind in your house because of lack of resources to get food? If yes, how often did this happen?
a. No
b. Yes -- Rarely (1-2 times)
c. Yes -- Sometimes (3-10 times)
d. Yes -- Often (more than 10 times)

13. In the past [4 weeks/30 days], did you or any household member go to sleep at night hungry because there was not enough food? If yes, how often did this happen?
a. No
b. Yes -- Rarely (1-2 times)
c. Yes -- Sometimes (3-10 times)
d. Yes -- Often (more than 10 times)

14. In the past [4 weeks/30 days], did you or any household member go a whole day and night without eating at all because there was not enough food? If yes, how often did this happen?
a. No
b. Yes -- Rarely (1-2 times)
c. Yes -- Sometimes (3-10 times)
d. Yes -- Often (more than 10 times)

HHS Score (will be calculated based on responses to 12-14 above)
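For reference, the Household Hunger Scale is conventionally scored by recoding each of the three items as 0 (No), 1 (Rarely or Sometimes), or 2 (Often) and summing them into a 0-6 score. A minimal Python sketch of that calculation, using the response letters from questions 12-14 above:

    # Conventional HHS recoding: No = 0; Rarely or Sometimes = 1; Often = 2.
    RECODE = {"a": 0, "b": 1, "c": 1, "d": 2}

    def hhs_score(q12: str, q13: str, q14: str) -> int:
        """Sum the three recoded items into a 0-6 household hunger score."""
        return RECODE[q12] + RECODE[q13] + RECODE[q14]

    # Example: responses c, a, b recode to 1 + 0 + 1, giving a score of 2.
    print(hhs_score("c", "a", "b"))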

15. At the time of your referral, did you understand how a referral could help you?
a. No
b. Yes


16. Did you find the information provided to you when you were referred useful?
a. No
b. Yes

17. Did you know where to go?
a. No
b. Yes

18. From your perspective, have you noticed a change in your or your family's health or nutrition that you think is a result of participation in the referral system?
a. No
b. Yes

19. Do you believe that referral participation will help improve your or your family's health or nutrition over time?
a. No
b. Yes

20. What kinds of improvements do you anticipate? (Select all that apply)
a. Extra money to pay for transportation to hospital
b. Accessing services at the health facility for the first time
c. Better able to attend all scheduled appointments
d. Extra money to pay for medication
e. Better able to take medications all the time
f. Ability to buy better quality food
g. Ability to buy more food
h. Re-enrolling in care after being lost to follow-up (LTFU)
i. Receiving regular nutritional counselling and/or support
j. Other [INSERT RESPONSE]

21. Would you go through the referral process and get another referral for another service again if you had the opportunity?
a. No
b. Yes


22. Please tell me anything else you would like to note that has not been covered in this survey. This can be about how you feel referral participation has impacted you or will impact you, the referral process, what you learned, or what LIFT should change to improve the system, etc.
a. [INSERT RESPONSE]

Thank you for taking the time and allowing us to ask you these questions! Let me reiterate that we have not collected any information that can be used to identify you. Your responses are anonymous and safe. If you have any questions, feel free to ask me now.


www.theliftproject.org