Design for complexity, using evaluative methods


How to design programs that work better in complex adaptive systems

Presented at Australasian Evaluation Society Annual Meeting, Perth, September 2016

Ann Larson, PhD, ann.larson@socialdimensions.com.au

Presented as part of a session titled:

Effective proactive evaluation: How can the evidence base influence the design of interventions?

John Owen, Ann Larson, Rick Cummings

AES Annual Conference, Perth, 2016

Outline

1. Good design principles for creating change in complex settings derived from evaluation findings

2. Reasons why so many designs do not incorporate complexity-sensitive design features

3. Examples of good and bad strategies to incorporate into designs

4. Last word on the role of evaluators

What works in creating positive change in complex settings?

Evaluations tell us what really happened. The accumulation of evaluation findings amounts to an evidence base that is more sensitive to context than research findings.

• Obtain flexible, long-term funding – because change is neither linear, continuous nor predictable

• Situate new behaviour in relevant history and in the salience of priorities and concerns – because cultures and organisations remain committed to their ways of working

• Build coalitions around a vision for change – because external shocks will require new partners to support long-term change

• Understand different actors' motivations for behaviour change: introduce accountability and incentives

• Start small, be flexible and experiment before attempting wide-scale change

• Balance the need for fidelity with opportunities for lots of local initiative to promote genuine institutionalisation

• Monitor, review and act in a timely manner to be adaptive

If it was easy, everyone would do it.

Organisational blind spots make these factors difficult to integrate into designs

• Command-and-control culture of central planning

• Structural limitations of processing and responding to large amounts of data with nuanced implications

• Epistemologies, especially in the health sector

Designing interventions to change systems is more difficult when there are profound distances between the designer and the intended beneficiaries – whether that distance is geographic, cultural, economic, religious or linguistic.

Evaluation methods can ‘translate’ beneficiaries' voices so that decision makers can understand.

And human nature is also a barrier

Some common approaches to dealing with complexity… that are not working

• Careful planning does not reduce the likelihood of programs encountering unexpected obstacles and opportunities

• Reliance on monitoring and evaluation plans with high-level annual reviews to guide program implementation and oversight does not facilitate timely adaptation when programs are not working well

• Emphasis on celebrating success rather than learning from failure makes it hard to recognise what needs to be changed

• Use of short time frames and rigid budgets to reduce risk actually makes it more difficult to achieve an outcome

And examples of good design trends, using evaluation skills and knowledge

Heavily invest in regular review, such as those employed by TAF/DFAT and USAID's active monitoring

… but may require a large investment in time and may be overly structured for simple projects or projects that do not usually encounter problems

Determine where the bottlenecks for adoption are and delegate responsibility at that level, giving lots of local autonomy, coaching and fostering self-organisation

May require tailoring approaches for different areas or different types of facilities

Learn from good pilots and use the findings to inform expansion, while continuing to provide necessary support

One such example is on the next two slides …

Traditional expansion vs staged expansion capitalising on lessons learned (Sarriot et al. 2011, Health Policy and Planning)

Payment by results gives implementing agencies and communities the responsibility to find their own solutions by providing incentives to do what they already want to do. This gives implementers the incentive to experiment until they find a successful strategy. It only works when there is untapped capacity and a meaningful outcome to be achieved

Also requires investment in verification

Modelling of likely scenarios during design and at critical junctures using:

• Human-centred design, or
• Agent-based modelling, or
• Complex system modelling

All approaches need insights from evaluation.
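As an illustration of the agent-based option above, a minimal model of how a new practice spreads through peer networks might look like the sketch below; the agent rules, thresholds and parameters are illustrative assumptions, not drawn from any particular program.

import random

# Minimal agent-based sketch: agents adopt a new practice once enough of
# their peers have adopted it. Thresholds, network size and seeding are
# assumptions chosen only to illustrate the approach.

class Agent:
    def __init__(self, threshold):
        self.threshold = threshold  # share of peers needed before adopting
        self.adopted = False

def simulate(n_agents=100, n_peers=5, seed_adopters=5, steps=20):
    random.seed(1)
    agents = [Agent(random.uniform(0.1, 0.6)) for _ in range(n_agents)]
    for a in random.sample(agents, seed_adopters):
        a.adopted = True                      # a few early adopters
    peers = {a: random.sample(agents, n_peers) for a in agents}

    for step in range(steps):
        for a in agents:
            if not a.adopted:
                share = sum(p.adopted for p in peers[a]) / n_peers
                if share >= a.threshold:
                    a.adopted = True
        print("round", step + 1, "adopters:", sum(a.adopted for a in agents))

simulate()

Even a toy model like this typically shows adoption that stalls or jumps rather than rising smoothly – the kind of non-linear behaviour that designs for complex settings need to anticipate.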

A simpler approach is to model path dependency, looking at the conditions necessary for the design to work.

Last words about the role of evaluators

"They sentenced me to twenty years of boredom / For trying to change the system from within" – Leonard Cohen

Evaluators do not need to become implementers internal to an organisation to promote designs that are responsive to complex contexts

As external actors we can point to evidence that flexible, adaptive approaches are more likely to achieve sustainable change for the better.

