
Publication - Guidance

The 5 Step Approach to Evaluation: Designing and Evaluating Behaviour Change Interventions

Published: 31 May 2016
ISBN:
9781786522429

Updated, easy-to-use guidance describing how to use the 5 Step approach to design and evaluate behaviour change interventions.

43 page PDF

1.5MB


Contents
The 5 Step Approach to Evaluation: Designing and Evaluating Behaviour Change Interventions
Background: The tricky business of assessing impact in a messy world

Background: The tricky business of assessing impact in a messy world

How was this pack developed?

This pack has been developed by Scottish Government researchers in Justice Analytical Services in collaboration with stakeholders in other organisations, with the aim of promoting and supporting effective evaluation. Individuals in the following organisations provided invaluable feedback on multiple drafts of the guidance:

  • The Robertson Trust
  • Evaluation Support Scotland
  • Coalition of Care and Support Providers in Scotland

A Scottish approach to evaluation

Co-production
Our approach to evaluation enables funders and service providers to work together in pursuit of their shared aims - to improve outcomes for service users and communities. The 5-step approach also engages with service users' views as a resource for evaluation rather than seeing users solely as an object to be measured.

Asset-based
The 5-step approach focuses on ways in which evaluation is possible for services of any size, rather than expecting all services to use an experimental evaluation method which may not be appropriate or possible for smaller, community-based organisations. The 5-step approach allows even the smallest service to demonstrate the contribution they are making to change.

An Improvement Culture
Evaluation enables improvement and even the most successful service can always be developed further. Furthermore, with the 5-step approach, evaluation is an on-going process, not something to be saved for last. This means that services can be continually improved in order to best meet the needs of their users.

How do you know if you are making a real difference to users (making an impact)?

It's not easy to find out whether you're making a real difference to people, especially in the chaotic real world. There are hundreds of variables that can affect people's attitudes, motivations and behaviour. So how can you tell if your project is making any difference?

Researchers and scientists generally agree that the BEST way to determine whether your project or service has made a difference is to use a randomised control trial (RCT), sometimes referred to as an "impact evaluation". However, these are not easy to do in practice, especially in a complex social setting.

What are impact evaluations / RCTs?

What is an impact evaluation or RCT?
An impact evaluation or RCT is much like a scientific experiment. One group (the 'treatment' group) experiences your intervention and one group (the 'control' group) does not. You then compare the outcomes for both groups to see whether your intervention made any difference. In other words, if you really want to know whether you've made a difference, you need to know what would have happened if the same (or similar) users DIDN'T receive your service. This enables you to ATTRIBUTE changes in users to YOUR service rather than to other factors such as motivation, another programme or family influences.

The control group must either be selected completely at random or otherwise be very carefully selected to have very similar characteristics. Otherwise, you cannot be sure that any apparent differences in results at the end are not simply the result of differences that were already there at the start, and therefore nothing to do with your intervention.
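To make the comparison logic concrete, here is a minimal sketch in Python. It is not part of the guidance, and every number in it is invented purely for illustration: it randomly splits a set of hypothetical service users into treatment and control groups, simulates outcomes, and compares the group averages.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical data: 200 service users with an invented baseline score.
users = [random.gauss(50, 10) for _ in range(200)]

# Step 1: assign each user to treatment or control completely at random.
random.shuffle(users)
treatment, control = users[:100], users[100:]

# Step 2: simulate outcomes. Both groups drift a little over time (other
# factors), but the treatment group also gets an assumed +5 point effect
# from the service.
treated_outcomes = [score + random.gauss(2, 5) + 5 for score in treatment]
control_outcomes = [score + random.gauss(2, 5) for score in control]

# Step 3: compare the group averages. Because assignment was random, the
# groups started out similar on average, so the difference in outcomes
# estimates the effect that can be ATTRIBUTED to the service itself.
def mean(values):
    return sum(values) / len(values)

effect = mean(treated_outcomes) - mean(control_outcomes)
print(f"Estimated effect of the service: {effect:.1f} points")
```

Random assignment is what does the work here: it makes the two groups comparable in every respect except the intervention, which is exactly what lets you attribute the difference to your service rather than to pre-existing differences.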

The difficulty with RCTs...

You need a large sample
RCTs are only meaningful IF there is a large control group with very SIMILAR CHARACTERISTICS to the users (the counterfactual). Scotland is a relatively small nation, and behaviour change projects often target small or localised populations, which makes RCTs hard to carry out.
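As a rough illustration of why group size matters, the precision of an estimated effect improves only with the square root of the number of users per group. The sketch below assumes individual outcome scores with a standard deviation of 10 points (an invented figure, not one from this guidance) and uses the standard formula for the standard error of a difference between two group means:

```python
import math

sd = 10  # assumed spread of individual outcome scores (illustrative)

# Standard error of the difference between two group means,
# each of size n: sd * sqrt(1/n + 1/n)
for n in (10, 50, 500):
    se = sd * math.sqrt(2 / n)
    print(f"n = {n:>3} per group: effect measurable to about ±{2 * se:.1f} points")
```

With only 10 users per group, a true effect of a few points is indistinguishable from noise; it takes hundreds of users per group before modest effects can be detected, which is why small, localised projects struggle to run meaningful RCTs.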

They can be expensive
Funding may be a barrier, since RCTs can be expensive to run and therefore not cost-effective as a means of evaluating small-scale projects.

They can't tell you everything
RCTs can't tell you WHY something is effective (or ineffective), so learning anything about HOW a project worked is tricky using this method.

Do impact evaluations even ask the right questions? Contribution not attribution

Example - contribution to achieving outcomes

Behaviour change is complex, and you can rarely make a long-lasting social change on your own. Say you want to design an intervention to increase the number of families who recycle. You quickly realise that to achieve this long-lasting change in behaviour you need to work collaboratively with partners - local communities, funders, environmental specialists, a marketing firm, supermarkets and schools. The question then becomes: if we do achieve a change in behaviour, which one of us is responsible? The answer, of course, is that all of you have a distinctive role in contributing towards achieving the outcome. So shouldn't any evaluation of YOUR service assess the extent of YOUR contribution to achieving the outcomes? Impact evaluations (RCTs) put all the pressure on your service to prove that you've improved recycling, rather than assessing the contribution you are making.

An alternative to RCTs

A "middle ground" approach
Rather than carrying out a small RCT, which might be impractical and would only deliver meaningless results, we recommend that small-scale project organisers use the 5-step approach to evaluation. This is summarised in the following slides and detailed in the remainder of this pack.

This approach to evaluation is practical for projects of any size, but it does rely on providers having a clear sense of what they're hoping to achieve and how they're going to get there - a theory of change. For this reason, use of the 5-step approach must begin at the planning stage.

What is evaluation really for?

Although evaluation requires the use of techniques and tools, bear in mind that its overall purpose is to help you (re)design services, ask questions, gather evidence, interpret that evidence, communicate important information about your service and make informed decisions. In this sense, the ability to ask relevant questions, and to communicate the answers clearly at the right time to the right people, are key skills in making evaluation useful.
