
Implementation Guide

 

You can download a full version of this guide, or you can access individual sections below.


Introduction
What’s the purpose of this guide?
 
This guide offers practical, evidence-based advice on how to implement evidence-based care in general practice.
 
Clinical research continually produces new evidence that can improve patient and population outcomes. Yet such evidence does not reliably find its way into everyday patient care.  There are well-documented variations in the delivery of evidence-based care which cannot be easily explained away by differences in patient populations (e.g. deprivation levels).
 
NICE guidance promotes treatments of proven benefit and discourages treatments of less value to patients and health services. However, as you already know, getting evidence into practice is generally easier said than done within the everyday constraints and challenges of general practice.
 
What’s our rationale?
 
Like it or not, a lot of good quality research shows that most interventions to change clinical practice have modest effects. However, repeated small changes can make a big difference. We can make a significant contribution to improving population healthcare and health by:
  • general practices combining their efforts
  • focusing attention on ‘high impact’ clinical priorities
  • choosing priorities underpinned by a sound evidence base
  • targeting areas with clear scope for improvement
It is entirely feasible to achieve major impacts by using existing quality improvement resources effectively and through targeted, cumulative improvements.
 
Who is this guide for?
 
This guide is mainly for people leading improvement across small to large groups of general practices. However, some content may be flexible enough to inform both national initiatives and improvements within single general practices.
 
How can you use this guide?
 
The first, and only, rule of this guide is that there are no rules on how to use it. If you are planning a big improvement across lots of general practices and have sufficient time and resources, you could use this as a step-by-step guide. However, realistically, you will be working within a tight time frame and with limited support. So, you might prefer to jump straight to making a change. This could involve, for example, adapting some of our illustrative audit and feedback resources. We don’t claim that this is a comprehensive guide. Where possible, we have included links to supporting online resources.
 
Who developed this guide?
 
This guide is based upon a major research programme, Action to Support Practices Implementing Research Evidence (ASPIRE). The research was led by the University of Leeds and brought together collaborators including the West Yorkshire clinical commissioning groups, patients and the public, and representatives from the National Institute for Health and Care Excellence (NICE). Over 200 general practices from West Yorkshire took part in the research programme.
 
This study was funded by the National Institute for Health Research (NIHR) [Programme Grants for Applied Research (Grant Reference Number RP-PG-1209-10040)]. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
 
 
 
Contents
 
What do we want to achieve?
Setting priorities for change
 
How well are we doing?
Measuring adherence to recommended practice
 
Why aren’t we achieving our goals?
Understanding gaps between current and recommended practice
 
Which approaches can help us change?
Evidence-based approaches to improve practice
 
What action can we take?
Developing a plan of action
 
How can we put our plan into action?
Preparing for the launch
 
How will we know we have improved?
Evaluating impact
 
Examples
Examples of audit and feedback reports and extra resources
 
Ten top tips
1. There is seldom one simple explanation for any gap between evidence and practice.  Obstacles to (and enablers of) change operate at one or more of system, team, professional and patient levels.  Plans to tackle evidence-practice gaps usually need coordinated efforts across different levels.
2. It is unlikely that you will be able to address all barriers.  Focus on those you judge most important and are able to change.
3. Lack of knowledge is seldom the main explanation for evidence-practice gaps.  Consider wider factors such as ‘know-how’ (practical knowledge and skills), recall (being prompted to do the right thing at the right time for the right patient), and having sufficient time and resources (of course).
4. Consider what you can stop doing in order to make more time for the evidence-based practices and actions you really, really want to do.
5. Consider the effectiveness and possible unintended consequences when choosing an approach to change practice.  For example, computerised prompts can help change specific behaviours (such as prescribing or test ordering) and are more likely to work if users need to provide a justification for over-riding recommendations.  But people will circumvent them if they are too intrusive or disruptive.
6. Effective action plans turn long-term goals into small manageable steps; these work best if they are specific, realistic and to the point.
7. Set realistic goals for change which are genuinely achievable, not fanciful.
8. Ensure that any goals for change are within the control of the people who need to change.  That sounds rather obvious but is easily overlooked.
9. Focus on making changes to clinical practice which are supported by the strongest clinical evidence.
10. Making continuous and cumulative improvements in evidence-based care can deliver major improvements in population health.
What do we want to achieve?
 
This is about…
Setting priorities for change
Applicable to level(s)
Single practice      
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Management
Likely difficulty
Likely time commitment
Do…
Apply some criteria to justify your choice
Don’t…
Get hijacked by strong views or vested interests
Illustrations
Developing ‘high impact’ guideline-based quality indicators for UK primary care: This is an example from research which illustrates a structured consensus process.
Helpful resources
How NICE prioritises quality standards.
A checklist for prioritising clinical practice recommendations for action.
 
 
Identifying priorities
Many clinical guidelines are potentially relevant to general practice.  Some guidelines address relatively specialist topics but can include one to two key recommendations where actions in general practice play a critical role in patient care pathways.
 
However, there are competing priorities for action, over and above your existing service and clinical commitments.  You need to make choices within finite time and resources.
 
Criteria for identifying priorities include:
  • Strength of evidence underpinning clinical practice recommendations
  • Burden of illness, e.g. prevalence, severity, costs
  • Fit with explicit national or local priorities and initiatives
  • Potential for significant patient benefit, e.g. longevity, quality of life, safety of care
  • Scope for improvement upon current levels of adherence, e.g. from perceived current low levels or unacceptably high variations
  • Feasibility of measuring progress, e.g. from routinely collected clinical data
  • Extent to which following a recommendation is directly within the control of individual practice teams or professionals
  • Likelihood of achieving cost savings without patient harm
You might have little or no choice over what to focus on!  There is no shortage of national and local priorities. You will struggle to address all of these at the same time, so you could focus, say, on a limited number of clinical practice recommendations selected from one clinical guideline.
Consider:
  • Who needs to be involved as you will require different perspectives and skills, e.g. clinicians, practice support staff, patients and carers, commissioning, public health
  • How high the stakes are.  A one-off, informal meeting will usually suffice for a general practice.  Larger organisations or networks, which need to be accountable and transparent, might consider using a structured consensus process.
How well are we doing?
 
This is about…
Measuring adherence to recommended practice
Applicable to level(s)
Single practice      
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Administrative     Data collection and analysis
Likely difficulty
Likely time commitment
Do…
Think about what routinely recorded clinical data might already be available
Don’t…
Attempt to construct overly complicated indicators
Illustrations
From research studies:
Variations in achievement of evidence-based, high-impact quality indicators in general practice.
Prescribed opioids in primary care.
High risk prescribing in primary care patients particularly vulnerable to adverse drug events.
Helpful resources
 
 
What is already known about variations in practice?
There are well recognised variations in clinical practice across all healthcare sectors.  The size of these variations can only partly be accounted for by factors such as demographics and case mix.  Where patients are not receiving recommended care and analyses have accounted for differences in patient populations, such variations can be considered inappropriate.
 
We found that the likelihood of patients receiving recommended care or achieving recommended outcomes depended upon which general practice they were registered at.1  For processes of care, there were seven-fold differences in the likelihood of high-risk prescribing (typically involving NSAIDs) and two-fold differences in the likelihood of being prescribed recommended treatment for the secondary prevention of myocardial infarction.  For recommended outcomes, there was a ten-fold difference in the likelihood of achieving blood pressure control in hypertension and a four-fold difference in diabetes control (combined blood pressure, HbA1c and cholesterol targets).  Many of these variations could not be explained away by demographic differences in patient populations (e.g. age, social deprivation) and are likely to be related to differences in clinical behaviour.
 
Some analyses can also highlight particular ‘at risk’ patient groups.  For example, we found that both long-term and strong opioid prescribing were more likely in women aged over 65 years (compared with women aged under 50 years), in patients with missed appointments, and in those with increasing levels of polypharmacy.2
 
Indicator development
 
Consider:
  • Whether there are existing indicators or sets of routinely collected data which will be sufficient for your needs, e.g. prescribing indicators, Quality and Outcome Framework (QOF) data.
  • The advantages and disadvantages of measuring processes or outcomes of care (Box 1).
  • The advantages and disadvantages of single or composite (combined) indicators (Box 2).
  • How reliably and accurately routinely collected data are coded.  Some types of data are generally coded reliably in general practice (e.g. prescribing, certain diagnostic tests, diagnoses for patients on disease registers) whilst others are not (e.g. referrals, diagnoses not systematically recorded for disease registers).
Steps in development include:
  • Defining the targeted patient (‘denominator’) population (e.g. all coded type 2 diabetes) or particular sub-populations (e.g. coded type 2 diabetes with recorded poorer control).
  • Defining those (‘numerator’) patients with evidence of a recommended clinical intervention offered or received or meeting defined treatment targets (a minimal worked sketch of this logic follows the list).
  • Deciding whether to collect data to understand any likely variations in practice, e.g. patient demographics, co-morbidities.
  • Developing or adapting existing searches of electronic patient data.
  • Piloting and refining searches prior to large scale data collection.
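As a minimal illustration of the denominator/numerator logic above, the sketch below counts adherence for a hypothetical set of coded records. The field names (has_t2dm, hba1c, systolic_bp) and the treatment thresholds are invented for illustration and would need to be replaced by your own clinical system searches and indicator definitions.

```python
# Minimal sketch: computing an adherence indicator from coded patient records.
# All field names and thresholds below are hypothetical illustrations only.

patients = [
    {"patient_id": 1, "has_t2dm": True, "hba1c": 52, "systolic_bp": 128},
    {"patient_id": 2, "has_t2dm": True, "hba1c": 75, "systolic_bp": 150},
    {"patient_id": 3, "has_t2dm": False, "hba1c": None, "systolic_bp": 122},
]

# Denominator: the targeted population, e.g. all patients with coded type 2 diabetes.
denominator = [p for p in patients if p["has_t2dm"]]

# Numerator: those meeting the defined treatment targets (illustrative thresholds).
numerator = [p for p in denominator
             if p["hba1c"] is not None
             and p["hba1c"] <= 58 and p["systolic_bp"] <= 140]

adherence = len(numerator) / len(denominator) if denominator else None
print(f"Indicator: {len(numerator)}/{len(denominator)} = {adherence:.0%}")
```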
Data collection
 
Consider:
  • How to include all or sample general practices to ensure the data apply to ‘typical’ practices which have not self-selected (see the sampling sketch after this list).
  • Seeking approval, if required, from general practices for data collection.
  • Adherence to information governance requirements.
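Where including every practice is impractical, a simple random sample guards against relying only on self-selecting volunteer practices. A minimal sketch, using an invented practice list:

```python
import random

# Hypothetical list of all eligible practices in the network.
all_practices = [f"Practice {i}" for i in range(1, 41)]

random.seed(1)  # fix the seed so the sample can be documented and reproduced
sample = random.sample(all_practices, k=10)  # selected at random, not self-selected
print(sample)
```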
Analysis and interpretation
 
What to look for:
  • Overall level of adherence for each indicator; if high there may be no need for further action except for positive feedback; if low or lower than expected, consider further action if room for improvement exists.
  • Patterns of variation between general practices, e.g. can substantial variation confidently be explained away by known differences in practice population demographics?
  • Patterns of variation between any patient sub-groups, e.g. age, gender, co-morbidities.
  • Likely chance variation, especially when dealing with smaller numbers of practices or patients.
  • Unexpected findings to prompt consideration and investigation of plausible alternative explanations, e.g. errors in searches, limitations of coding.
The analysis of variations can help focus action, e.g. on specific groups of general practices or groups of patients.
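One rough way to judge whether between-practice variation is more than chance, given the considerations above, is to compare each practice’s adherence rate with approximate 95% binomial limits around the overall rate for a practice of that size (the logic behind a funnel plot). The practice names and counts below are hypothetical.

```python
import math

# Hypothetical (numerator, denominator) counts per practice.
practices = {"Practice A": (40, 50), "Practice B": (30, 60), "Practice C": (9, 45)}

total_num = sum(n for n, d in practices.values())
total_den = sum(d for n, d in practices.values())
overall = total_num / total_den

for name, (num, den) in practices.items():
    rate = num / den
    # Approximate 95% control limits around the overall rate for a practice of this size.
    se = math.sqrt(overall * (1 - overall) / den)
    lower, upper = overall - 1.96 * se, overall + 1.96 * se
    flag = "below" if rate < lower else "above" if rate > upper else "within"
    print(f"{name}: {rate:.0%} ({flag} expected range {lower:.0%}-{upper:.0%})")
```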
 
Box 1. Considerations in measuring processes and outcomes of care.3

Process of care indicators:
  • Useful if there is strong evidence predicting better outcomes if the process of care is followed, e.g. reduced stroke risk for anticoagulation in atrial fibrillation
  • Less useful if patient outcomes are not tightly linked to processes of care, e.g. screening or case-finding for depression4
  • Measurement can help understand variations in patient outcomes, e.g. higher levels of asthma exacerbations might be linked to poorer provision of patient asthma plans5
  • Often available as routinely collected data, e.g. prescribing, test ordering

Outcome indicators:
  • Can assess what is ultimately important to patients, e.g. quality of life
  • Factors other than the healthcare provided may influence outcomes, e.g. co-morbidities
  • May need statistical adjustment for casemix to enable fair comparisons between practices
  • Intermediate outcomes can help assess responses to treatment, e.g. blood pressure control
 
Box 2. Considerations in using single or composite (combined) indicators.6

Single indicators:
  • Often simpler to apply, e.g. proportion of people with diabetes whose blood pressure is adequately controlled
  • Allow detection of specific aspects of care that need attention, e.g. albumin:creatinine ratios in chronic kidney disease

Composite indicators:
  • Can summarise one or more key aspects of quality of care to help rapid interpretation of indicators, e.g. proportion of people with diabetes who receive all recommended processes of care
  • Composite indicators are only as good as their underlying single indicators
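To make the contrast in Box 2 concrete, the sketch below computes a single indicator and an ‘all-or-none’ composite for the same hypothetical patients; the three process-of-care flags are invented examples rather than a defined indicator set.

```python
# Hypothetical records: whether each recommended process of care was delivered.
patients = [
    {"bp_checked": True,  "hba1c_checked": True,  "foot_exam": True},
    {"bp_checked": True,  "hba1c_checked": False, "foot_exam": True},
    {"bp_checked": True,  "hba1c_checked": True,  "foot_exam": False},
]

# Single indicator: proportion with blood pressure checked.
single = sum(p["bp_checked"] for p in patients) / len(patients)

# Composite (all-or-none): proportion receiving every recommended process.
composite = sum(all(p.values()) for p in patients) / len(patients)

print(f"Single indicator (BP checked): {single:.0%}")
print(f"Composite indicator (all processes): {composite:.0%}")
```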
Why aren’t we achieving our goals?
 
This is about…
Understanding gaps between current and recommended practice
Applicable to level(s)
Single practice      
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Administrative      Management
Likely difficulty
Likely time commitment
Do…
Consider the range of individual, team and organisational level factors that can influence clinical care
Focus on identifying the most important factors that you can change
Don’t…
Assume that lack of knowledge is the main explanation for evidence-practice gaps
Illustrations
From research studies:
A qualitative study to understand adherence to multiple evidence-based indicators in primary care.
A qualitative study to understand long-term opioid prescribing for non-cancer pain in primary care.
A systematic review of barriers to effective management of type 2 diabetes in primary care.
Helpful resources
There are many frameworks which set out various ways of grouping factors that influence practice. Some are rather detailed but this sample illustrates a range of approaches:
A checklist for identifying determinants of practice (see Table 1).
 
 
Barriers and enablers
Every clinician and manager knows that changing clinical practice is seldom easy. Change generally takes time, effort and supporting resources. In planning change, you may find it useful to identify and think about barriers to and enablers of change. Then you can consider which of these are important and are feasible to address, or too difficult within limited time and resources. You may decide that the effort-reward ratio is too unfavourable to prioritise a given change and therefore choose to tackle a different priority. (Luckily, there is no shortage of priorities to address in primary care.)
 
Frameworks to help understand behaviour and guide behaviour change
Frameworks can act as prompts to identify influences on clinical practice. They can help you consider factors that you might otherwise not have thought of. There is quite a variety of frameworks and they all tend to overlap. There is no evidence that one framework is any better than another. The choice largely comes down to whichever you find easiest or most intuitive to use.
 
Table 1 is adapted from an interview study of primary care staff, which used one framework to understand barriers to and enablers of adherence to a set of evidence-based indicators.7 The Theoretical Domains Framework is useful because it focuses on beliefs, attitudes and so forth that you can potentially change.8
 
 
Methods to explore barriers and enablers
There are a number of ways to explore influences on practice. How intensive this needs to be inevitably depends on judgment and the resources available. For example, you may already have a good working knowledge of factors that influence the care of common clinical priorities, such as diabetes or hypertension. However, you might still find it useful to set out the most important enablers of and barriers to recommended practice before deciding what action to take. The key is to ensure that those targeted by any planned change are involved and agree upon the main barriers and enablers. Table 2 summarises some approaches you could consider.
 
 
Making sense of barriers and enablers
Consider prioritising for action:
  • Those which are most important, e.g. frequently encountered, pivotal steps in patient pathways
  • Those with strongest consensus amongst team members
  • Those most amenable to change, e.g. staff beliefs and processes of care as opposed to structures and wider environmental factors
  • Those which can be readily linked to one or more approaches to change practice
Which approaches can help us change?
 
This is about…
Evidence-based approaches to improve practice
Applicable to level(s)
Single practice      
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Management
Likely difficulty
Likely time commitment
Do…
Accept that most approaches to improving practice have modest effects which can accumulate, if used consistently over time, to produce a significant impact
Don’t…
Waste time on complicated and costly improvement fads 
Illustrations
Education, informatics, and financial incentives for safer prescribing.
Pharmacist-led feedback, educational outreach support for safer prescribing.
Feedback to high antibiotic prescribers.
Brief educational messages for diabetes.
A review of audit and feedback.
Helpful resources
Recommendations on audit and feedback.
Examples of audit and feedback.
 
 
A range of approaches can support changing practice. You will be familiar with most if you are on the receiving end of initiatives to improve practice. They include approaches like education, computerised prompts and reminders, and financial incentives.
 
Considerations in selecting approaches:
  • Strength of evidence. Some approaches have a stronger evidence-base than others.  For example, audit and feedback has been tested in randomised trials many times across a range of settings and clinical topics. Whilst there are no guarantees it will work consistently for a given problem, there are ways to improve the chances of success – such as providing repeated rather than one-off feedback and including explicit action plans with feedback. In contrast, there is a much more limited evidence base on financial incentives, suggesting that you should use this approach with caution.
  • The nature of the implementation problem. You need to apply some judgment in deciding which improvement approaches may work best for a given clinical problem. For example, computerised prompts can reduce errors of omission in prescribing decisions. However, they are less likely to work when tackling more complex issues, such as counselling patients or reducing emergency readmissions.
  • Fit with available resources and skills. You need to make the best use of existing resources, such as practice pharmacists in auditing prescribing and educating the team.
  • Unintended consequences. Some approaches may not work as intended or may even have undesired side effects. For example, feedback on clinical performance showing a large gap between actual and recommended practice can be demotivating, and prescribing safety prompts which appear on-screen after you have made a clinical decision and counselled a patient on treatment can derail a consultation.
  • The balance of costs and benefits. The effects of interventions may not always pay for themselves. For example, for educational outreach visits to reduce prescribing, the costs of educator and staff participation time may eclipse any savings. However, if the same educational outreach approach was even only modestly successful in improving your practice’s use of clinically effective strategies to promote weight loss or reduce smoking, the longer term population health benefits could outweigh the upfront costs (a rough worked example follows this list).
  • Single versus combined approaches. It is often possible to combine different approaches to improve practice, for example, educational outreach with audit and feedback. In some cases this can make sense if the approaches are complementary, e.g. if the outreach meetings aim to reinforce action planning following feedback.  However, combined approaches can be more costly. Furthermore, there is no convincing evidence that combined approaches are more effective than single approaches – although this may be because evaluators have ‘thrown in the kitchen sink’ in efforts to address more difficult improvement problems.
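As a purely illustrative back-of-envelope calculation of the cost-benefit balance described above (every figure below is invented and should be replaced with local estimates):

```python
# Hypothetical, illustrative figures only - replace with your own local estimates.
cost_per_outreach_visit = 250        # educator time, travel, materials (GBP)
staff_time_cost_per_visit = 150      # clinician time attending (GBP)
number_of_practices = 20

total_cost = number_of_practices * (cost_per_outreach_visit + staff_time_cost_per_visit)

saving_per_prescription_changed = 15  # GBP
prescriptions_changed = 400           # across all practices over a year

total_saving = prescriptions_changed * saving_per_prescription_changed

print(f"Total cost: {total_cost} GBP, direct prescribing savings: {total_saving} GBP")
# Direct savings alone may not cover costs; longer-term health gains may still justify them.
```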
Table 3 summarises some key evidence and considerations in choosing improvement approaches. Table 4 sets out 15 suggestions for effective feedback based upon evidence synthesis and interviews with experts.9 Approaches to improve practice generally have modest impacts. Such modest impacts might be worthwhile because:
  • Effects are in the same range as, if not better than, those of many recommended clinical treatments.
  • Effects can be worthwhile in relation to costs of improvement approaches.
  • Effects of improvement approaches can be complementary and cumulative over time.
 
What action can we take?
 
This is about…
Developing a plan of action
Applicable to level(s)
  
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical         Management
Likely difficulty
Likely time commitment
Do…
Think logically about how you might link different barriers to and enablers of best practice to improvement approaches
Don’t…
Make this more complicated than you really need to
Illustrations
This is how we developed an approach to change practice. It is fairly complex because it was used for research purposes.
This study is from secondary care but shows how an approach to change practice was developed based upon barriers and enablers.
Helpful resources
This is a list of 93 behaviour change techniques.20 We do not suggest that you learn it! However, you might wish to look through it if you are looking for new ways to help change the behaviour of health professionals (or patients).
 
 
Earlier sections addressed ‘Why aren’t we achieving our goals?’ and ‘Which approaches can help us change?’ This section brings these together and considers how to develop an improvement package comprising one or more approaches to improvement based upon identified barriers and enablers and available resources.
 
Considering behaviour change techniques
Approaches to change practice can work in a number of different ways. For example, educational outreach visits can include various combinations of ‘active ingredients:’ being delivered by a credible source; shaping knowledge about a clinical topic; highlighting the positive (and negative) consequences of following a guideline recommendation (or not); providing comparative feedback on clinical practice; and developing an action plan for the practice.
 
These active ingredients, or behaviour change techniques,20 can be useful in designing interventions:
  • Developing approaches to improve practice can sometimes become complicated and challenging within limited timelines and resources. Behaviour change techniques offer a checklist of active ingredients to consider.
  • Behaviour change techniques can be linked to different barriers and enablers. For example, limited abilities to recall all relevant clinical information when making a prescribing decision can be helped by prompts and reminders. There is no rule book (yet) on how to match behaviour change techniques to barriers and enablers; some degree of judgment is usually needed.
  • Different improvement approaches can include similar behaviour change techniques.  For example, audit and feedback can also include all or most of those mentioned earlier for educational outreach visits. This is useful to bear in mind if resources are available for audit and feedback but not for educational outreach visits. Therefore, it may be possible to deliver similar active ingredients but within different improvement approaches. However, if you are using more than one improvement approach (e.g. both educational outreach visits and audit and feedback), some degree of duplication may help reinforce any critical behaviour change techniques.
Building approaches to improve practice
Key considerations in developing approaches to improve practice:
  • Known evidence of effectiveness of the improvement approach (e.g. educational meetings), including what factors are likely to make them more, or less, effective
  • Known barriers to and enablers of improvement
  • Available resources and skills (e.g. routinely collected data for audit and feedback, skills in designing computerised prompts)
  • Likely feasibility – how confident you are that the approach will work as intended
Table 5 illustrates how to combine the various components of an improvement approach.
 
Table 5. Illustrative components of an improvement approach

Barriers and enablers, with behaviour change techniques to address them (deliverable through evidence-based approaches such as audit and feedback, educational outreach visits and computer prompts):
  • Limited awareness or recall of treatment goals: inform and prompt recall of clinical goals
  • Limited awareness of clinical benefit: emphasise positive consequences of changing clinical practice (and negative consequences of not doing so)
  • Limited insight into scope for improving practice: comparative feedback
  • Inability to recall all relevant clinical information at time of consultation: triggered prompts and reminders
  • Risk of good intentions to change fading: action planning
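If it helps to keep a Table 5-style plan in a reusable form, it can be recorded as a simple data structure. The sketch below is one hypothetical way of doing so; the approaches attached to each technique are illustrative choices, not fixed mappings.

```python
# A minimal, hypothetical way of recording a Table 5-style improvement plan.
improvement_plan = [
    {
        "barrier": "Limited awareness or recall of treatment goals",
        "technique": "Inform and prompt recall of clinical goals",
        "approaches": ["Audit and feedback", "Educational outreach visits"],
    },
    {
        "barrier": "Inability to recall all relevant clinical information at consultation",
        "technique": "Triggered prompts and reminders",
        "approaches": ["Computer prompts"],
    },
]

for item in improvement_plan:
    print(f"{item['barrier']} -> {item['technique']} via {', '.join(item['approaches'])}")
```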
 
Piloting and refining your improvement approach
An improvement approach may look good on paper but one or more rounds of piloting and refinement are likely to help before it goes ‘live.’ This is particularly important if you are scaling up for a network of practices.
 
Suggestions for pilot work:
  • Meet with the practice staff your improvement approach is designed to help, in a group or individually. Ask them to think aloud as they work through any instructions, processes or materials. Let them know that you particularly want to hear about problems that they might think you don’t want to hear! Ask if they can suggest any solutions to these problems.
  • Then probe people on feasibility (how likely is it to work in real life, seriously?), coherence (does the overall improvement approach make sense to them?), comprehensiveness (are all of the most important barriers addressed?) and fit (are there opportunities to embed the intervention within existing routines and resources?)
  • Make adjustments as you proceed. If this is important enough, it is worth investing time in further meetings to get it right.
  • Pilot the whole improvement approach or its separate components (e.g. computerised prompts) in a small number of practices. Again, actively probe for issues, especially around feasibility and fit with routines and resources.
How can we put our plan into action?
 
This is about…
Preparing for the launch
Applicable to level(s)
 
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Administrative      Management
Likely difficulty
Likely time commitment
Do…
Consider whether you have the commitment and resources to embed changes within your practice or network
Don’t…
Choose a launch period that clashes with competing initiatives or known busy periods
 
Preparing for roll out
Some practical considerations:
  • Timing, to avoid clashing with (or even to align with) any other major initiatives or known peak periods (e.g. winter flu)
  • Whether to go for a phased or ‘big bang’ start; the former is suitable if you have limited resources and allows more room for continuing refinement following feedback, whilst the latter allows clarity around a launch date
  • Whether this is a one-off campaign or you can embed and sustain your improvement approach
Fidelity checklist
Fidelity is the degree to which a plan is followed as intended. One common reason for improvement approaches not achieving hoped for impacts is loss of fidelity. There are different ways to look at fidelity, which can be considered throughout the planning stages and subsequent evaluation.
  • Is the approach designed as intended, i.e. to address all or most major known barriers by embedding relevant behaviour change techniques?
  • Are those responsible for delivery sufficiently trained, e.g. are staff delivering educational outreach visits trained to a sufficient standard, or are those people nominated as local opinion leaders ‘on message?’
  • Are arrangements in place to ensure that the improvement approach can be delivered on time to all practices and staff targeted?
  • Do targeted practices and staff actually receive all components of the improvement approach?
  • Do targeted practices and staff actually take any subsequent action prompted or supported by the improvement approach?
It is highly unlikely that all of these will go as planned. It is useful, however, to build in planned time for adjustments and running repairs to the design and roll out of the improvement approach.
How will we know we have improved?
 
This is about…
Evaluating impact
Applicable to level(s)
Single practice      
Network of practices       
Regional or national networks
Likely skills and resources needed
Clinical      Management     Data collection and analysis
Likely difficulty
Likely time commitment
Do…
Remember that cumulative, small changes can make a big difference
Don’t…
Over-complicate your evaluation
Illustrations
Here is a simple audit of asthma plans carried out at one practice in Leeds.
Please send us any examples of quality improvement projects and clinical audits you would like to share.
If you are interested in research and want to see what a rigorous, ‘real world’ randomised trial looks like, see the randomised trial findings from ASPIRE.21
General practices were randomly assigned to receive an implementation package targeting diabetes control or risky prescribing (Trial 1); blood pressure control or anticoagulation in atrial fibrillation (Trial 2). The main outcomes were respectively: achievement of all recommended levels of haemoglobin A1c, BP, and cholesterol; risky prescribing levels; achievement of recommended BP; and anticoagulation prescribing.
The implementation package produced a significant clinically and cost-effective reduction in one target only: risky prescribing. We concluded that an adaptable implementation package was cost-effective for targeting prescribing behaviours within the control of clinicians, but not for more complex behaviours that also required patient engagement. Given known associations between risky prescribing combinations and increased morbidity, mortality, and health service use, a scaled-up risky prescribing implementation package could have an important population impact.
Helpful resources
 
What is the aim of evaluation?
The main aim of an evaluation is to find out whether the improvement approach achieved its intended goals. This will involve measuring any change in the processes of care, in patient outcomes, or both. There are also opportunities to address other evaluation questions, such as why the approach worked (or not) and how it can be improved or adapted for another problem.
 
Whilst this manual may also be of interest to those planning improvements as part of a research project, with the aim of generating new, generalisable knowledge, it does not cover research designs. There are resources available to understand and guide research evaluations.3 22-26
 
Did the improvement approach work?
Essentially, this involves conducting an audit cycle to assess any differences in care or outcomes before and after the improvement approach. Considerations include:
  • Agreeing key outcomes in advance
  • Using the same method to collect and analyse data before and after implementation of the improvement approach
  • Timing of data collection to capture any short term or longer term impacts – processes of care are more likely to change before patient outcomes (a simple before-and-after comparison is sketched below)
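A minimal before-and-after comparison might look like the sketch below. The counts are hypothetical, and the normal-approximation confidence interval is only a rough guide, especially with small numbers.

```python
import math

# Hypothetical audit counts: patients receiving recommended care before and after.
before_num, before_den = 120, 300   # 40% adherence at baseline
after_num, after_den = 165, 310     # ~53% adherence after the improvement approach

p_before = before_num / before_den
p_after = after_num / after_den
diff = p_after - p_before

# Approximate 95% confidence interval for the difference in proportions.
se = math.sqrt(p_before * (1 - p_before) / before_den
               + p_after * (1 - p_after) / after_den)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Adherence: {p_before:.0%} -> {p_after:.0%} "
      f"(change {diff:+.0%}, 95% CI {lower:+.0%} to {upper:+.0%})")
```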
No battle plan ever survives contact with the enemy.
Helmuth von Moltke the Elder
 
Why did the improvement approach work (or not)?
There are many possible reasons why improvement approaches don’t work as planned. These include:
  • Unrealistic expectations about predicted or hoped for effects
  • Loss of fidelity (‘How can we put our plan into action?’)
  • Timing of data collection – did you miss any transient but important early effects, or is it too early to detect any important longer term impacts
  • The data collected did not capture effects (although beware of rationalising too much after the event)
There are a number of ways to get an indication of why an improvement approach did or did not work as planned. These are similar to methods outlined earlier in ‘Why aren’t we achieving our goals?’
 
Deciding the next step
If the improvement approach largely worked as planned, you will need to decide whether to continue or repeat it in order to maintain your achievement. Having learned from this experience, you may also wish to move on and select the next priority to tackle…
Examples
Below you can find resources and examples of audit and feedback reports that were produced as part of the following trials:
(Copyright: The University of Leeds)​
 
Action to Support Practices Implementing Research Evidence (ASPIRE)
The aim of ASPIRE was "to evaluate the effectiveness and cost-effectiveness of a multifaceted, adaptable intervention package to implement four targeted, high impact recommendations in general practice."
 
Atrial fibrillation: Report 1, Report 2, Report 3, Report 4, Action Plan
 
Extra resources: 
The Campaign to Reduce Opioid Prescribing (CROP)
CROP was designed to assist general practices with opioid deprescribing in an effort to improve patient care and safety across the region. 
 
CROP: Report 1
References
1. Willis TA, West R, Rushforth B, et al. Variations in achievement of evidence-based, high-impact quality indicators in general practice: An observational study. PLoS ONE 2017;12(7):e0177949.
 
2. Foy R, Leaman B, McCrorie C, et al. Prescribed opioids in primary care: cross-sectional and longitudinal analyses of influence of patient and practice characteristics. BMJ open 2016;6(5):e010276.
 
3. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care 2008;17:170-77.
 
4. Gilbody S, Sheldon T, House A. Screening and case-finding instruments for depression: a meta-analysis. CMAJ 2008;178(8):997-1003.
 
5. Powell H, Gibson PG. Options for self-management education for adults with asthma. The Cochrane database of systematic reviews 2003(1):Cd004107.
 
6. Guthrie B. Measuring the quality of healthcare systems using composites. BMJ 2008;337:a639.
 
7. Lawton R, Heyhoe J, Louch G, et al. Using the Theoretical Domains Framework (TDF) to understand adherence to multiple evidence-based indicators in primary care: a qualitative study. Implementation Science 2016;11:113.

8. Atkins L, Francis J, Islam R, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implementation Science 2017;12(1):77.
 
9. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice Feedback Interventions: 15 Suggestions for Optimizing Effectiveness. Ann Intern Med 2016.
 
10. Giguere A, Legare F, Grimshaw J, et al. Printed educational materials: effects on professional practice and healthcare outcomes. The Cochrane database of systematic reviews 2012;10:Cd004398.
 
11. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. The Cochrane database of systematic reviews 2009(2):Cd003030.
 
12. O'Brien M, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2007(4):Art. No.: CD000409. DOI: 10.1002/14651858.CD000409.pub2.
 
13. Flodgren G, O'Brien MA, Parmelli E, et al. Local opinion leaders: effects on professional practice and healthcare outcomes. The Cochrane database of systematic reviews 2019;6:Cd000125.
 
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. The Cochrane database of systematic reviews 2012;6:Cd000259.
 
15. Shojania K, Jennings A, Mayhew A, et al. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database of Systematic Reviews 2009(3):CD001096. DOI: 10.1002/14651858.CD001096.pub2.
 
16. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ 2013;346:f657.
 
17. Flodgren G, Eccles MP, Shepperd S, et al. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. The Cochrane database of systematic reviews 2011(7):Cd009255.
 
18. Fonhus MS, Dalsbo TK, Johansen M, et al. Patient-mediated interventions to improve professional practice. The Cochrane database of systematic reviews 2018;9:Cd012472.
 
19. Khalil H, Bell B, Chambers H, et al. Professional, structural and organisational interventions in primary care for reducing medication errors. The Cochrane database of systematic reviews 2017;10:Cd003942.
 
20. Michie S, Richardson M, Johnston M, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med 2013;46(1):81-95.
 
21. Willis TA, Collinson M, Glidewell L, et al. An adaptable implementation package targeting evidence-based indicators in primary care: A pragmatic cluster-randomised evaluation. PLOS Medicine 2020;17(2):e1003045.
 
22. Eccles M, Grimshaw JM, Campbell M, et al. Research designs for studies evaluating the effectiveness of change and quality improvement strategies. Qual Saf Health Care 2003;12:47-52.
 
23. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care 2008;17:158-62.
 
24. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care 2008;17:163-69.
 
25. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: a framework for study design and interpretation. 4. One size does not fit all. Qual Saf Health Care 2008;17:178-81.
 
26. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ 2017;356:i6795.
