Accelerating Best Practices in Peer Support Around the World

Program Development Guide

Planning Your Evaluation

In this Chapter:

1. Selecting an Evaluation Type

2. Planning Your Evaluation

3. Identifying Indicators and Measures

4. Cost-Effectiveness Analysis and Business Case

5. Data Collection

 

Planning Your Evaluation

 

Evaluation Models

 

RE-AIM Model

Key Considerations when Starting Your Evaluations
  • Make sure your organization has minimum functioning systems and infrastructure for the evaluation.
  • Provide staff training and tools they need to conduct the evaluation.
  • Use a team-based approach to prioritize improvements and implement them. Involve each staff member in the evaluation process in some way.
  • Develop and agree on how the evaluation will be implemented, who will lead the process, and how it will be started.
  • Involve participants in the evaluation process, since they may bring valuable ideas based on their experience in receiving peer support services.

The RE-AIM evaluation framework helps practitioners and interventionists understand the relative strengths and weaknesses of different approaches to health promotion, chronic disease self-management, and behavior change interventions.

The RE-AIM framework emphasizes the importance of evaluating the following five dimensions:

  • Reach to the target population;
  • Efficacy or effectiveness of the intervention;
  • Adoption by target settings or institutions;
  • Implementation, or consistency of delivery of the intervention; and
  • Maintenance of intervention effects in individuals, populations, and settings over time.

For more information, see Russell E. Glasgow’s presentation, which introduces the basics of the RE-AIM model and the key issues it addresses.

 

Logic Model

A logic model is a visual representation of how an intervention’s activities will bring about change and work towards achieving the intervention’s intended goals. Logic models are intended to provide direction and clarity by presenting the big picture of desired change, along with smaller but important details, such as program activities. The main components of a logic model include:

  • Purpose – what problems or opportunities are being addressed by the intervention?
  • Context – in what conditions or settings will the intervention take place?
  • Inputs – what types of resources are necessary to conduct the intervention, and what types of constraints exist?
  • Activities – what are the components of the intervention that will work towards the end goal?
  • Outputs – what evidence is there that the activities were performed as planned?
  • Effects/Impact – what kinds of changes are expected as a direct or indirect effect of the activities?
Figure 7. Key components of a logic model

 

PRISM Model

The PRISM model (Practical, Robust Implementation and Sustainability Model) helps to identify factors needed to successfully implement research-tested interventions in practice settings and to measure implementation success. The conceptual framework examines how the following domains interact and influence intervention effectiveness:

  • Intervention design
  • External environment
  • Organizational characteristics
  • Recipients (organizational and population based)

 

CDC Framework for Program Evaluation

CDC’s Evaluation Framework uses a set of six steps and four groups of standards to provide a systematic way to approach and answer questions about the quality, value, and importance of a public health program.

Although the evaluation steps are designed to be completed in sequence, the six connected steps can also be used together as a starting point to tailor an evaluation for a particular program. These steps are:

  1. Engage stakeholders, including program implementers and the primary users of the evaluation.
  2. Describe the program, including the need, expected effects, activities, resources, stage, context, and logic model.
  3. Focus the evaluation design, including the purpose, users, uses, questions, methods, and agreements.
  4. Gather credible evidence to strengthen evaluation judgments and the recommendations that follow (e.g., indicators, sources, quality, quantity, and logistics).
  5. Justify conclusions by linking them to the evidence gathered and judging them against agreed-upon values or standards set by the stakeholders.
  6. Ensure use and share lessons learned through these steps: design, preparation, feedback, follow-up, and dissemination.

The quality of an evaluation can be assessed against 30 standards, which are organized into the following four groups:

  1. Utility standards ensure that an evaluation will serve the information needs of intended users.
  2. Feasibility standards ensure that an evaluation will be realistic, prudent, diplomatic and frugal.
  3. Propriety standards ensure that an evaluation will be conducted legally, ethically and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.
  4. Accuracy standards ensure that an evaluation will reveal and convey technically adequate information about the features that determine the worth or merit of the program being evaluated.

Centers for Disease Control and Prevention. Program Performance and Evaluation Office (PPEO). A Framework for Program Evaluation. Available from: http://www.cdc.gov/eval/framework/

 

Quality Improvement Models

 

Rapid Cycle Improvement Model

The Rapid Cycle Improvement model (also called the Plan-Do-Study-Act (PDSA) model) is an iterative, four-stage problem-solving model used for improving a process or carrying out change. It allows program managers to test a change quickly on a small scale, see how it works, and refine the change as necessary before implementing it on a broader scale. The model includes four steps:

  • Plan – define the intervention objective, create an intervention timeline, and identify how data will be collected
  • Do – carry out the intervention, document the process and observations
  • Study – analyze data, summarize and reflect on lessons learned
  • Act – determine intervention modifications, test intervention again

 

FADE Model

The FADE model is a four-step QI cycle that includes Focus, Analyze, Develop, and Execute:

  • Focus – narrow a list of problems to one, and verify the problem to be improved
  • Analyze – collect and analyze data to establish baselines, determine “influential factors,” and identify possible solutions
  • Develop – develop action plans for improvement based on the collected data, including implementation, communication, and measuring/monitoring
  • Execute – put the plan into action on a pilot basis, and monitor its effect

 

© 2015 | Peers for Progress
