Formative Evaluation

Experience from Canadian municipalities that are further along in the AFC process suggests that the early stages of carrying out your action plan will involve various stakeholders implementing many small-scale programs or projects, rather than a single ambitious intervention that requires a complex network of collaborations. To increase your chances of success, communicate emerging challenges and lessons learned among stakeholders.

A well-organized formative evaluation is one way to make this communication possible. More specifically, it is critical to consider the logical progression of initiatives as outlined in your action plan. This allows you to set up the evaluation of a current program or project with the organization or individual responsible for the next initiative in the sequence. By doing so, you can address issues that are important to all parties and, more importantly, uncover the lessons that can guide outcomes that better serve the needs of older adults.

Formative evaluations typically use the following questions and data sources; review them to help focus your efforts. For more in-depth guidance on conducting a formative evaluation, read Chapter 5, ‘Formative and Process Evaluation,’ in Royse et al., 2009, and Chapter 9, ‘Implementation Evaluation: What Happened in the Program,’ in Patton, 2008.

Formative Evaluation Questions

  1. What do various stakeholders — participants, staff, administrators, funders — consider important to the program? How similar or different are those perceptions? What is the basis for and what are the implications of different perceptions?
  2. What is the participant and staff feedback about program processes? What is working well and not working so well, from their perspectives?
  3. What challenges and barriers have emerged as the program has been implemented? How have staff responded to these challenges and barriers? What ‘bugs’ do you need to work out?
  4. What original assumptions have been proven true? What assumptions appear problematic? How accurate has the original needs assessment been? To what extent, if at all, are participants’ ‘actual’ needs different from what you planned?
  5. What do participants actually do in the program? What are their primary activities (in detail)? What do they experience? To what extent are those experiences yielding the immediate results or short-term outcomes you desired? Why or why not? In essence, does the model appear to be working?
  6. What do participants like and dislike? Do they know what they are supposed to accomplish as participants? Do they ‘buy into’ the program’s goals and intended outcomes?
  7. How well are staff functioning together? Do they know about and agree on what outcomes they are aiming for? To what extent do they agree with the program’s goals and intended outcomes? What are their perceptions of participants? Of administrators? Of their own roles and effectiveness?
  8. What has changed from the original design and why? Why are adaptations from the original design being made? Who needs to ‘approve’ such changes? How are these changes being documented and reflected on, if at all?
  9. What monitoring system has been established to assess implementation on an ongoing basis, and how is it being used? [23]

Formative Evaluation Data Sources

  1. Client socio-demographic characteristics
  2. Client service usage (type and amount of services clients received)
  3. Referral sources (referral and co-ordinating agency perspectives of program strengths and weaknesses)
  4. Staff characteristics:
    • Professional degrees
    • Experience
    • Socio-demographics
    • Staff perceptions of program strengths and weaknesses
  5. Program activities:
    • Special events and meetings
    • Staff meetings
    • Training
    • Program protocols, procedures and training manuals
    • Any information to answer the questions: ‘What happens to clients?’ and ‘What is the program?’
    • Observing program activities: is the program being implemented as it is supposed to be?
  6. Minutes of board, staff and committee meetings
  7. Correspondence and internal memos about the project
  8. Client satisfaction data; client reports of program strengths, weaknesses and barriers
  9. Financial data; program costs and expenditures [24]

Summative Evaluation

As alterations to the built environment are completed and programs begin to stabilize, evaluate the actual outcomes of your efforts and compare them to the goals you outlined in your action plan. A summative evaluation may be required as part of an external funding process, or you may need one to determine whether an ongoing program warrants increased local funding.

Chapter 7, ‘Focusing on Outcomes: Beyond the Goals Clarification Game,’ in Patton, 2008, describes in full detail the outcome-based evaluation that this section outlines briefly. This section describes the six elements that are central to this evaluation framework as they relate to the AFC context.

Target Group: The target group for the evaluation includes participants in a program or consumers of a service or, more generally, individuals likely to benefit from an initiative. In the context of AFC planning, the target group will vary depending on the nature of the intervention. For programs that have been created or adapted, the target group will be program participants, while the target group for an intervention in the built environment will be the users of the space. Don’t over-generalize when defining a target group: for evaluative purposes, all members of the group must share a desired outcome, which may not be the case if you define the group too broadly.

Desired Outcomes: The desired outcomes are the changes you anticipate will occur in the target group as a result of an intervention. Depending on the nature of the AFC program or project, these could include improved health, increased access to public transit or reduced feelings of loneliness and isolation. Although they may not predict the outcomes exactly, the explicit goals you developed help define what these outcomes are. Likewise, because the program or project you are evaluating was the focus of a strategy in your action plan, think about the answer to the following question: ‘What positive implications were expected as a result of implementing this strategy?’

Outcome Indicators: An outcome indicator is a sign of a desired outcome that you can measure in a meaningful way to determine whether a particular goal is being attained. When selecting indicators, the best place to start is your needs assessment: drawing indicators from it ensures continuity between the baseline information you have already developed and the evaluation of your programs and projects. You won’t need or want to re-examine every question in your needs assessment; use only the questions that relate to the particular intervention. For example, an indicator of a health-related outcome of a new program could be an improvement in older adults’ ability to complete their daily living activities independently.

Data Collection Plan: Developing a plan for collecting data on your indicators involves many of the same decisions you made when you carried out your needs assessment (Who will collect the data? How will it be collected? What are the sample and sampling technique?). The difference at this stage is that you should involve the evaluation’s intended user in decisions about the data collection plan. Doing so fosters ownership and credibility: there is little point in collecting and reporting information in a format the intended user does not fully understand or, worse, does not trust.

Description of Results Use: Deciding in advance how you will use the evaluation results increases the usefulness of your evaluation. Work with the intended user to imagine how the results would be used in different scenarios. You can then anticipate weaknesses in your evaluation design and adjust it, if necessary. Make sure the intended user focuses on the implications of the results and the actions they would take in the immediate future. For example, if an evaluation determined that a mail-out program was highly successful at informing older adults in a particular neighbourhood about upcoming community events, the intended user might consider expanding the program and begin looking for resources to increase its capacity.

Performance Targets: Performance targets specify the level of attainment you expect on the measurements you have selected. For example, a community might aspire to the following target: by 2013, 40 per cent of older adults will be regular users of public transit. Here the 40 per cent figure is completely arbitrary, but when you design performance targets properly, this should never be the case.

Arbitrary targets may promote underachievement or, worse, simply be unachievable. Using past performance (in this case, the results of your needs assessment) is the best way to develop targets that represent meaningful change and are realistic, given existing resources. For example, if your needs assessment found that 25 per cent of older adults regularly used public transit, a target of 30 per cent within three years would represent a meaningful yet attainable improvement. In the absence of prior local performance measures, look at standards used in other jurisdictions, although local circumstances will always influence the usefulness of models adopted from elsewhere.


Footnotes

  • [23] Royse, D., Thyer, B. and Padgett, D. (2009). Program Evaluation: An Introduction, 5th edition. Belmont, CA: Wadsworth, Cengage Learning.
  • [24] Ibid.