Module 3 – implementation phase 3: launch and initial use
Module objectives
The purpose of this module is to guide implementers through the initial launch and use of the ChYMH Suite tools. This stage is all about problem solving – turning the plans you created in the previous module into action and making minor adjustments as needed. This phase demands a strong focus on clinical use so that staff see the value in the tools as they start using them. With this module you will:
- Learn about the “Go Live!” date, how to have a successful launch, and the role of the Implementation Team at launch time.
- Learn about the roots of staff resistance to change so you can pre-emptively address it.
- Develop an understanding of your clinical staff’s learning journey and support needs as they begin to use the ChYMH Suite in practice.
- Learn how you can apply ChYMH Suite data for clinical purposes.
- Ensure that your organization is valuing and using the planning tools presented in Module 2.
3.1 Launching tool use at your organization
Go live!
The “Go Live!” date marks the day that clinical staff begin to use one (or more) of the ChYMH Suite assessments with the children and families they support.
Celebrate it when it comes! This is the official launch day that your Implementation Team and staff have been working hard to prepare for. Make an effort to recognize this milestone.
Launch is a process, not an event. Your “Go Live!” may be a single date, but don’t judge your success by how things go on day one.
This is a vulnerable time for your organization’s implementation. Changes in processes and roles are happening at multiple levels such as assessor, leadership, and stakeholders. These changes will not occur simultaneously or evenly across the organization, which can create feelings of destabilization.
It’s so important it’s worth repeating: give it time and be patient as you move through this stage. New skills, new clinical practices, changes in organizational capacity, changes in organizational culture, as well as new team functions and processes all require time to mature and move through the “initial awkward stage.”
Tips for success
- Plan review periods. Stick to the plan in the beginning but build in review periods to engage with staff about how things are working. This will support thoughtful changes and prevent “knee-jerk” reactions.
- Make and track goals. For example, let assessors know how many assessments are expected to be completed within the first weeks.
- Integrate supervision. Incorporate reviews of completed assessments into supervision meetings between staff and their managers. This ensures assessment completion and supports assessors as they begin using assessment results in their daily work.
- Know that change happens because people make it happen. Remember that virtually every person working on the ChYMH Suite implementation initiative still has their pre-existing work to do. Ensuring people have the time and resources to learn and implement the assessment(s) with confidence will be imperative to your success.
- Ride the curve. ChYMH Suite tools have a learning curve. Leadership should be generous with assessors’ time at launch, including the time required to complete an assessment. Reassess time needs within 3 months to get a more accurate understanding of resource impacts.
Role of the implementation team
The Implementation Team has been working hard up to this moment, but it isn’t time to rest yet!
To support a successful launch, the role of the Implementation Team is to:
- Ensure all resources are in place
- Ensure staff know what to do (including timelines, expectations) and who to go to with questions and concerns
- Be available for feedback, questions, and to problem-solve issues as they come up
- Be aware of staff who are opposed to the change, and actively work to stop the spread of negativity
In essence, the Implementation Team needs to be engaged with the launch in order to steady the ship, support the crew, and, importantly, not let anyone go overboard!
Pilot or soft launch versus hard launch
A frequent consideration when launching the ChYMH Suite is whether to launch across all of the implementing programs right from the start, or to start with one program as a pilot or “soft launch”. The rationale for the soft launch is usually that it offers a “safety net”: the organization can try implementation on a smaller scale, learn from its mistakes and process issues, and then be better positioned to launch across the rest of the facility. Another version is a “stepped launch”, where implementation begins with one program, then another, and another, and so on until full implementation has been achieved.
Although there are benefits to a phased approach, in our experience a full launch at Go Live supports more successful results. Consider that a phased approach can add a significant amount of time to your implementation.
- Assessors generally experience a significant learning curve. On average, new assessors take twice as long to complete the assessments as they will once they’ve become accustomed to the tool – but this skill-building phase can easily take 2-3 months (depending on how frequently assessments are being completed).
- Time should be added between a pilot effort and a full implementation to assess the evaluation data and make revisions to the process where needed.
If a full launch occurs during the height of a soft launch’s challenges, staff may feel demoralized and anxious rather than excited and committed to the change.
Launch approach | Pros | Cons |
---|---|---|
Pilot/Soft Launch | Offers a “safety net”: the organization can try implementation on a smaller scale and learn from its mistakes and process issues before a wider rollout. | Can add significant time to the implementation; a full launch occurring during the height of the soft launch’s challenges can leave staff demoralized and anxious. |
Hard Launch | All programs move through the learning curve together, and in our experience a full launch at Go Live supports more successful results. | No small-scale trial run; process issues must be identified and resolved while all programs are live. |
3.2 Understanding and avoiding resistance to change
It can be very frustrating as an implementer to have staff resist the change that you are working so hard to bring about. Research has shown that staff resistance to change is frequently the largest obstacle that leaders face when initiating a change effort. The following table (adapted from Ryerson, 2011) links common reasons for staff resistance to change to best-practice strategies for addressing these issues.
Underlying reasons | Strategies to address resistance |
---|---|
Employees feel that they will suffer from the change. | Use a communication strategy that solicits employee input. |
The organization does not communicate expectations clearly. | Do not send mixed signals regarding change; this will increase employee distrust. |
Employees perceive more work with fewer opportunities. | Communicate a clear vision of the change and provide timely education. |
Change requires altering a long-standing habit or way of thinking. | Identify employee concerns and unresolved implementation issues. |
Leadership/staff relationships harbour unresolved resentments. | Provide staff with a timeline, a defined approach, and the expected outcome. |
Staff feel concerned about job security. | Communicate how employees will benefit from the change. |
The organization lacks adequate reward processes and resources. | Develop procedures to support and reward employees who will be negatively affected by the change. |
In our experience, individuals who directly manage and supervise assessors are best positioned to manage resistance, but only if they have been provided the proper knowledge and tools to do so. Some staff may be habitually uncomfortable with change, while others may express resistance to the ChYMH Suite in particular. Although this can be challenging to manage, it is important to make a conscious effort to provide a safe environment in which staff are comfortable expressing themselves. Using change management techniques and engaging clinical staff as champions will create conditions that allow for resistance to be expressed, addressed, and ultimately resolved.
3.3 Supporting competency development
The desired result of all of the work done during training and in the planning stage is reliable and accurate information that can be confidently used to support clinical work. This means that a lot depends on both the assessors who are collecting information and the trainers who train them.
Competency is not achieved through the single step of training. Classroom experience does not result in mastery – it prepares for the journey toward it.
Assessors
In the initial days and weeks following the “Go Live!” date, new assessors will experience a significant learning curve. Although they were warned during training that the first few assessments will take extra time and may feel awkward or cumbersome, there is a difference between knowing to expect challenges and experiencing them. Assessors will need support and reassurance as they gain experience and become accustomed to this new way of doing their work.
Trainers
New trainers will require support as they learn and gain experience delivering training. In addition to developing an understanding of the assessment(s), trainers must focus on gaining familiarity and comfort with the curriculum, managing the classroom, and “connecting the dots” for clinical staff to help them understand how the tools can support and improve service delivery at your organization.
All clinical staff
All clinical staff, whether they are assessors or not, will need to develop familiarity with the assessment results and competency in using those results to inform and support clinical work. For clinical staff who are not also assessors, this is their primary learning task. For assessors, it may come second, after they first develop skill in administering assessments.
Based on our extensive experience guiding implementation and delivering training, CPRI has developed checklists[footnote 9] to guide leaders and trainers as they support staff’s competency development.
This resource can be used during the Launch and Initial Use phase to support early competency development, and can also be used in the future to troubleshoot ongoing issues.
3.4 Clinical use and embedment
Keeping the focus on clinical usefulness
A key factor in the success of your ChYMH Suite launch is making sure that the tools get used in clinical practice. The ChYMH suite of tools was designed primarily to support treatment planning for children and youth by providing a clear understanding of their needs. It is easy to lose this focus in the details of planning and launching your implementation effort, which is why the Implementation Team and leadership must work hard to keep the focus on the clinical usefulness of the ChYMH Suite in the early days of learning and using the tools.
Remember:
- Clinical staff will value a tool that is useful to them personally over a tool that is considered primarily useful to leadership.
- Clinical staff will value a clinical tool over a tool that is “just for data”.
- Assessors who understand how the information they collect will be used and valued will in turn value accuracy in their assessments. The opposite is also true: assessors who do not see the information from their assessments getting used to help clients will not see the point in gathering accurate and quality information in their assessments.
Being explicit about your focus on the clinical use of the tools and the information they provide is the best way to ensure a strong launch that will lead to quality data and sustainable, long-term use of the ChYMH Suite.
How the suite is clinically useful
Each assessment in the ChYMH Suite has a set of embedded clinical status and outcome measures. The full assessments (i.e., the ChYMH and ChYMH-DD) have a broader range of measures designed for various clinical purposes, whereas the Screener+ has fewer embedded measures, the majority of which are designed to support initial triage and service determination decisions.
These measures fall into three types:
Measure type | Description |
---|---|
Scales | Scales from the ChYMH Suite provide a summary of a young person’s current needs. Scales are useful for measuring severity and monitoring progress. Selected scales have cut-points available. Additional scales beyond those in the interRAI standard suite were created by CPRI for use in Ontario. |
CAPs | Collaborative Action Plans (CAPs) are short, internationally peer-reviewed documents focused on areas of potential risk. Within each area of risk, guidelines and best practices that evidence has shown may be helpful for the specific needs of the client are presented. CAPs are designed for clinical staff and can support planning by identifying strategies likely to produce change in the child or youth. CPRI has developed caregiver versions of the CAPs, known as CCAPs, for use in Ontario. |
Screening algorithms | These are specifically designed to support clinical staff with decision-making at initial screening. Additional decision-support algorithms beyond those in the interRAI standard suite were created by CPRI for use in Ontario. |
For information about interRAI scales, CAPs or Screening Algorithms, go to www.interRAI.org
For information about CPRI-developed scales, CCAPs or Screening Algorithms, contact CPRI.
Overview of ChYMH suite embedded measures and decision-support tools and their clinical purposes
Name | Embedded within | Decision-support at screening | Immediate safety planning | Measuring clinical status | Flagging immediate needs | Progress monitoring | Outcome evaluation | Care planning using evidence-informed treatment guidelines |
---|---|---|---|---|---|---|---|---|
Scales | | | Limited – some scales reflect safety needs | | | | | |
CAPs | | | | | | Limited – many CAPs are not sensitive to change within treatment period | Limited – many CAPs are not sensitive to change within treatment period | |
interRAI Screening Algorithm (ChAMhPs) | | | | | | | | |
Supplementary Screening Algorithms (Risk Algorithm & Case Complexity Matrix) developed by CPRI and partners | | | | | | | | |
Using scales for clinical work
You need to know:
- Scales are primarily useful for measuring severity and progress.
- Some (but not all) scales have clinical cut-offs. Clinical cut-offs can be used to measure levels of client need based on which of four risk categories a client’s scale scores fall into (low, moderate, high, very high). The level of need a client presents with can be useful in determining the focus, frequency and intensity of intervention. For detailed information about the scales, go to www.interRAI.org
- Scales can be categorized under three types: Short-term Clinical Presentation, Long-term Clinical Presentation, and Problem Flags.
- Short-term Clinical Presentation scales have a short-term focus and work well for evaluating change. These scales focus on what the child or youth is doing.
- Long-term Clinical Presentation scales examine the child/youth’s needs more broadly and are less sensitive to change than Short-term Clinical Presentation scales, but can be used to look at change over a longer period. These scales focus on what is happening such as relationships, school involvement, and caregiver stress.
- Problem Flags are used to understand a child/youth’s functional impairment.
- interRAI’s tools are designed to be used across multiple care settings as part of an integrated system of assessment. For this reason, the child/youth assessments share some scales with tools from the adult interRAI Suite.
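The clinical cut-off logic described above can be sketched in code. This is an illustrative example only: real cut-points differ for each scale and are documented at www.interRAI.org, and the numeric thresholds below are invented for demonstration, not actual clinical values.

```python
# Illustrative sketch only: the thresholds below are invented for
# demonstration. Real interRAI scale cut-points vary by scale; consult
# interRAI.org for the actual values.

HYPOTHETICAL_CUT_POINTS = [
    (0, 2, "low"),
    (3, 5, "moderate"),
    (6, 8, "high"),
    (9, 12, "very high"),
]

def risk_category(score: int) -> str:
    """Map a raw scale score onto one of four risk categories."""
    for low, high, label in HYPOTHETICAL_CUT_POINTS:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} is outside the expected range 0-12")

print(risk_category(4))  # moderate (with these invented cut-points)
```

The category a score falls into can then help determine the focus, frequency, and intensity of intervention, as described above.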
Scales can be helpful to clinical staff by:
- Summarizing client needs: Scale results are designed to provide clinical staff with an objective overview of client needs. Scale results support the development of client-centered care plans, provide the basis for specialized assessment, and can help facilitate referrals to other services if needed.
- Measuring change over time: Because scales provide an objective measurement of a client’s current clinical status, scales can assist with understanding and tracking change in a specific area of need over the course of treatment. This can be achieved either by administering repeat assessments, or, if the client has been assessed using a full ChYMH or ChYMH-DD, by administering monitoring assessments[footnote 10] that are tailored specifically to the needs of that client.
- A caution: Pay attention to scale type! Of the three types of scales, Short-term Clinical Presentation scales are the best tool for measuring progress, as the items that feed these scales are sensitive to change over a brief period of time.
- Bringing attention to important item-level information: The scales provide overall summaries of a client’s needs, but they can also be useful for calling attention to individual item results that are relevant for a particular client. High scale scores should always be reviewed in detail to identify which items are contributing to the scale results. This review can help clinical staff better understand what areas of the particular issue need to be focused on when treatment planning.
Steps for success: scales
- Look at the results for each scale by reviewing either a graph of the scales results or the scale result itself (the number value).
- Understand what the results mean. High scale scores indicate higher needs. For scales that have clinical cut-offs, use this information to better understand the severity of the scale results.
- Understand what items within a scale contributed to each scale score. These items will help you identify specific areas to address in a care plan, and the responses for each item may help you decide how to prioritize treatment or make triage decisions.
- Compare scale scores over time. With additional full assessment or monitoring assessment results, you can compare results to see how behaviours have changed or remained the same over time. You can then understand how effective interventions have been and determine any areas where the young person needs additional support. Remember to consider how sensitive the scale is to change (for example, is it a short-term scale, a long-term scale, or a problem flag?).
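The last step above (comparing scale scores over time) can be illustrated with a small sketch. The scores and the simple first-to-last comparison are hypothetical; real progress monitoring should also consider scale type and clinical judgement, as described above.

```python
# Hypothetical sketch: classify the direction of repeated scores on a
# short-term scale, where lower scores indicate lower needs.

def trend(scores: list[int]) -> str:
    """Compare the first and most recent scores to describe overall direction."""
    if len(scores) < 2:
        return "insufficient data"
    change = scores[-1] - scores[0]
    if change < 0:
        return "improving"   # needs have decreased
    if change > 0:
        return "worsening"   # needs have increased
    return "stable"

# Example: three repeated assessments on a hypothetical short-term scale.
print(trend([8, 6, 5]))  # improving
```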
Using collaborative action plans (CAPs) for clinical work
You need to know:
CAPs are more than just a result that indicates whether the risk of a particular need is present; the result also connects clinical staff to evidence-informed strategies to support the client in that particular area of potential risk.
Each CAP covers a specific area of potential risk, and is tailored to the assessment population – for example, both the ChYMH and ChYMH-DD have a CAP that addresses educational needs, but the ChYMH version and the ChYMH-DD version of the CAP have different areas of focus.
When a CAP is “Triggered”, it means that an algorithm using items from the ChYMH or ChYMH-DD has determined that this is an area of potential risk for the child/youth, and he or she would likely benefit from guidelines in the CAP on this topic.
There are two types of CAPs:
CAPs with a single trigger level | The CAP is either triggered (the risk is present) or not triggered (the risk is not present). If the CAP is triggered, all of the CAP guidelines can be considered. |
CAPs with multiple trigger levels | Only one part of the CAP can be triggered at a time, meaning that clinical staff should focus on the specialized area of the CAP guidelines aligned with the risk identified. |
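The two trigger patterns can be represented with a small data-structure sketch. This is an assumption-laden illustration: the CAP names, trigger labels, and trigger logic below are invented for demonstration; real CAP triggers are algorithms defined by interRAI over specific assessment items.

```python
# Illustrative only: CAP names and trigger labels below are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CapResult:
    name: str
    trigger_level: Optional[str]  # None means the CAP was not triggered

    @property
    def triggered(self) -> bool:
        return self.trigger_level is not None

# Single trigger level: the CAP is simply triggered or not.
single = CapResult("Hypothetical single-level CAP", trigger_level="triggered")

# Multiple trigger levels: only one level fires at a time, pointing staff to
# the matching section of the CAP guidelines.
multi = CapResult("Hypothetical multi-level CAP",
                  trigger_level="immediate intervention")

for cap in (single, multi):
    if cap.triggered:
        print(f"{cap.name}: review guidelines for '{cap.trigger_level}'")
```

The design point to notice is that a multi-level result carries more information than triggered/not triggered: it directs staff to one specific section of the guidelines.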
CAPs can be helpful to clinical staff:
At initial assessment: CAPs are designed as clinical tools used to identify children and youth likely to benefit from evidence-based interventions. By identifying areas that may be of concern, CAPs help target efforts towards areas most likely to see a difference. CAPs support evidence-informed treatment, both by providing assessment data to inform care planning decisions, and by linking clinical staff to evidence-informed treatment guidelines.
As “Outcomes”: Although this is not the primary function of CAPs, with some additional steps some CAPs can also be used as tools to evaluate clinical change across time. There are three main cautions to be aware of:
- CAPs do not measure clinical status; they can only show whether or not a risk has been identified.
- Not all CAPs can show change. Review the CAP guidelines to understand the maximum amount of time it can take to go from “Triggered” to “Un-triggered”.
- CAPs with multiple trigger levels need to be interpreted with extra caution. Pay close attention to what the CAP results are reflecting. Changes in CAP results can be interpreted as improvement or worsening only when the different trigger levels reflect severity of need (e.g., the Harm to Others CAP, which has two trigger levels, one flagging the need for immediate intervention and another flagging need for prevention).
For detailed information about the CAPs included in the ChYMH Suite, visit www.interRAI.org
Steps for success: CAPs
- Look at the results for each CAP by reviewing which CAPs were triggered and at what level. Remember that a “triggered” result directs you to a particular area of potential risk, but some CAPs may also alert you to an elevated risk level (for example, a result may read “Triggered due to high risk of…”).
- Understand what issue each CAP addresses. To understand what each CAP addresses, read the Purpose statement at the beginning of the CAP for a quick overview. You can review the Issue section to learn more about why the matter is important to consider as part of quality service delivery, and review the Goals of Care section to understand what treatment outcomes the CAP aims to support. Reviewing the Triggers section will help you understand why the CAP was triggered for a particular client.
- Use clinical judgement and prioritize areas of focus. CAPs highlight areas of risk and offer possible (not mandatory!) guidelines for addressing the risk. Consider which CAPs may be applicable for your client based on your clinical judgement and client choice. Review the Guidelines section of each CAP and determine whether all, or just some, of the evidence-informed guidelines will be useful for your client.
- Compare CAP results over time. Some but not all CAP results can be used to understand change over time through multiple assessments. Review the limits to how much CAP triggers can change within certain timeframes to decide if it is a reasonable measure of change for your purposes.
Summary: using interRAI assessment results in clinical decision-making
Using interRAI data to understand a client’s needs
Screening algorithms
Use risks identified by the Screener+ to direct attention to immediate safety needs and inform need for safety planning.
Use urgency flags to guide triage decisions.
Use case complexity definitions to guide determination decisions (e.g., brief vs. longer-term services).
Scales
Use scales to understand the frequency and severity of a client’s mental health symptoms and behaviours.
Some scales have clinical cut-points that show if the young person is in the low, moderate, high, or very high range for that need.
Other scales are interpreted simply by understanding that higher scores = greater needs.
CAPs
Use Collaborative Action Plans (CAPs) to understand the client’s areas of potential risk or need in their day-to-day functioning.
CAPs flag risks in a particular area, such as home, school, or relationships. Clinical guidelines for working with the client are also provided.
Most CAPs only indicate that an issue is present. A select few also give an indication of severity.
Using interRAI data to understand a client’s progress
Monitoring
Use Monitoring Assessments to obtain targeted clinical status data along the treatment path.
Monitoring Assessments can be tailored to the client’s individual needs and goals by selecting the scales and CAPs that matter for each individual client.
Gathering this additional data will help you be nimble in your treatment approach.
Scales
Improvement looks like scores trending downward over time. In scales with cut-points, improvement looks like movement to less severe ranges.
Consider the different scale types. Short-term scales are sensitive to recent change. Long-term scales take time to show change, so they are not useful for short-term programs. Problem flag scales should not be used to measure change.
CAPs
Use caution when examining the results from CAPs to determine client progress. Not all CAPs are capable of showing change.
Instead, consider using your clinical judgment to determine whether the CAP guidelines have been meaningfully incorporated into treatment.
For detailed information about assessment results outlined here, visit www.interRAI.org
3.5 Using the tools in your toolbox
As leaders of a major change initiative, it can be easy to get swept up in the excitement – and also the stress – of the Launch and Initial Tool Use Phase. Remember, the Implementation Team has worked hard to create robust plans to support your organization through these initial growing pains. Resist the urge to make snap changes to the plan; instead, use the plans and resources that the team has created to guide decisions. Remember that your job is to provide support that sets the foundation for lasting, sustainable change.
Communication plan
Use this tool to launch your carefully planned communications to stakeholders, or to plan for additional communications as the need arises.
Risk management plan
Use this tool to monitor the risks that were identified during Planning. Review this plan regularly and refine it when unexpected risks arise and mitigation plans are needed.
Training plan/calendar
Use the Ongoing Training Plan and Training Calendar to ensure that your organization's training needs are being met.
Leadership activities plan
Refer back to the key leadership tasks, persons responsible, timelines, and plans for how each task will be approached.
Evaluation and monitoring plan
Use this plan to keep on track with monitoring your implementation's success to date and communicating that information to stakeholders.
Mini case scenario #3
Organization C launched the interRAI Screener+ and ChYMH earlier this week. The Screener+ was launched in the Intake department, and the ChYMH launched in several outpatient programs. The Implementation Team met to check in on what was going well, what was not, and to discuss plans for supporting staff.
At the Implementation Team meeting the Intake supervisor shared that the early feedback from the Screener+ assessors was that they were pleased to have the outcomes report to support decision-making and to “back up” their clinical recommendations, but that they had some concerns about the interviewing process. In particular, they were feeling awkward with not knowing how to phrase some of the questions needed to gather certain items, and they were worried because the interviews were taking longer than they had expected. It was noted, though, that when caregivers were asked about their experiences being interviewed the feedback was mostly positive. The team reviewed options for addressing the Intake workers’ concerns. The training lead reminded the team that assessors were told to expect that their first 5–6 interviews would take longer, and it was decided that an email reminding staff about this expectation might help settle any worries. Since the training plan had forecast that each intake assessor would likely complete 1–2 interviews per day, the Intake supervisor agreed to monitor whether interview completion times reduce over the next week or so. The Intake supervisor also mentioned that some of the assessors seemed more comfortable and had written themselves little scripts and reminders. The supervisor planned to share these during a team meeting later that day to foster collaboration and help those who are struggling.
Discussion turned to the outpatient ChYMH launch. Nine ChYMH assessments had been completed by 6 assessors so far. Again, assessors were raising concerns about “taking too long” with their first assessments. Two assessors reported that they’d had to call parents back to gather information that they had missed during the assessment interview. Some staff were suggesting that the ChYMH is “too much” and asking “why can’t we all just do the Screener+?” It was noted that caregivers had not complained about the length of the interview, however some had asked why they were being interviewed about topics other than the reason they were seeking help. Lastly, some assessors expressed difficulty with understanding how to apply outputs for their particular programs. To support assessors during these initial weeks, the team committed to:
- Reminding staff about training tips such as creating “cheat sheets” and scripts
- Drafting a scripted response that staff can use to explain to caregivers that the comprehensive nature of the assessment captures the complexity of the young people seeking support
- Emphasizing the difference between the Screener+ (service determination) and the ChYMH (supporting and measuring intervention) to staff
- Keeping outcomes and clinical use in the forefront through program-specific meetings to support consistent treatment planning and documentation.
Mini case scenario #3 reflection
Guiding questions
- Anticipating the need to support assessors in the early stages, what can your organization put in place to help?
- What action plan does your organization have if staff express concerns about completing the ChYMH Suite assessment(s)?
- How can you support staff to use results from the assessments?
- How is Organization C monitoring staff competency as they gain experience with the assessments? Is this sufficient? How will your organization approach competency monitoring in the early weeks of initial launch?
Phase 3 actions checklist
Confirm and prepare for the launch date.
- Make sure that the chosen “Go Live!” date still makes sense. Consider pressures such as reporting timelines and other initiatives
- If your organization has opted for a “soft launch”, make a plan to mitigate the risks outlined in section 3.1 “Launching Tool Use at Your Organization”.
- Have a facilitated discussion with the Implementation Team and any relevant managers/supervisors about expectations for the Launch and Initial Use phase. Use the information in section 3.1 to support this discussion.
- Celebrate the “Go Live!” day when it comes!
Establish a plan to keep clinical use in the forefront.
- Ensure the Implementation Team and clinical supervisors are knowledgeable about the ChYMH Suite clinical outputs and how they will be used in each program so they can answer staff questions.
- Determine how clinical supervisors will promote use of the ChYMH Suite outputs in different settings such as team meetings, clinical discussions, and supervision meetings.
- Consider sharing regular updates about ChYMH Suite use internally to demonstrate how the organization is using data to understand a client’s needs and progress in different areas. Make sure that tool terminology is used.
Support assessors, trainers, and other clinical staff as they practice, learn, and develop competency while using the assessment(s).
- Ensure that managers and supervisors understand the key areas of competency development for each role (assessor, trainer, non-assessing clinical staff) and the supports that staff will need.
- Encourage and reassure staff who struggle initially.
- Use the checklists in Appendix M to support supervision and troubleshooting relating to competency development.
Review and use the planning documents created during Phase 2.
- Rely on your Communication Plan to guide ongoing, two-way communication with staff and other stakeholders.
- Follow your ongoing Training Plan and Training Calendar to ensure that your organization’s training needs are being met.
- Refer back to the Leadership Activities Plan to make sure that all activities are being completed on time and in the manner that was planned out.
- Ensure that the Implementation Team is regularly checking in on the Risk Management plan.
- Refer back to the Evaluation and Monitoring Plan to ensure that data is being gathered and stored appropriately to support evaluation and monitoring.
Footnotes
- footnote[9] Back to paragraph See Appendix M to access the Competency Checklists
- footnote[10] Back to paragraph Monitoring assessments involve collecting data solely on the items that make up a scale or Collaborative Action Plan (CAP) that is relevant to a child or youth’s treatment plan. These targeted data collection tools should only be completed after an initial full assessment has identified areas of need. Availability of this feature is dependent on your software provider.