Overview

Program evaluation allows organizations to continually refine their interventions into high-quality, high-impact programs that improve the lives of participants and followers, and it supports the most effective allocation of resources, including time and money. However, conversations about evaluation techniques, such as the balance between summative and formative assessment, the full range of evaluation methods, and creativity in the evaluation process, take on a different tenor when it comes to program evaluation during a crisis.

Nevertheless, even when a crisis requires a move away from In Real Life (IRL) delivery of comprehensive sexuality education (CSE) programs, evaluation of that process is still necessary; it will simply look different from a program evaluation implemented in a non-crisis environment.

The most important starting point is space for critique, revision of approach, and suggestions. Because crisis situations by their nature change quickly, the response needs to be similarly nimble. This means openness to creativity, to concerns, and to new approaches to everything from the program itself to its evaluation.

The goals of any evaluation or analysis during a crisis should be immediate. Rather than asking how effective the program is overall (a fruitless question, since crisis situations change too quickly and the next crisis will inevitably differ from this one), the goal is to inform changes for the next iteration of crisis-level support and intervention. Below are a series of ways to incorporate this approach, rather than a strict program evaluation, into a crisis environment.

What to Evaluate

Setting aside the needs of changing human and technological landscapes, a few considerations transcend these transitory issues, which can become overwhelming during a crisis. Hiring monitoring and evaluation (M&E) professionals can help immensely with this process when finances and timing allow.

Participant Engagement

The degree to which participants engage and respond is a strong measure of an intervention’s effectiveness. Particularly during a crisis, when attention is at a premium, participants choosing to allocate some of theirs to your program is a clear sign that you are doing something right.

On a synchronous platform, participation can be considered in a variety of ways. When everyone’s cameras are turned on, it is visually clear whether they are engaged in thought even if they are not speaking. In evaluation mode, make sure that someone other than the facilitator is taking notes on participants’ level of engagement. Asking participants to engage in specific ways (typing something into the chat box, answering a poll, contributing to a shared document, etc.) can show whether participants are actively engaged even if their cameras are turned off.

Assessing participant engagement on asynchronous Learning Management System (LMS) platforms is straightforward because it is not possible to participate passively. Read participants’ responses and contributions to make sure they offer meaningful, thoughtful content that is clearly their own. It is also critical for the staff member or volunteer to model that active participation: if they do not set a standard of replying and engaging in active dialogue on the platform, no one else will either.

Social Media

Figuring out how and what to evaluate on social media is a constantly evolving process that professionals are hired to do all the time. However, if an organization wants to dive in without the financial expenditure of hiring a social media marketing expert, here are some resources to get started:

Using these resources as a guide, along with the other guidance here on how to assess programs during a crisis situation, staff will be able to design a social media campaign assessment tailored to their own needs.

Pre- and Post-Tests

When participants are engaged for long enough (which may only be an hour!), it becomes possible to run very short pre- and post-tests that allow for a more standard form of program analysis. Use very quick paired questions like:

  • What do you want to learn today? / What did you learn today?
  • What technology do you have available to you right now? / Was the technology today easy to use?
  • How did you find out about this program? / Rate the program on a scale of 1 – 5 (where 1 is the worst and 5 is the best)

These three question pairs can offer clear direction on the immediate content and on how to make it more targeted, accessible and findable. They can be used in both synchronous and LMS programs. Polling (via the synchronous platform itself or an external tool) is a great way to run this process. Consider carefully whether you want participants’ answers to be anonymous and/or viewable by others; anonymous answers can be made viewable or not, and there are pros and cons to both approaches.
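For teams tallying poll results by hand, the post-session ratings can be summarized in a few lines of code. This is a minimal sketch, not tied to any particular polling platform: the `summarize_ratings` function name, the sample numbers, and the 1–5 scale are illustrative, taken from the example rating question above.

```python
from collections import Counter

def summarize_ratings(ratings):
    """Summarize 1-5 post-session ratings: count, average, distribution."""
    counts = Counter(ratings)
    average = sum(ratings) / len(ratings)
    return {
        "responses": len(ratings),
        "average": round(average, 2),
        # How many ratings fell on each point of the 1-5 scale
        "distribution": {score: counts.get(score, 0) for score in range(1, 6)},
    }

# Illustrative example: nine participants answered the rating question
print(summarize_ratings([5, 4, 4, 3, 5, 2, 4, 5, 3]))
```

Running the same summary after every session makes it easy to spot a dip in ratings immediately, which fits the crisis-mode goal of informing the next iteration rather than judging the program overall.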

These questions can also be used on social media platforms, particularly when dispersed among other content. Note, however, that the pre- and post- nature of the questions will not carry over, because it is rarely clear at what point the participants answering them joined the platform or started engaging with the campaign.

Whenever you give participants tools to provide feedback on your sessions, give them clear directions about the difference (if any) between tools for evaluating their experience of the session (e.g. post-session questionnaires) and other channels available to them for reporting a concern about their safety (e.g. an email address that links them to your organization’s safeguarding person).

Facilitator’s Journal

Keep a facilitator’s journal to note reflections on the process: concerns, points of resistance, difficulties and challenges, but also successes and achievements. It is important to record participants’ reactions, particularly observed changes in participants’ knowledge, attitudes and stances. Essentially, journal writing should concentrate on the thoughts, feelings, fears, desires and needs of both the trainer and the participants.

Sample questions to guide the trainer’s self-reflection could include:

  • Overall, how did you feel after the class/intervention?
  • What do you think went well? What are you most happy about?
  • How did participants respond when you implemented the activities?
  • Which of your participants’ positive responses or reactions made a particular impression on you?
  • Which of your participants’ negative responses or reactions made a particular impression on you?
  • What was particularly challenging for you? How did you overcome this challenge?
  • What would you do differently next time?

Kept over time, a facilitator’s journal is a particularly powerful method of formative assessment, the most appropriate kind to use during a crisis intervention. It can also serve as a wellbeing monitoring tool, helping organizations keep an eye on peer educators.

Program Review/Support by External Experts

Collaboration is a critical element of the most effective CSE implementation at any time. Because the goals during a crisis, and the approaches needed to achieve them, are ever-shifting, even experienced sexuality educators need support to achieve those goals. Outside experts in a range of fields, including fields neighbouring CSE like social media, public health and youth engagement, may all have unique and useful contributions to intervention evaluation during a crisis.

What may differ from a standard program review is that the expert(s) need to engage with the initial program design and implementation rather than merely reviewing it after the fact. This way they can offer a formative perspective rather than a merely summative one.

Post-Crisis Evaluation

After a crisis has ebbed and some normalcy has returned, assessing the organizational CSE response, both internally and externally, has great value. Evaluating the specific program, intervention, campaign or curriculum that was implemented is not critical. Instead, evaluate elements like crisis preparedness, team communication and cohesion, and the speed of creating and revising the program in response to the crisis; these elements speak to the organization’s readiness to handle the next crisis (whatever it might be) as effectively as possible.

Analysing and Using Evaluation Data

Gathering data about an intervention, or an intervention process, implemented during a local, regional or global crisis offers nothing without effective analysis and subsequent revisions based on the outcomes.

How each piece of data is analysed will depend on the kind of data. To analyse open-ended evaluation responses (as in pre- and post-tests), look for overlapping patterns or ‘themes,’ grouping replies under the same theme even when they are worded differently. For example, “it was interesting,” “I learned about new things” and “super thought provoking” could all be grouped as “Interesting.” Pay attention to content that participants mention repeatedly; if multiple people mention that they have continued to think about sexting since the intervention, that is important to note. Report only the responses and categories that appear most often; there is no need to report every single answer, especially one that stands alone. Other kinds of data, like social media metrics and program reviews from outside experts, naturally require different kinds of analysis suited to the information they provide: extensive metrics and detailed qualitative information, respectively.
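The theme-grouping step described above can be roughly approximated in code. This is a minimal sketch under a large assumption: themes are matched by hand-picked keyword lists (the `THEMES` dictionary below is illustrative, not a validated coding scheme), whereas a careful human reading will always catch phrasings a keyword match misses.

```python
from collections import Counter

# Illustrative keyword lists only; a real coding scheme is built from the data
THEMES = {
    "Interesting": ["interesting", "learned", "thought provoking", "new things"],
    "Technology issues": ["connection", "audio", "couldn't hear", "lag"],
}

def code_responses(responses):
    """Group open-ended responses under themes via keyword matching."""
    tally = Counter()
    for response in responses:
        text = response.lower()
        for theme, keywords in THEMES.items():
            # Count each response at most once per theme
            if any(keyword in text for keyword in keywords):
                tally[theme] += 1
    # Report the most frequent themes first
    return tally.most_common()

responses = [
    "it was interesting",
    "I learned about new things",
    "super thought provoking",
    "the audio kept cutting out",
]
print(code_responses(responses))
```

Even a rough tally like this makes it easy to report only the categories that appear most often, while a human reader handles the stand-alone answers.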

Evaluation data often provides specific and useful direction about what has been done well. It does not always provide insight into how to implement CSE more effectively, particularly because participants in the midst of a crisis may not be able to anticipate how they will want to engage with CSE in a different crisis situation (or even later in the current one). Therefore, rather than looking to evaluation data for specific modifications, use it to point towards possibilities. Organizational expertise, creativity and openness to rethinking, from staff, volunteers and youth alike, are what take evaluation data to the next level. This perspective is what allows the evaluation process to shape and reshape interventions to be increasingly effective.

When Not to Evaluate

It is important to note that evaluation may not always be in a CSE organization’s best interest during a crisis. When a disaster or other crisis has shifted the way an organization works, it is likely to keep shifting it relatively quickly, so the organization’s impact may depend on its ability to change how it works just as quickly. Being restricted by an evaluation protocol can be costly, both financially and in the ways it reduces effectiveness rather than allowing effectiveness to increase over time.

Instead of focusing on the rigorous evaluation methods that are usually recommended, track implementation activities, have a core set of questions that guides immediate decision-making (like the Questions to Ask Yourself at the end of each of the sections in this set of Guidelines), continually engage with learners on what is the most effective, safest way to reach them throughout the crisis, and follow their lead. For example, if you have been reaching out to participants via two social media platforms and find that one consistently results in more responses, you may shift your focus to the more active platform. (Be sure to let your users on the smaller platform know you’re moving so they can go with you!)
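Following learners’ lead across platforms can be as simple as comparing responses per outreach post. A minimal sketch of that comparison, where the `responses_per_post` function, the platform names and all the numbers are purely illustrative:

```python
def responses_per_post(platform_stats):
    """Rank platforms by average participant responses per outreach post."""
    rates = {
        platform: stats["responses"] / stats["posts"]
        for platform, stats in platform_stats.items()
    }
    # Most responsive platform first
    return sorted(rates.items(), key=lambda item: item[1], reverse=True)

# Illustrative tracking numbers for two hypothetical platforms
stats = {
    "Platform A": {"posts": 20, "responses": 240},
    "Platform B": {"posts": 18, "responses": 54},
}
print(responses_per_post(stats))
```

A lightweight running tally like this supports the quick, learner-led decisions described above without locking the team into a rigid evaluation protocol.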