Looking through childhood pictures of a clowning workshop I attended as an eight-year-old, I am flooded with strong, happy memories of our tumbling presentation, with my parents laughing in the audience. These memories resonate with me as I prepare for an upcoming youth leadership presentation, and they have me thinking about how to evaluate brief programs.
Measuring the impressions brief programs leave can be tricky. However, keeping a few considerations in mind can simplify the process of evaluating brief programs, defined here as those lasting fewer than eight hours.
Understand the potential impact
Start with a full understanding of the program’s potential. Stephanie Wilkerson and Carole Haden developed a model outlining the relationship between the outcomes of STEM (Science, Technology, Engineering and Math) out-of-school time programs and their duration. This model uses the steps in the change process to isolate the outcomes you can expect from a program of a given length. It follows other researchers (Patton, Rubin, Reisman & Clegg) who have identified awareness, interest and attitudes as the outcomes that can occur during the initial stages as people consider change.
Develop realistic measures
For an evaluation, this means that the measures we use should focus on participants’:
- Awareness of program elements
- Interest in taking action on the program topic
- Attitudes toward the issues presented in the program
Before including increases in skills or knowledge as part of the evaluation, consider the complexity of the skills your program is designed to build. Some audiences can gain significant skills and knowledge in a two-hour program, but complex skills or knowledge of broad concepts take longer to develop. Be cautious with your expectations for participants in short programs.
Choose a tool
After identifying what you want to measure, select an evaluation tool. The most common tool is a survey, and it’s popular because it’s easy to interpret, easy to report and familiar to most participants. If you use this tool, there are a few cautions:
- Don’t use surveys with children younger than eight years old. Developmentally, a survey is very difficult for this age group: they take questions very literally, they are extremely suggestible and their attention spans are limited.
- Be cautious when surveying youth younger than 11. They have a hard time interpreting questions phrased about a group, such as “Children your age are creative,” and their answers will be strongly skewed toward their own point of view. Take care to craft survey questions about their own experience. Also avoid phrasing questions as negative statements for this age group – for example, “I had never heard about the engineering process until this class.”
- Avoid double-barreled questions that ask about two things at once; for example, “This session was useful and informative.”
- Be thoughtful when selecting the Likert scale to use for your survey. For younger audiences, consider a hands-on alternative to a paper survey. Here’s how: Write one Likert scale response (a simple Likert scale is “strongly agree, agree, disagree, strongly disagree”) on each index card and display the cards on a table. Give each participant one block for each question. Post or read an evaluation question and ask participants to stack their blocks next to the card that describes their answer. Repeat for each question.
Other evaluation methods like focus groups, diaries or interviews take more time out of a brief program, but you might still use them for short programs that repeat. The Harvard Family Research Project offers some perspective on choosing tools.
Have you evaluated brief programs? How? Do they pose challenges for you in terms of outcome evaluation? Do you have any more tips to share?