For the last couple of months, we’ve partnered with the Fleet Science Center to evaluate and understand the impacts of their 52 Weeks of Science (WoS) program in the Barrio Logan district in San Diego.
52 WoS is a community-based STEM program focused on underserved communities. The program relies on a multitude of volunteers and community partners, or STEM Champions, to keep it going. Building the program with local champions ensures that the programming maintains the voice and interests of the community.
The Barrio Logan community program is completing two years of programming, and this week we are working with Fleet to evaluate its success as perceived by the community of stakeholders. The end of a program year is a great time to evaluate success and identify potential improvements. As we move the program forward into other communities, our lessons learned will help us build on our successes.
There’s no doubt about it: evaluation can be tricky, as there are several considerations the evaluation team needs to think through before investing the time and energy of multiple stakeholders.
Here are five techniques we used to get a better understanding of what success looks like in a community-based program:
1) Know the audience being evaluated – It was important to get a clear idea of who we were speaking to in these stakeholder conversations. Who were the community leaders? Who were the STEM Champions? Whose voices needed to be present? We ultimately decided that there were a variety of stakeholders requiring separate conversations and analyses: parents, students, teachers, business partners, and community leaders. We also examined the audience as we designed the survey, asking questions like: How much time do we have before we lose their attention and authenticity? Are we choosing the correct wording? Are we asking leading questions that may affect authenticity? All of these are considerations as you develop the baseline for evaluation.
2) Think through the format – Format matters, from the space, to the facilitator, right down to the snacks. We are designing an environment to solicit the most genuine responses that will improve the next community program. We decided it was important to collect both quantitative and qualitative data points, with the understanding that the discussion format would add context to the survey responses we collected. It’s also important to discuss whether there might be language barriers and, if so, whether translators will be needed. In our case, we needed bilingual speakers to make sure the community felt as comfortable as possible sharing.
3) How will we know we are successful? To understand this, we need to examine our specific goals. In this case, since Fleet was heading up the program, we examined Fleet’s initial goals in creating it. We designed the survey and discussion to elicit responses that would provide insight into these higher-order goals. The goals of the 52 WoS program are to increase residents’ access to STEM-related activities in their community and to increase their interest in science as part of their daily lives. We ultimately asked: if there were one major takeaway, what would we want to get out of this? Our response: “We want more STEM Champions continuing to create program content.” So we designed the discussion format and survey accordingly.
4) How will the information be collected? Assign roles for the meeting. Who will facilitate or co-facilitate, and are they best equipped for that position? Who will take notes? Will you record? If you decide to record, you have the benefit of being 100% focused on observation and natural conversation without worrying about taking notes. However, you need to make sure you get appropriate sign-off from everyone in the room. Keep in mind that recording, especially video recording, can make people feel less comfortable sharing their opinions, so be aware of that potential side effect. We opted for verbal consent captured on an audio recording.
5) How will we improve the collection methodology to build on results? It might seem too early to think this through – we haven’t even launched the first evaluation! But there are opportunities we know we would take advantage of in the future as we continue our survey development and data collection. For instance, in our case we wanted to understand improvement patterns in a specific community. With Barrio Logan, we did not start formal, planned longitudinal evaluation until the end of the two-year program. There were several starts and stops with evaluation methodologies, which prevented us from understanding the kinds of impacts we hoped to measure. This is an area we would improve the next time around: before a program formally begins, we would gather a cross section of stakeholders, survey them to establish benchmarks, and obtain a community gauge. Then, once the program is complete, we would collect the same data and compare the before and after responses. Due to our timing, we weren’t able to add this layer to our results, but we won’t miss that piece next time around; it will be built into our timetable.
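To make the before-and-after idea concrete, here is a minimal sketch in Python of how baseline and follow-up survey responses could be compared. All of the data, the 1–5 rating scale, and the stakeholder group names here are hypothetical, invented purely for illustration; a real analysis would use the actual survey instrument and collected responses.

```python
# Hypothetical sketch: comparing baseline (pre-program) and follow-up
# (post-program) survey responses on an assumed 1-5 rating scale.
# All numbers and group names below are invented for illustration.

from statistics import mean

# Baseline responses collected before the program begins,
# keyed by stakeholder group.
baseline = {
    "parents":  [2, 3, 2, 4, 3],
    "students": [3, 3, 4, 2, 3],
}

# The same questions asked again after the program ends.
followup = {
    "parents":  [4, 4, 3, 5, 4],
    "students": [4, 5, 4, 3, 4],
}

def average_change(before, after):
    """Mean follow-up score minus mean baseline score."""
    return mean(after) - mean(before)

for group in baseline:
    change = average_change(baseline[group], followup[group])
    print(f"{group}: average change of {change:+.2f} points")
```

Even a simple summary like this, run per stakeholder group, would give the before/after picture we were missing in Barrio Logan.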
Ultimately, one of the most important outcomes of the program would be to create STEM Champions in the community who will continue the mission of 52 Weeks of Science. The longevity of the program, spearheaded by community leaders, is an important component. We hope to see this result across all stakeholder groups.