With a half dozen Collective Impact evaluations in the last year alone, thinking about the complexity inherent in evaluating Collective Impact has become second nature to me. The model’s emphasis on a shared measurement system has been both a benefit to evaluation and a hindrance. Sometimes recognizing the need for shared measurement has helped my partners value data in a way they otherwise might not have. Other times, the emphasis has fostered the perception that shared measurement is all an initiative needs. The problem is that shared measurement tells you about your outcomes but doesn’t help you understand what is and isn’t working.
It was exciting to see the new FSG publication, the Guide to Evaluating Collective Impact, because it addresses this same issue and offers guidance to Collective Impact initiatives across the country on where evaluation fits into their work. I particularly appreciate that it highlights how evaluation looks different depending on the stage of the Collective Impact work, from the early years to the middle years to the later years.
I find evaluation in the early years the most exciting. I love the developmental evaluation approach, and the early-years case study in the FSG guide is one of Spark’s projects: an infant mortality initiative. The initiative, supported by the Missouri Foundation for Health (MFH), is just entering its second year and is working on foundational relationship and structure issues.
Our role with the initiative was to build everyone’s capacity to use developmental evaluation to inform the work. Developmental evaluation, by the way, is an approach to evaluation that explicitly recognizes that sometimes we need learning and feedback in the context of a messy, innovative setting where the road ahead is unclear.
Thanks to the vision the Missouri Foundation for Health had for the infant mortality initiative, we had the opportunity both to coach all the partners involved on developmental evaluation and to implement it with the two sites and the foundation. What a great experience!
With one of the sites, the Collective Impact initiative in the St. Louis region, an area of focus was answering the question: “What is a process and structure for engaging stakeholders – how can we stage the engagement, and how can we motivate participation?” The facilitated conversations on stakeholder engagement and interviews with key stakeholders led to a couple of short briefs highlighting how people were responding to the messages and processes being used by the backbone organization. The backbone staff recently shared with the foundation that the developmental evaluation findings helped them adapt in real time as they prepared for their first Leadership Council meeting, and the findings continue to be fundamental information they regularly refer to as they plan their next steps. That might be the best part about developmental evaluation – you never generate reports that sit on a shelf, because the information you collect and share is useful, timely, and often critically important for success!
So, what’s my takeaway from all this time spent on Collective Impact evaluation? I encourage you to consider how shared measurement systems can benefit from a more comprehensive evaluation approach. But I also hope you recognize that evaluation for Collective Impact isn’t the same as evaluation for programs. Unlike most program evaluations, Collective Impact evaluation must:
- Be as flexible and adaptable as the initiatives themselves;
- Focus on continuous learning and helping to improve the outcomes of the Collective Impact initiative; and
- Take into account the stage of the initiative – the early years, middle years, or later years.
Want to know more? Join the FSG webinar on June 11th to learn more about evaluating collective impact.