Over the next few months, we’ll be releasing a series of blog posts on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, which will be held in Atlanta, GA, October 24-29. You can learn more about the meeting, including how to register, here.
Google “Collective Impact” and you’ll get roughly 1.8 million hits (including this blog). Although collective impact (CI) is just one path out of many, it is clear the framework has taken hold as a means to tackle complex problems through a systemic lens. By their nature, however, CI initiatives are complex and emergent. They often include a mix of policy, practice, program, and alignment strategies that engage many different organizations and stakeholders, and it is not uncommon to have a diverse array of partners, including funders, in the mix.
As CI grows, many different leaders are building our understanding of how best to support the work through evaluation. One thing we have come to realize is that, as varied and complex as CI initiatives are, so are the roles of their evaluators. We can be learning partners, developers of shared measurement systems, strategy partners, or even systems partners, helping align evaluation and learning throughout the system. Because of this, our effectiveness as evaluators depends on understanding which roles are needed and when, as well as how to balance them. In a CI initiative, an evaluator may serve as a:
- Developmental evaluator, providing real-time learning focused on supporting innovation in a complex context;
- Facilitator, helping partners develop and test a collective theory of change, use data to make better decisions, or align systems across evaluations;
- Data collector/analyzer, helping to support problem definition, identify and map the stakeholders in the system, or vet possible solutions and understand their potential for improving outcomes;
- Developer of system-level measures of collective capacity and impact, as well as evaluator of the CI process itself, providing feedback on how to strengthen it; and/or
- Creator of a shared measurement system, including adapting core measures to local contexts.
This October, I have the privilege of presenting on this topic at the American Evaluation Association’s annual meeting with Hallie Preskill from FSG, Ayo Atterberry from the Annie E. Casey Foundation, Meg Hargreaves from Community Science, and Rebecca Ochtera here at Spark Policy. Our presentation will examine the varied roles evaluators play in the CI context, as well as what funders and initiatives look for from CI evaluation teams. We’ll explore how learning to navigate these roles can help evaluation support systems change, leading to more effective evaluation activities.
Interested in learning more? Join us at our presentation: The many varied and complex roles of an evaluator in a collective impact initiative!