This is the second in a series of blog posts on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, which will be held in Atlanta, GA, October 24-29.
Today’s advocacy environment is complex, with multiple stakeholders working together in campaigns that range from informal networks to collective impact and other similarly coordinated efforts. As a result, evaluating these initiatives is equally complex, looking not only at outcomes but also at the roles and contributions of multiple stakeholders. While advocacy evaluation has evolved over the past 10 years, transitioning from an emergent area to an established field of practice, effectively addressing the complexity of multi-stakeholder efforts that may or may not directly align remains one of its most difficult challenges.
You can aggregate to tell the story of a group of organizations, but that is an aggregate of individual organization evaluations, not an evaluation of a field of organizations. What is needed instead is an understanding of the dynamics of how organizations – a term that may also encompass partners in government, the private sector, service delivery, and so on – interact, in concert or, sometimes, even at odds. These dynamics are the key to understanding how multi-stakeholder advocacy achieves impact: how organizations come together to influence policy change, build cohesive fields of practice, and accomplish more than any one group could alone.
Adding to the Toolbox
This week, I will be presenting on this topic at the American Evaluation Association’s annual meeting with Jewlya Lynn here at Spark, Jared Raynor of TCC Group, and Anne Gienapp from ORS Impact. The session will look at examples of how evaluators work in multi-stakeholder environments to design different methods for collecting and analyzing data. No different from any other field of evaluation, advocacy and multi-stakeholder advocacy evaluations draw on surveys, interviews, focus groups, and observations. While these traditional methods are important, our session will also explore other frameworks and types of analysis that can help strengthen these more traditional processes, such as:
- Assessing mature and emergent advocacy fields, using an advocacy field framework, can help evaluators understand how a field of advocacy organizations collectively influences a specific policy area. The five dimensions of advocacy fields – field frame, skills and resources, adaptive capacity, connectivity, and composition – make it easier to untangle the concept of a field.
- Machine learning, a data analysis approach that uses algorithms to surface patterns or generate predictions, is useful for identifying themes in large, unstructured data sets. It can help answer questions such as how a particular issue is perceived, how perceptions differ by geography or language, how sentiment has changed over time, how likely sentiment is to turn into action, and how actions reflect policy decisions.
- Dashboard tracking can help stakeholders agree on measures and create a system for collecting relevant data across multiple groups – often one of the largest logistical challenges in multi-stakeholder evaluations, particularly when the groups are working autonomously or across a wide range of activities.
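To give a flavor of what the machine-learning approach above looks like in practice, here is a minimal, hypothetical sketch: clustering a handful of invented survey comments with plain k-means over bag-of-words counts, then surfacing each cluster’s dominant terms as candidate themes. The comments, stopword list, and results are illustrative only – a real analysis would involve far larger data sets and purpose-built libraries rather than this hand-rolled version.

```python
import re
from collections import Counter

# Invented survey comments for illustration (not data from the session).
comments = [
    "Funding for clinics is the top concern in our community",
    "Community members worry about clinic funding cuts",
    "Transportation access keeps families from the clinics",
    "Families need better transportation to reach services",
]

STOPWORDS = {"the", "is", "in", "our", "to", "for", "about", "from", "need", "keeps"}

def tokenize(text):
    """Lowercase, split into words, and drop stopwords."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

# Represent each comment as a bag-of-words count vector over a shared vocabulary.
vocab = sorted({w for c in comments for w in tokenize(c)})
vectors = [[Counter(tokenize(c))[w] for w in vocab] for c in comments]

def dist2(a, b):
    """Squared Euclidean distance between two count vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=10):
    """Two-cluster k-means, deterministically seeded with the farthest pair."""
    best = (0, 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist2(points[i], points[j]) > dist2(points[best[0]], points[best[1]]):
                best = (i, j)
    centroids = [list(points[best[0]]), list(points[best[1]])]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each comment to its nearest centroid, then recompute centroids.
        assign = [min((0, 1), key=lambda j: dist2(p, centroids[j])) for p in points]
        for j in (0, 1):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign

labels = kmeans2(vectors)

# Surface each cluster's most frequent terms as candidate "themes".
themes = {}
for j in set(labels):
    counts = Counter(w for c, a in zip(comments, labels) if a == j for w in tokenize(c))
    themes[j] = [w for w, _ in counts.most_common(2)]
print(themes)  # one cluster centers on funding, the other on transportation
```

Even at toy scale, this mirrors the workflow the bullet describes: unstructured text goes in, and the algorithm – not a hand-built codebook – groups the comments and suggests what each group is about.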
Interested in learning more? Join us at our presentation: Advocacy as a Team Game: Methods for Evaluating Multi-Stakeholder Advocacy Efforts this Thursday, October 27!