Collective Mind Community Conversations
Evaluating a network to increase its performance and impact
by Seema Patel, Senior Advisor, Collective Mind
Collective Mind hosts regular Community Conversations with our global learning community. These sessions create space for network professionals to connect, share experiences, and cultivate solutions to common problems experienced by networks.
Sign up for the Collective Mind newsletter to receive these in your inbox!
We kicked off our first Community Conversation of the new year on January 13th, 2021 with John Twigg, an independent researcher and consultant with more than 25 years of experience in collective disaster risk reduction activities. In sharing his work evaluating GADRRRES (Global Alliance for Disaster Risk Reduction and Resilience in the Education Sector), a global action network working to ensure schools are safe from disaster risks, John brought his relevant topical expertise and his previous experience working within networks to the conversation, but also his perspectives and learnings as a network evaluation novice.
Highlights from the conversation
There are a variety of methodologies and approaches that can be used for organizational evaluation, but few are designed specifically for a network model. The methodology used for the GADRRRES evaluation is one likely familiar to many network practitioners: the Collective Impact (CI) approach. John evaluated GADRRRES’ operating model using the 5 core conditions of the CI approach: common agenda, shared measurement, mutually reinforcing activities, continuous communication, and backbone support.
While frameworks like the CI approach are accessible starting points, undertaking an evaluation tends to reveal important learnings, as it did in the case of GADRRRES. First, evaluation frameworks may not always go far enough in defining evaluation criteria. GADRRRES’ evaluation took the process a step further by incorporating principles of practice such as equity and inclusion, the existence of a learning culture, and collective impact capacity, i.e. the availability of human resources and funding to further evaluate the effectiveness and impact of the network.

In the same vein, a second takeaway is that any network evaluation approach should be customized to the specific network model and circumstances; in the case of GADRRRES, this meant adapting the approach to its priorities and its global geographic reach.

Third, GADRRRES’ evaluation highlighted missing elements of its network infrastructure, such as a theory of change; the network had operated under the assumption that all of its members were on the same page. As part of the evaluation recommendations, developing a theory of change could then become an evaluative tool in itself, helping GADRRRES reflect on what it’s doing, how it’s doing it, and why, and chart a shared and refreshed path forward. Relatedly, the process of network evaluation opens up a channel to refresh the network’s shared purpose and provides opportunities to use a variety of non-traditional methods to engage and seek feedback from network members.
Community Conversation participants shared a few of the different evaluation methodologies and feedback techniques they’ve employed within their own networks.
One such non-traditional method, outcome harvesting, focuses on unanticipated outcomes as evidence of impact. The process collects evidence of what has changed and then works backward to determine whether and how an intervention contributed to those changes. Outcome harvesting is particularly useful in complex settings like networks, where outcomes and impacts do not always have a direct line of causation and, at times, only a dotted line to correlation.
Outcome mapping focuses on gathering and measuring changes in the behavior, actions, relationships, and activities of the people, groups, and organizations a network works with directly. It can help a network be specific about the actors it targets, the changes it expects to see, and the strategies it employs and, as a result, be more effective in the results it achieves.
Participatory narrative inquiry is an approach in which groups of people gather and work with stories, often structured around inquiry questions. The stories are then analyzed to find patterns or trends that can inform or support decision-making. A narrative approach is particularly useful for surfacing meaningful insights in complex situations.
These are just a few methods. What was clear from the conversation is that network evaluation is as complex as networks themselves. No methodology is perfect, and no single evaluation method is enough. Still, choosing and using a methodology matters: it establishes a baseline to measure progress against and provides renewed learning and member engagement opportunities throughout the evaluation process.
Miss the session? View the recording here.
Thanks again to our co-host, John Twigg, and to Lucille Angles from GADRRRES for supporting the conversation!
Have your own experiences with network evaluation? Tell us about them in the comments below.
Join us for the next Community Conversation!
Or email Seema at firstname.lastname@example.org to co-host an upcoming session with us.