Jewlya details a case study of a learning framework supported by a funder and shares how the role of evaluation is to illuminate what is hidden in a system. This blog emphasizes how systems change leaders within philanthropy and their many partners also find value in the learning framework and process, alongside evaluation. This thought piece is an output of the MEL in Systems Change Inquiry Group, hosted by the School of System Change in 2023.
A colleague of mine, Jeph Mathias, who works as a participatory evaluator with those most marginalized, once told me that success in his work is when he is able to “make visible the most hidden parts of the system.”
I’ve sat with this for years now, thinking about how often the monitoring, evaluation and learning (MEL) that is designed to support philanthropic systems change does not achieve this aim. Instead, evaluations often make visible what was generally known, if not as fully, by soliciting insights from the same people that program officers already regularly learn alongside, pulling metrics from grant reports, and surveying those one step out, but asking questions that tend to confirm what was already known by the grantees. Learning dialogues tend to be largely informed by what we’re already seeing (we engage in pattern finding of our current insights). Sometimes monitoring might pick up on an emerging pattern, but unless we investigate it and look underneath it, how can we do more than scan across the surface of the system?
Before we go further, let's define what I mean by what is hidden in a system. To me, it includes:
- The experiences of those most marginalized and harmed, which are hidden from view. Keep in mind - these aren't just stories of harm; they can also be stories of resilience and insights around alternative ways the system could be designed.
- Where the system is shifting (or being protected from shifts) in response to privileged and hard-to-access spaces of power.
- How people in the system think about the system - the mental models that bound their ability to conceive of new solutions.
- How change has happened - the history or patterns of change that brought us to today; and
- How change is really happening right now - not our assumptions and predictions, but the actual causal pathways with all their non-linear complexity.
In the MEL Inquiry Group dialogue organized by the School of System Change, my colleagues in evaluation and systems change emphasized how important it is for an organization intent upon changing systems to have a robust learning practice and support it among their systems change leaders. One of my favorite parts of the discussion was when we surfaced that learning is problematic when it’s a beautiful “experience” - the more we put into the quality of experience for the closed group of participants, the more bounded we risk it being. When we allow for a messier, more participatory process, that is when we can make the hidden visible and bring value to the work.
I agree, and I also have a suspicion that a robust learning practice alone is not enough, in part because I work with teams and organizations whose learning practices are embedded, ongoing, and high quality, and yet they too are identifying that something is missing (e.g., Imaginable Futures has shared their inspiring learning practice here). Robust learning processes can help surface and test mental models; they can help make sense of the information coming in and support experimentation. But unless new insights are solicited from those who are often not heard in systems (those marginalized and those with access to hidden power), and unless questions are asked in ways we often do not ask them (e.g., discovering present and past causality), we cannot discover the other hidden dynamics of a system.
This is where the work of Marina Apgar, focused on deeply participatory evaluation practices within complex, systemic change settings, has helped to expand my thinking. Her participatory action research approach is led by change agents, including those most affected by the problems, and investigates causal pathways by tapping into the information only those most marginalized in the system can see. She supports those whose voices are marginalized and overlooked to be the ones who make sense of what is happening in the system and assume a leadership role in taking action (check out the CLARISSA project’s approach to causal pathways to learn more). The CLARISSA work is evaluative in nature, but changes whose knowledge is centered. It is learning, but the learners are not the program officers and grantees, but instead the children in the system.
If we all had the skills, resources, time and reach to engage in this type of learning-to-action throughout complex systems, imagine the change that would become possible. While I think this type of work can be embedded into existing efforts, and pieces of it adapted and used in many ways, I have also been wondering how to move from one-off amazing projects and evaluations like CLARISSA into an overall framework that helps make the hidden visible and brings an evaluative lens (not just a strategic learning lens) to the learning processes.
Working with Imaginable Futures (IF) over the last year, I’ve had the opportunity to explore this notion of an overall framework through a partnership with Erin Simmons (IF’s Global Head, Operations and Strategic Projects) and Amy Klement (IF’s Managing Partner). We began by listening to how learning was already happening in the organization and how IF’s internal staff and external systems change partners do their work – what insights do they naturally discover and use, and what could be more systematically discovered? We also learned from systems change peers, digging into the M&E Sandbox’s exploration of evaluation innovations for systems change. We discovered that we are not alone in grappling with how to do this type of learning, and while others in philanthropy and systems change have developed many fascinating and thoughtful practices, none of them quite tackled the questions we were noodling.
This exploration led to the design of a framework that begins with systems sensing (how and why is the system changing?) before moving on to exploring contribution stories (how, where, and to what extent are our grantees and our work contributing to systemic changes?).
Figure 1 makes visible our thinking, which has now been refined in partnership with staff and leaders at IF. It is not intended to replace any of the learning practices each team at IF has already developed, customized to their culture and needs. Rather, we hope it can help teams go deeper, discover what is currently hidden, and guide their systems change work in ways not previously available.
How can this framework help us make the hidden visible?
- Ideally the systems sensing part (Level 3a) is participatory, going beyond what internal staff can see through their engagement with grantees (seeing into what is hidden from the program staff in their day to day work). An example of this is already underway with the engagement of grantees in a participatory systems sensing and outcome harvesting process in Brazil.
- We agreed that some of these contribution stories (Level 3b) need deeper evaluations, which will help make the hidden stories of change more visible. Yet, other contribution stories do not need additional data collection and evaluation – sometimes the learning that is visible through partnerships with grantees and others will be enough.
- We also agreed that the good work grantees are doing, including as a result of direct services, needs to be made visible and celebrated even when it isn’t systemic change work (Level 2).
- Finally, we honored that each team within IF that implements this framework may have elements in common with the others, but will also be distinct, tied to their learning practices and overall team culture. This allows them to share the story of systems change without losing meaningful elements in the effort to fit it into a one-size-fits-all model.
Let me conclude by naming the many questions that remain for me - questions about this framework, but that could easily apply to many other philanthropic learning frameworks in the context of systems change:
- In what ways will we achieve what Jeph Mathias called for - making the most hidden parts of the system visible? What will remain hidden despite our attempts?
- How will the strong culture of learning adapt as a new framework comes in? Will it become a culture that actively seeks to go beyond what a team can discover through close partners - desiring and seeking what is not known, not visible, and could transform their understanding of the system?
- How can we prevent the framework from becoming yet another philanthropic learning framework that helps us organize our thinking more clearly, but doesn’t change what we see in the system?
- How can the deeper investigation called for in Level 3b (related to contribution) be undertaken in ways that honor the learning culture, the partnerships with grantees and other stakeholders, and the nature of the systemic change work? How can it weave into the in-country, on the ground learning, rather than becoming an externally imposed accountability tool?
Perhaps the most important question to keep in mind, as a framework like this is tested, or as you test your own emerging approaches to making the hidden visible:
- Who is benefiting from this new approach, in what ways, and how is the approach causing harm? How can the benefit be expanded to others, even as the harm is prevented or mitigated?
The real test is not whether an evaluation partner can help implement this framework, collecting data and telling stories, or whether the process feels doable and accessible to the people involved. The test of this type of philanthropic framework is whether the systems change leaders within philanthropy and among their many partners find value in the concept and practice, discover something they couldn’t see before, and discover it in ways that lead to meaningful actions. I look forward to seeing what emerges.