Approaching data collection with a worldview shaped by capitalism often means that data points become assets harvested from their environment. The data is usually disconnected from the humans behind the numbers, and the evolving stories and lived experiences of vulnerable and marginalized people are obscured and buried in the figures. This is a problem: it does not reflect the equity the charitable sector claims among its values.
How do we decolonize program evaluation in a way that promotes accountability and learning from community-invested resources? The answer might be more straightforward than expected: it comes down to ensuring opportunities for self-determination for the evaluated groups throughout the evaluation process. What does this mean in practice? It can mean budgeting dollars for evaluation participants to take part in project evaluation activities from the start, rather than treating evaluation as an add-on at the end of the project, handed off to some mystery shadow staff who somehow have time to take on work that is likely complicated and requires a good understanding of programming objectives and access to data. If no budget is built into the project to support evaluation participants' contributions, the cost should be carried by the agency requiring or requesting the evaluation.
To ensure more equity in program evaluation activities, the evaluation's goals and outputs should make sense within the context of the programs, services, and target users, and should give the evaluator the opportunity to provide additional value. Evaluation projects should not be used simply to gauge impact and financial ROI for the grantor; they should be an organic conversation about what needs to be learnt so that the grantee can make better decisions, improve the quality of their work, or reach more people. See this great example in Paraguay.
It is not complicated or even revolutionary to value frontline expertise in an evaluation process. As a personal example of participant compensation: several years ago, I was approached by a researcher looking to host a focus group with our program participants to discuss predatory lenders in Ottawa. The researcher offered our organization $500 to host the session and a $50 gift card to each participant. I cannot tell you how much these small honorariums signaled to my staff and evaluation participants that their time and insights were valued. We prioritized the data collection session, were highly engaged, and remember the evaluation as one of the best we've been involved with to date. It was a great opportunity to understand the value of evaluation participants' time, and to explore simple, out-of-the-box gestures that can improve the quality and quantity of data collected. The researchers recognized both the time we spent putting the event together and the access we provided to their key audience of payday lender clients.
For larger projects, participatory action research (PAR) is highly recommended: an approach that focuses on collective inquiry and experimentation grounded in experience and social history. (We'll write another blog on this approach soon!) Whatever route is chosen, the path must be tailored to the circumstances and complexity of the project while respecting the time, resources, and autonomy of evaluation participants. Fundamentally, the reporting process and outputs should be proportionate to the amount of funding given and the complexity of the change being sought.
Another challenge with evaluation is the traditional power dynamic that emerges when grantors make requests and set requirements about how grantees should fulfill their evaluation obligations (click here for an amazing read about power dynamics in the NPO world). There is often pressure on the grantee to present their program as massively successful in order to secure future funding. Unfortunately, the perceived need to tell a positive story strips both the grantor and the grantee of the learning opportunity that failed or mediocre projects likely offer. As someone who accepts failure as part of the landscape of innovation, creativity, and project implementation, I have always learned more from projects that went sideways than from my successes. One way around this scarcity mindset and power imbalance is to engage charities as co-evaluators and researchers of their own programs, treating them as the subject matter experts who must analyze program successes AND failures, perhaps with some guidance on methodology, ethics, and tools for assessing and learning the lessons from their program implementation activities.
The question becomes: how do we foster an engaged and equitable approach to evaluation, and encourage organizations to be curious, honest, and invested in their own learning? How do we create a culture of learning in the non-profit sector? I think we need to turn the tables and treat charities and their clients as the best subject matter experts, able to guide the evaluation process towards its most successful and useful outcome. This shift in the power dynamic will likely be uncomfortable for grantors, but it is a required step in embedding equity and self-determination into this space so that learning and improved quality of services are encouraged and supported across the sector.
Ultimately, evaluation is about passing judgement on a project or program. This immediately creates a tension between the evaluator and the evaluated as well as questions about data ownership and autonomy. A respectful, equitable way forward only happens with humility and a desire to learn.
Equity in Program Evaluation? Part 2: Data Ownership
Equity in Program Evaluation? Part 3: Pick One: Culture of Quality or Innovation or First Responders