How is evidence generated, interpreted and applied for social transformation? The University of Cape Town’s (UCT) Professor Sarah Chapman unpacked these themes in her recent inaugural lecture.
Professor Chapman addressed the audience in the Mafeje Room at the Bremner Building. Her lecture, titled “Measuring what counts: Evaluation as inquiry, power and possibility”, traced the evolution of evaluation practice, among other themes.
“All of us, no matter our background or profession, engage in evaluative thinking every day. We assess, compare, judge, and reflect. We can’t help but evaluate. It’s intuitive, and that’s part of its power,” said Chapman in her opening reflections on 2 September.
There’s a paradox, however. “Because evaluation feels so natural, we often underestimate the craft it takes to do it well. The challenge lies in sharpening that instinct, applying scientific principles, ethical reasoning and structured methods, while still leaving room for context, voice, and intuition. Good evaluation doesn’t override human judgment; it strengthens it.”
She continued: “Evaluation isn’t just a method, it’s a mindset. It’s one of the most dynamic, evolving, and deeply human disciplines of our time.” Over the years, Chapman has experienced evaluation in three waves: a first wave of foundations and causality; a second of rigour, impact and policy influence; and a third of complexity, justice and pluralism.
An experienced field practitioner, Chapman has led evaluations across public health, agriculture, education, early childhood development, and disability sectors throughout sub-Saharan Africa.
Implementation evaluator
Take her time on the Millennium Villages Project evaluation team as one example from her illustrious career. The model was bold: invest US$120 per person, per year, in a dozen rural African communities. Deliver a comprehensive package of proven interventions – better seeds, fertiliser, mosquito nets, clean water, healthcare, schools and sanitation. It was about delivering everything that works, all at once.
“I was told there wasn’t really space on the impact team. Instead, I’d be joining the implementation evaluation team. I knew a little bit about impact evaluation from my research training (although to be honest it was mainly restricted to fertiliser plots, locusts, and tea-tasting experiments). But implementation evaluation? I had no idea what we were meant to be doing.
“In theory, an implementation evaluation sounds straightforward. When a programme produces impact, everyone is happy. But when there is no measurable impact, further explanation is generally needed. Unfortunately, there is often not much information on what went wrong, because all the money has gone into measuring impact. This is what is called a black box evaluation. The role of implementation evaluation is to narrow the failure down to one of two possible causes. The first is theory failure: the core idea was flawed. You thought doing X would lead to Y, but the causal link just isn’t there.
“The second is implementation failure. The theory is sound and the evidence is solid, but the delivery fell short. It wasn’t done properly, or at all. We messed up. In the Millennium Villages Project, that black box was especially large. This wasn’t one intervention; it was a bundle.”
The lesson learned is that evaluation works well for single interventions; combine 30, and causality gets messy. Another challenge is that the theory of implementation evaluation was tailored to smaller, more controlled settings.
Chapman noted that by the end of that project she was determined never again to be caught off guard without a rigorous impact evaluation plan, especially on high-profile projects. “No more hopeful baselines. No more being caught on the back foot when it came to applying classic experimental principles.
“I slowly began asking a different question. Not so much ‘Can we prove impact?’ but more simply just, ‘Can we explain what this programme is trying to do and why you even believe it should work?’ before we even rush around and start collecting data.”
In conclusion, Chapman shared some key takeaways on evaluation. “The first is that evaluation is about learning, not just auditing: it’s not about proving but improving. The second is that it demands humility and pluralism, as no method or worldview holds all the answers. Thirdly, it is relational and contextual: who defines success matters. Lastly, it is hopeful work, as it assumes that people and systems can improve.”