Deep Dive · Monitoring & Evaluation

Impact Measurement: Foundations and Frontiers

What we measure, why it matters, and where the field is heading — a working syllabus for evaluators.

Tags: RCTs · Theory of Change · 12 readings
ImpactMojo Editorial
Curated by the ImpactMojo team
This is the working syllabus we use when teaching impact measurement on the platform — the texts our MEL faculty hand new evaluators on day one. We're actively looking for an invited curator (an experienced evaluator or applied methodologist) to take it further; pitches are welcome.
Editor's Note

Impact measurement looks like a technical field until you realise that almost every interesting decision in it is political. What counts as evidence? Whose theory of change gets tested? Who pays for the evaluation, and what happens to the report after? The "credibility revolution" of the last twenty years gave us tools to answer some questions extremely well, but it also pushed harder questions — about meaning, mechanism, and equity — to the margins.

This list tries to hold both. It begins with the canonical methodological texts any evaluator should have read — Banerjee, Duflo, Deaton — and then turns to the critics who have argued, persuasively, that randomised trials answer only a small subset of the questions development needs to ask. The final section is for people who actually run evaluations: the practitioner toolkits and blogs that make the abstract debates operational.

New to evaluation? Start with Poor Economics and the J-PAL handbook. Done a few RCTs and feeling uneasy? Start with Pritchett, Deaton, and the Cartwright/Hardie book.

Section 01

Theoretical Foundations

The frameworks that define how the field thinks about causality, evidence, and what an "impact" claim actually means.

Section 02

RCTs and the Credibility Revolution

The methodological case for randomised trials, and the careful critiques from inside the discipline.

Pritchett's argument is sharper than Deaton's: the most important questions in development — about state capability, growth, and structural transformation — are not amenable to randomisation, and the field's enthusiasm for RCTs has crowded out work on those bigger questions. Read with charity, even if you disagree.

Section 03

Critiques and Alternatives

The traditions that argue impact measurement should look very different — more participatory, more theory-driven, more case-based.

The canonical text for realist evaluation, which asks "what works for whom, in what circumstances, and why?" rather than simply "does it work?". A useful counterweight to averages-based evaluation cultures, especially for complex social programmes.

Section 04

Practitioner Toolkits

The resources you actually open when you are designing or reviewing an evaluation.

A funder, repository, and methodological hub for impact evaluations across the development sector. Its systematic review database and evidence gap maps are the most useful single resources for understanding what is and isn't known about a given intervention.

A method-and-approach encyclopaedia maintained by practitioners, especially strong on non-experimental methods and on the politics of evaluation. Its "rainbow framework" is a useful starting point for designing an evaluation system from scratch.

When you want to hear evaluators talk about real implementation challenges — how they handled spillovers, how they dealt with attrition, what surprised them — the J-PAL podcast network is unmatched. Particularly useful for graduate students between coursework and first fieldwork.

Suggested citation

ImpactMojo Editorial (2026). "Impact Measurement: Foundations and Frontiers." ImpactMojo Deep Dives. Retrieved from https://impactmojo.in/DeepDives/impact-measurement-foundations.html

Want to curate a Deep Dive?

If you teach, research, or practice in development and have a reading list worth sharing — pitch us.

Pitch a Deep Dive →