ImpactMojo 101 Series · Free Forever
MEL Basics 101
Monitoring, Evaluation & Learning — A Practical Introduction for Development Practitioners
Monitoring · Evaluation · Learning · 100 Slides · Free Access
What We Cover in 100 Slides
01 · What Is MEL and Why It Matters (Slides 3–10)
02 · Theory of Change (Slides 11–18)
03 · Results Frameworks & Logframes (Slides 19–26)
04 · Indicators: Designing & Selecting (Slides 27–34)
05 · Data Collection Methods (Slides 35–43)
06 · Evaluation Design (Slides 44–52)
07 · Attribution & Contribution (Slides 53–60)
08 · Learning Systems & Adaptive Management (Slides 61–68)
09 · Communicating Evidence (Slides 69–76)
10 · MEL in Practice: Common Failures (Slides 77–84)
11 · MEL for Different Contexts (Slides 85–92)
12 · Further Reading & Resources (Slides 93–100)
01
Section One
What Is MEL and Why It Matters
MEL: What Each Letter Actually Means
M — Monitoring
Tracking What Is Happening
Continuous, systematic collection of data on programme activities and outputs. Tells you what was done and whether it was done as planned. Ongoing throughout programme life. Primarily for management.
E — Evaluation
Assessing What It Achieved
Periodic, structured assessment of programme effectiveness, efficiency, relevance, and impact. Asks whether the programme worked, for whom, and why. Conducted at defined points — midterm and/or end of programme.
L — Learning
Using Evidence to Improve
The processes by which evidence from monitoring and evaluation feeds back into decision-making to improve programmes and strategy. Often the most underdeveloped of the three. Knowledge management, adaptation, and institutional memory.
Why the order matters: Monitoring without evaluation tells you what happened but not whether it mattered. Evaluation without monitoring lacks reliable data. Both without learning produce reports nobody acts on. MEL as an integrated system — not three separate functions — is the key shift from compliance to genuine improvement.
Why MEL Matters: Four Distinct Functions
Accountability
Demonstrating to funders, government, and communities that resources were used as intended and achieved stated results. Required for donor compliance. Also — accountability to beneficiaries.
Learning
Understanding what works, for whom, and in what conditions — to improve current and future programming. The least prioritised function but the most valuable long-term.
Management
Real-time information for operational decisions — is implementation on track? Are there early warning signs of problems? Allows course correction before it's too late.
Advocacy & Influence
Evidence from well-designed evaluations can influence policy, funding priorities, and sector practice beyond the organisation. Impact at scale through knowledge sharing.
The accountability-learning tension: MEL systems designed primarily for donor accountability tend to produce data that looks good on paper but tells organisations little about what is actually working. Organisations prioritise what gets measured for compliance — not what generates genuine learning. The challenge is designing MEL systems that serve both functions without one colonising the other.
$20B+
Spent annually on development evaluation globally — most of it on compliance reporting rather than learning (3ie estimate)
40%
Of major NGO evaluations rated as not used to improve programming (Oxfam internal review, 2019)
A Brief History of Development Evaluation
Period | Dominant Approach | Key Features
1950s–70s | Project evaluation | Focus on infrastructure; input-output accounting; World Bank led; economic rates of return
1970s–80s | Logical Framework introduced | USAID develops logframe (1969); structured planning tool becomes sector standard; criticism begins immediately
1980s–90s | Participatory methods | PRA (Chambers) and PAR emerge; community voice in assessment; qualitative methods gain legitimacy
1990s–2000s | Results-based management | New Public Management; donor pressure for outcomes, not inputs; MDGs accelerate indicator proliferation
2000s–10s | RCT revolution | Randomised trials as "gold standard"; Banerjee, Duflo, Kremer; J-PAL; 3ie; controversy over external validity
2010s–present | Learning and complexity | Developmental evaluation; PDIA; adaptive management; systems thinking; MEL as culture, not compliance
Why history matters for practice: The tools and frameworks currently used in MEL carry the assumptions of their origins. Logframes were designed for discrete infrastructure projects — not complex, adaptive social change programmes. Understanding where a tool came from helps practitioners know when it fits and when to look elsewhere.
The Indian context: India's development sector MEL is shaped by multiple overlapping traditions: government MIS systems (NITI Aayog, DMEO), donor-driven logframe and RBM requirements, academic evaluation culture (NIRD, IIPA), and increasingly CSO-led participatory approaches. No single approach dominates — practitioners navigate all of them simultaneously.
DMEO: Development Monitoring and Evaluation Office under NITI Aayog — India's central government evaluation body. Evaluates flagship government schemes (MGNREGS, PMGSY, Aspirational Districts Programme). Its reports are often more candid than ministry self-evaluations and are publicly available.
MEL in India: Who Demands It and What They Want
India's MEL landscape is shaped by who is funding and who is implementing. Each principal in the funding chain has different MEL requirements — which are often not aligned with each other and sometimes not with programme improvement either.
Principal | What They Want from MEL
International donors (FCDO, USAID, EU) | Standardised indicators; logframe compliance; outcomes data; value for money; gender disaggregation
Indian philanthropies (Tata, Azim Premji, Gates) | Theory-based evaluations; learning focus; innovation evidence; scale-readiness assessment
CSR funders | Output counts for reporting; beneficiary stories; visibility; annual reporting cycle
Government (MoU partners, state govt) | Alignment with government schemes; convergence data; MIS integration; political sensitivity
Communities | Accountability to them; participation in assessment; feedback on what isn't working; recognition
The multiple principal problem: An organisation funded by FCDO, an Indian foundation, and CSR simultaneously may face three different indicator frameworks, three different reporting templates, and three different evaluation timelines — all for the same programme. MEL harmonisation across funders is one of the most practically significant sector challenges.
FCRA and MEL: Post-2020 FCRA amendments have reduced international funding flows to Indian CSOs. This has shifted MEL demands — more domestic funders (Indian foundations, CSR) who have different and often less standardised MEL requirements. Some CSOs report MEL burden decreasing with shift to domestic funders; others find domestic funder requirements less rigorous but less useful for learning.
The Results Chain: Inputs, Activities, Outputs, Outcomes, Impact
Inputs → Activities → Outputs → Outcomes → Impact
Inputs
Money, staff, materials, time, expertise invested in the programme
Example: ₹50L budget, 3 field staff, training materials
Activities
What the programme does with those inputs — the work
Example: Training sessions, farmer group meetings, seed distribution
Outputs
Direct, tangible products of activities. Within programme control.
Example: 500 farmers trained, 200 farmer groups formed
Outcomes
Changes in knowledge, behaviour, practice, or conditions. Medium-term.
Example: Farmers adopt climate-resilient varieties; yields stable
Impact
Long-term, fundamental change in people's lives. Often outside programme control alone.
Example: Reduced rural poverty; improved food security over 5+ years
The monitoring-evaluation divide: Monitoring typically tracks inputs, activities, and outputs — what is within programme control and measurable in real time. Evaluation assesses outcomes and impact — changes in the world that require longer time horizons and more rigorous methods. This is why evaluation cannot replace monitoring, and vice versa.
The Output Trap: Why Counting Activities Is Not Measuring Change
The most common MEL failure in development practice is conflating outputs with outcomes — reporting on what was done as if it demonstrates what changed. Outputs are necessary but not sufficient. A training session (output) does not automatically produce changed practice (outcome). The gap between them is where programmes frequently fail.
The output trap in action: An NGO reports: "We conducted 240 training sessions reaching 4,800 participants." This is an output. The questions it cannot answer: Did participants learn? Did they change their behaviour? Did their situation improve? Did the change last? Without outcome data, the training count proves only that activities happened — not that they mattered.
Why organisations fall into this trap: Outputs are easy to count, available quickly, and within the organisation's direct control. Outcomes require time, methodological effort, and often show mixed results that are hard to present positively. Donor reporting requirements often accept output data, removing the incentive to measure harder.
Output vs Outcome: Recognising the Difference
Programme Area | Output (what happened) | Outcome (what changed)
Education | 500 children enrolled in school | 500 children reading at grade level by age 10
Health | 10,000 bed nets distributed | Malaria incidence reduced 30% in target area
Livelihoods | 200 women received seed capital | 200 women with sustainable income above the poverty line after 2 years
WASH | 500 toilets constructed | 500 households using toilets consistently (ODF status maintained)
Agriculture | 300 farmers trained on SRI | 250 farmers practising SRI with documented yield improvement
The test: Ask "so what?" after each indicator. "We trained 300 farmers" — so what? "Farmers' yields improved 20%" — so what? "Household food security improved" — so what? Keep asking until you reach an answer that is about fundamental change in people's lives. That is your impact level.
MEL as an Integrated System: The Architecture
Effective MEL is not a set of tools used at different times — it is an integrated system where each component informs and depends on the others. The system starts with a clear Theory of Change and flows through all programme phases.
MEL System Architecture
1. Theory of Change: articulates the change hypothesis — what are you trying to achieve and how?
2. Results Framework / Logframe: operationalises the ToC into measurable indicators across the results chain
3. MEL Plan: specifies who collects what data, how, and when — the operational blueprint
4. Data Collection & Management: surveys, interviews, observations, records — with quality control and storage
5. Analysis, Reporting & Learning: making sense of data and feeding it back into programme decisions
The MEL plan: One of the most neglected documents in development programming is the MEL Plan — a detailed, operational document specifying every indicator, its definition, its data source, the collection tool, the responsible person, the collection frequency, and the analysis method. Without a MEL plan, monitoring becomes ad hoc and evaluation becomes a scramble.
Design phase
MEL system must be designed before the programme starts — not added on after problems emerge. Retrofit MEL is consistently inferior.
5–10%
Industry norm for MEL as share of programme budget. Underfunded MEL produces incomplete data. Overfunded MEL produces data nobody uses.
02
Section Two
Theory of Change
Theory of Change: The Programme's Causal Hypothesis
Theory of Change
An explicit articulation of how and why a programme expects its activities to lead to intended outcomes and impact. It maps the causal pathway from inputs to change — specifying the assumptions that must hold for each step to follow from the previous one. A ToC is a testable hypothesis about how social change happens.
"If-then-because" logic: A ToC can be expressed as a series of if-then statements: "If we provide climate-resilient seeds (activity), then farmers will adopt them (outcome) — because farmers are risk-averse and need reliable evidence of performance before changing practices (assumption)." The "because" — the assumption — is where most programmes fail to be explicit.
What a Good ToC Includes
  • Problem analysis: Clear articulation of the problem being addressed and its root causes — not just symptoms
  • Change hypothesis: How the intervention will address those causes — the mechanism of change
  • Causal pathways: The steps between activities and impact, with each link made explicit
  • Assumptions: The conditions that must hold at each step for the logic to work — external factors, enabling conditions, community readiness
  • Boundaries: What the programme will and won't address — contextual factors it works with but cannot change
  • Stakeholders: Whose behaviour needs to change; who enables or blocks the change
  • Preconditions: What must already exist for the programme to work as designed
Assumptions: The Hidden Infrastructure of a Theory of Change
Assumptions are the conditions that must hold for the causal logic of a ToC to work. They are not risks — they are the unstated premises in the programme's reasoning. Making them explicit is one of the most valuable exercises a programme team can do, because untested assumptions are the most common cause of programme failure.
The assumption audit: For each arrow in a results chain, ask: "What must be true for this step to follow from the previous one?" Write it down. Then ask: "Do we have evidence this is true in our context? Or are we assuming it?" Untested assumptions about beneficiary behaviour, community readiness, or enabling environment are where most ToCs unravel.
Classic failing assumptions in Indian development: "Women will use the savings account once it is opened" — assumption: they have control over household finances. "Farmers will adopt the improved seed" — assumption: they can access credit to buy it. "Communities will maintain the water pump" — assumption: a functioning maintenance committee will form. Each is an untested social assumption, not a logistical one.
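One lightweight way to institutionalise the assumption audit is an assumption register kept alongside the ToC and reviewed quarterly. A minimal sketch in Python follows; the fields and example entries are hypothetical illustrations, not a standard sector format.

```python
# Minimal assumption register: one entry per arrow in the results chain.
# Fields and entries are illustrative, not a standard MEL format.
from dataclasses import dataclass

@dataclass
class Assumption:
    chain_link: str   # which arrow in the results chain this underpins
    statement: str    # what must be true for the step to follow
    evidence: str     # what we actually know, in this context
    status: str       # "tested" | "untested" | "violated"

register = [
    Assumption(
        chain_link="seed distribution -> adoption",
        statement="Farmers can access credit to buy inputs",
        evidence="None yet; assumed from other districts",
        status="untested",
    ),
    Assumption(
        chain_link="training -> practice change",
        statement="Knowledge gaps, not risk aversion, drive current practice",
        evidence="FGDs planned for Q2",
        status="untested",
    ),
]

# Flag anything not yet tested for the next quarterly review.
for a in register:
    if a.status != "tested":
        print(f"REVIEW: [{a.chain_link}] {a.statement} ({a.status})")
```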
Assumption Types and How to Test Them
Assumption Type | Example | How to Test
Behavioural | Participants will attend training if offered | Pilot; attendance data; FGDs on barriers
Contextual | Government services are accessible and functional | Baseline mapping; key informant interviews
Causal | Knowledge change leads to behaviour change | Literature review; tracer studies post-training
Social norm | Husbands will support wives' participation | Formative research; KAP surveys; FGDs
Market | Buyers will pay a premium for quality produce | Market research; pilot sales; price surveys
Building a Theory of Change: Process Matters as Much as Product
A ToC developed by one person on a laptop for a funding proposal has little value. The process of building a ToC — who is in the room, what evidence informs it, whether it gets challenged and revised — determines whether it becomes a genuine guide to practice or a compliance document.
1. Start with the problem, not the solution: problem tree analysis; root cause identification; whose problem is it?
2. Work backwards from long-term impact: what does success look like in 10 years? What needs to happen for that?
3. Involve programme staff, community, and partners: participatory ToC workshops surface hidden assumptions and knowledge
4. Stress-test every causal link: ask "why would this lead to that?" for every arrow; name the assumption; assess the evidence
5. Revise regularly — not just at design: the ToC is a living document; update it when assumptions are violated or context changes
ToC vs logframe: A Theory of Change comes before a logframe. The ToC articulates the change hypothesis in narrative and visual form — including complexity, assumptions, and contested claims. The logframe then translates the ToC into a structured planning matrix with measurable indicators. Many organisations skip the ToC and go straight to logframe — producing a plan that nobody can explain in plain language.
Common ToC weaknesses in Indian NGO proposals: (1) The ToC is a post-hoc justification for a programme already designed rather than a design tool; (2) Activities are listed as outcomes ("200 women will form SHGs" — that is an output, not a change); (3) The causal pathway from community to systemic change is missing — the ToC stops at individual behaviour change and never explains how that aggregates to structural impact; (4) Assumptions are not named at all.
Complexity and Theory of Change: When Simple Logic Fails
The standard results chain assumes linear causality — activities lead predictably to outputs lead to outcomes lead to impact. But many development challenges are complex — meaning they involve non-linear dynamics, emergent outcomes, and multiple feedback loops that a linear ToC cannot capture.
Complicated vs Complex
Complicated problems have many parts but are ultimately knowable — like building a bridge. Best practice can be identified and replicated. Complex problems involve human systems, power, culture, and feedback loops — like reducing domestic violence or improving governance. You can't reliably predict outcomes. The Cynefin framework distinguishes these.
Implications for MEL
  • Emergent outcomes: Complex programmes produce outcomes that were not anticipated in the original ToC — positive and negative. MEL systems must be able to capture and learn from emergence, not just measure pre-specified indicators.
  • Multiple pathways: In complex systems, the same input can produce different outcomes in different contexts. MEL must track context, not just intervention, to explain variation.
  • Feedback loops: Outcomes loop back to change the conditions for the programme — positive feedback can amplify success; negative feedback can undermine it. A static logframe cannot capture this dynamic.
  • Contribution not attribution: In complex systems, you rarely caused an outcome alone. The appropriate question shifts from "did we cause this?" to "what was our contribution?" — requiring different evaluation methods.
Practical response: For complex programmes, use a ToC that explicitly maps feedback loops, names multiple pathways, and builds in regular revision cycles. PDIA (Problem-Driven Iterative Adaptation) and developmental evaluation are specifically designed for complex change — not logframe-driven planning.
Two ToC Examples: Weak vs Strong
Weak ToC: Girls' Education Programme
"We will build schools and train teachers. Girls will attend school, learn, and this will increase women's economic participation and reduce poverty."
  • No explanation of why girls aren't in school now
  • No mention of household-level barriers (marriage pressure, safety)
  • Massive leap from "attend school" to "economic participation"
  • No assumptions stated
  • No explanation of which girls, which pathway, what timeline
Stronger ToC: Girls' Education Programme
"Girls in our target area face dual barriers: household-level pressure for early marriage and inadequate menstrual hygiene facilities in schools. We address both: by working with parents and community leaders to shift norms around girls' education (assumption: norm-shift is possible in 3 years), and by upgrading school WASH facilities (assumption: infrastructure is a binding constraint). If norms shift AND infrastructure improves, enrolment and retention will increase — leading, over 5–10 years, to higher women's education levels, which evidence shows is associated with delayed marriage, lower fertility, and improved household incomes. We acknowledge that employment opportunities and social norms beyond school also determine economic outcomes — these are outside our direct control."
  • Problem analysis explicit
  • Dual pathway explained
  • Assumptions named and testable
  • Boundaries of programme acknowledged
  • Evidence base cited
Change Mechanisms: How Development Programmes Produce Change
Behind every ToC is an implicit theory of how change happens. Making this explicit — the mechanism of change — is what separates a logistically coherent plan from a theoretically grounded one. Different mechanisms require different interventions, different timelines, and different evidence bases.
Mechanism | How It Works | Example
Information / KAP | Knowledge → attitude → practice: people change behaviour when they know better. Weak for entrenched behaviour. | Health education campaigns; nutrition counselling
Economic incentives | Behaviour changes when it is more profitable or cheaper. Effective for market-oriented change. | Conditional cash transfers; insurance products; market linkages
Norm change | Social norms drive behaviour. Change requires critical-mass shifts in what is seen as acceptable. | Community conversation models; SASA!; Tostan
Power shift | Marginalised groups gain power (economic, political, social) to claim rights and resources. | SHG federations; collective bargaining; rights-based advocacy
Systems change | Changes in rules, policies, and institutions create an enabling environment for improved outcomes. | Policy advocacy; institutional strengthening; market systems development
The mechanism shapes the measurement: If your mechanism is "information leads to knowledge leads to behaviour change" — then you need to measure knowledge change AND behaviour change AND test whether the knowledge actually drove the behaviour (not just whether both changed). If your mechanism is "economic incentive" — you need price data, adoption data, and an understanding of the price point at which behaviour shifts.
Mechanism mismatch: One of the most common ToC failures is implementing an information-based intervention for a behaviour driven by social norms or economic constraints. Training women on maternal nutrition does not improve nutritional outcomes if the binding constraint is household food insecurity or women's lack of control over household food decisions. The mechanism doesn't match the problem.
Theory of Change: Quality Checklist
Signs of a Strong ToC
  • Problem analysis is evidence-based and based on consultation with affected communities
  • Causal pathways are explicit — each arrow can be explained in a sentence
  • Assumptions are named, not buried
  • The mechanism of change is specified and matches the evidence base
  • Long-term impact is distinguished from near-term outcomes
  • Boundaries of the programme are acknowledged — what it won't change
  • Multiple stakeholders' roles are identified
  • The ToC can be explained clearly to community members
Red Flags in a Weak ToC
  • Outputs are labelled as outcomes ("200 women trained" as a change outcome)
  • The ToC is a diagram with boxes and arrows but no explanatory text
  • Assumptions section is blank or says "no major assumptions"
  • The pathway from individual change to social/systemic change is missing
  • No time horizon specified for outcomes vs impact
  • The same ToC applies regardless of context — no adaptation for different geographies
  • Nobody on the programme team can explain it in plain language
The living ToC: A ToC should be revisited at least annually — not treated as a fixed document. When monitoring data shows that a step in the chain isn't working as expected, the ToC either needs to be updated (the theory was wrong) or the programme design needs to change (the theory is right but the implementation isn't achieving the prerequisite conditions).
03
Section Three
Results Frameworks & Logframes
The Logical Framework: Structure and How to Read It
The Logframe (Logical Framework Matrix) is a 4×4 planning and monitoring tool developed by USAID in 1969. It remains the most widely used planning format in international development — and the most widely misused. Understanding its structure is essential before knowing its limits.
Hierarchy | Indicators | Means of Verification | Assumptions
Goal / Impact (long-term change) | How we know the goal is achieved | Data sources for goal indicators | Conditions beyond the programme for the goal to follow from the purpose
Purpose / Outcome (why we're doing this) | How we know the purpose is achieved | Data sources for purpose indicators | Conditions beyond the programme for the purpose to contribute to the goal
Outputs (what we produce) | How we know outputs are achieved | Data sources for output indicators | Conditions beyond the programme for outputs to lead to the purpose
Activities (what we do) | Inputs required | Budget lines | Conditions beyond the programme for activities to produce outputs
The "if-and" logic: The logframe has vertical logic (if you do the activities AND assumptions hold, outputs are produced; if outputs are produced AND assumptions hold, purpose is achieved) and horizontal logic (purpose indicators with means of verification in the same row). The assumptions column is where the logic becomes testable — and where most logframes go wrong by leaving it blank.
Classic logframe errors: (1) Activities listed as outputs; (2) Outputs listed as outcomes; (3) Assumptions column blank or saying "favourable policy environment" without specifics; (4) Indicators with no means of verification; (5) Same indicator at multiple levels; (6) More than 5–7 indicators per level (indicator proliferation); (7) The logframe bears no relationship to the programme ToC.
Results Frameworks: Beyond the Logframe
A Results Framework is a broader term for any structured representation of expected programme results. Different donors and organisations use different formats. Understanding the common ones allows practitioners to translate between them and choose the right tool for the context.
Framework | Used By | Key Feature
Logframe | EU, DFID/FCDO (older), most bilateral donors | 4×4 matrix; vertical and horizontal logic; assumptions column
Results Framework (RF) | USAID | Hierarchical results pyramid; strategic objectives and intermediate results; performance management plan
Performance Framework | USAID/PEPFAR | Indicator-focused; progress-vs-target tracking; less causal theory
Impact Map | Social enterprise; SROI | Maps from activities to outputs to outcomes to impacts; used for Social Return on Investment
Outcome Harvesting | Complex, systems programmes | Identifies outcomes that have occurred and works backwards to contribution; no pre-specified indicators
USAID vs FCDO logics: USAID's results framework separates strategic objectives (SOs) from intermediate results (IRs) in a tree structure rather than a matrix. The logic is similar — but the terminology and visual format differ. FCDO's newer programme approaches often use Theory of Change diagrams with Outcome Harvesting rather than traditional logframes. Most practitioners need to be fluent in at least two of these systems.
Indicator fatigue: Large development programmes — particularly government or multi-donor — can end up with 50–100 indicators across a results framework. This is unmanageable. The purpose of the framework is defeated — teams spend all their time collecting data for compliance reporting and have no time to analyse or learn. The discipline of limiting indicators to the essential is one of the most important MEL skills.
From Framework to Implementation: Workplans and Milestones
A results framework tells you what to achieve. A workplan tells you how and when. Milestones are the intermediate checkpoints that indicate whether the programme is on track. Good workplanning is the bridge between strategic planning and operational reality.
Annual workplan vs MEL plan: These are different documents that must be aligned. The workplan specifies activities, responsible staff, timelines, and budgets. The MEL plan specifies what data will be collected, when, by whom, using what tools, and how it will be analysed. When they are developed separately — as often happens — the MEL plan ends up collecting data nobody uses and missing data the workplan needs.
Milestone vs target: A milestone is a qualitative marker of progress — "agricultural groups formed and functional" by month 6. A target is a quantitative level for an indicator — "200 farmers trained" by month 6. Both are needed. Programmes that only set targets can meet their numbers while missing the qualitative preconditions for outcome achievement.
A Realistic Workplan: What It Includes
  • Activity-level detail: Sub-activities, not just activity names. "Training" is not enough — specify preparation, participant recruitment, delivery, follow-up, and data collection from training
  • Resource linkages: Each activity linked to budget line and responsible staff — not just a name but a designated person who knows they are responsible
  • Realistic timelines: Built with input from field staff, not by headquarters. Field realities (monsoon season, harvest period, festival calendar) must be factored in
  • MEL data collection events: Baseline survey, midline, endline, FGDs, case studies — all in the workplan with timelines and budgets, not afterthoughts
  • Review and adaptation points: Quarterly review meetings, progress against milestones, documented decisions — not just monthly financial reporting
Critiquing the Logframe: Why It Doesn't Fit All Contexts
The logframe has been criticised since its introduction. Its dominance in development reflects donor power rather than technical superiority — organisations use it because funders require it, not because it is the best tool for complex social change. Understanding the critique helps practitioners use it better and know when to supplement it.
  • Linear causality assumed: Logframes assume activities lead predictably to outcomes. Real development is non-linear, context-dependent, and emergent.
  • Designed for infrastructure projects: Originally designed for discrete, bounded construction projects — not for behaviour change, governance, or systems work.
  • Assumes pre-specified outcomes: Social change programmes often don't know what outcomes will emerge. Requiring pre-specified indicators creates perverse incentives to only measure what was planned.
  • Inhibits adaptation: Once a logframe is agreed with a donor, changing it requires formal amendment processes — creating institutional resistance to adaptive management.
  • Concentrates power: Logframes are typically designed by external consultants and headquartered staff, not communities. They embed external assumptions about what change should look like.
When Logframes Work and When They Don't
Logframes work well for
Service delivery programmes; well-understood causal pathways; discrete outputs; large-scale programmes needing standardised data; multi-country programmes requiring comparability
Logframes work poorly for
Advocacy and governance; complex adaptive programmes; community-led initiatives; systems change; innovation programmes; programmes in fragile/conflict-affected contexts
The hybrid approach: Many experienced practitioners use a logframe for donor compliance while running a parallel, richer MEL system (with outcome harvesting, most significant change, or developmental evaluation) for genuine learning. The logframe satisfies the funder; the richer system informs the team. This dual-track approach is pragmatic but resource-intensive.
The Results Matrix: A More Flexible Planning Tool
For programmes that find the logframe too rigid, the Results Matrix (or Results Measurement Framework) offers a more flexible structure — keeping the discipline of a results chain while allowing more nuance in how outcomes are described and measured. It is not a replacement for a ToC but an operationalisation of it.
Result Level | Statement | Indicator | Baseline | Target | Data Source | Timeline
Impact | Reduced rural poverty in target districts | % HHs below poverty line | 42% | 32% | Household survey | Year 5
Outcome | Improved agricultural income | Avg annual HH farm income (₹) | ₹42,000 | ₹58,000 | Household survey | Year 3
Output | Farmers adopt climate-resilient varieties | % farmers using improved seed | 18% | 60% | Adoption survey | Year 2
Activity | Training and demo plots established | No. of demo plots operational | 0 | 50 | Field register | Year 1
The baseline column: One of the most commonly neglected elements in results matrices. Without a baseline — a measurement of the indicator's starting value before the programme — you cannot know whether change occurred during the programme. Baseline data collection must be planned before programme start and budgeted accordingly. Post-hoc baseline reconstruction is methodologically unreliable.
Target-setting discipline: Targets must be based on evidence and realistic assessment — not aspirational numbers chosen to look ambitious in a proposal. Targets that are met too easily signal they were set too low. Targets never met signal they were unrealistic or the programme model was wrong. Both undermine credibility and learning. Consult field staff, review comparable programme data, and build in target revision at midterm.
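To make target-tracking concrete: one common convention is percent-of-planned-change achieved, computed from the baseline, current, and target values. The sketch below is illustrative, reusing the poverty figures from the matrix above; the midline value of 38% is hypothetical.

```python
def percent_of_target(baseline: float, current: float, target: float) -> float:
    """Share of the planned baseline-to-target movement achieved so far.

    Works whether the target is an increase or a decrease, because the
    numerator and denominator both carry the direction of change.
    """
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("Target equals baseline; nothing to track")
    return 100 * (current - baseline) / planned_change

# Illustrative values from the matrix above: poverty headcount with
# baseline 42% and target 32%; suppose a midline measures 38%.
print(percent_of_target(42, 38, 32))  # 40.0 -> 40% of planned reduction achieved
```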
Baselines: Why They Are Non-Negotiable
Baseline
A measurement of outcome and impact indicators taken before the programme begins (or at the earliest feasible point). Establishes the starting value against which change will be measured. Without a baseline, you can measure the endpoint but not the change — so you cannot know whether the programme caused the measured condition or inherited it.
Before
Baseline must be collected BEFORE the programme begins. Collecting it 6 months in means the programme has already started changing things.
Same method
Baseline and endline must use identical instruments, sampling methods, and data collection protocols — or the comparison is invalid.
Baseline Design Considerations
  • What to measure: Baseline should cover all outcome-level indicators in the results framework — not just outputs. Many programmes baseline only on output-level variables and then cannot measure outcome change.
  • Comparison population: If the evaluation design involves a comparison group (non-beneficiaries), baseline must be collected from both groups simultaneously. Do not baseline the control group later — conditions change.
  • Disaggregation: Baseline data must be collected with the same disaggregation planned for endline — gender, age, caste, geography. If the endline will report on women's outcomes separately, the baseline needs women's data separately.
  • Sample size: Baseline sample must be large enough to detect the expected change at endline with statistical power — see the sketch after this list. Under-powered baselines lead to endlines that show no significant change even when real change occurred.
  • Administrative data: Where primary data collection is not possible, administrative data (government MIS, health records, school enrolment data) can provide a baseline — but quality must be assessed and limitations documented.
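As a concrete illustration of the sample-size point above, this sketch computes the per-group sample needed to detect a change in a proportion, using the standard two-sample normal-approximation formula. The adoption figures (18% rising to 30%) are hypothetical.

```python
# Sample size per group to detect a change in a proportion
# (standard two-sample formula; normal approximation).
from statistics import NormalDist
from math import ceil

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detect adoption rising from 18% to 30%.
print(n_per_group(0.18, 0.30))  # about 195 households per group
```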
Results Frameworks: Practical Checklist
Element | Quality Standard | Common Failure
Theory of Change | Explicit, evidence-based, tested with field team and community | Missing entirely; or logframe passed off as ToC
Levels of results | Clear distinction between activity, output, outcome, impact, with appropriate indicators at each | Activities mislabelled as outputs; outputs as outcomes
Indicators | 5–7 per level maximum; SMART; gender-disaggregated; based on evidence | 50+ indicators; unmeasurable; activity counts at outcome level
Baseline | Collected before programme start; same sampling and tools as endline; disaggregated | Not collected; collected late; not disaggregated
Targets | Evidence-based; realistic; midterm and endline; includes decline scenarios | Aspirational; no midterm; no process for revision
Means of verification | Specific, accessible, reliable data source named for each indicator | "Field reports" or "project records" — not specific enough
Assumptions | Named; testable; with monitoring plan for critical assumptions | Blank; or generic ("stable political context")
The living framework: A results framework designed at proposal stage must be treated as provisional. Baseline findings, implementation realities, and context changes should trigger a formal framework review. Building in a midterm review process is the most important institutional safeguard against frameworks that become irrelevant to the programmes they describe.
04
Section Four
Indicators: Designing & Selecting
What Makes a Good Indicator
Indicator
A variable that provides a measurable representation of a result. It tells you what to observe to know whether a result has been achieved. An indicator is not the result itself — it is a proxy for it. The quality of your measurement is only as good as the quality of your indicators.
SMART Indicators
S — Specific: precisely defined — what is being measured, in whom, where, when
M — Measurable: can be observed and quantified (or verified qualitatively) reliably
A — Achievable: the target level is realistic given programme scope and timeline
R — Relevant: directly linked to the result it is meant to measure — not a proxy for a proxy
T — Time-bound: measured at a specific point in time or over a defined period
Indicator Anatomy: What Must Be Specified
  • Indicator name: "% of women with access to credit" — this is not specific enough. Does "access" mean applied? Approved? Disbursed? Used?
  • Definition: Precise operational definition of each term in the indicator. "Women" — all women in the target area? Only those enrolled in the programme?
  • Numerator and denominator: For percentage indicators — exactly what goes in each. "% of farmers practising SRI" = farmers practising SRI / farmers receiving SRI training × 100
  • Disaggregation: By gender, age, caste/ethnicity, geography — specified in advance, not chosen after data is collected
  • Data source: Specific named source. "Household survey" — who collects it, how often, with what instrument?
  • Baseline value and target: Numeric values with source and date
The indicator reference sheet: Every indicator in a results framework should have a one-page reference sheet with all of the above. Without it, different field staff will collect the same indicator differently — making aggregation and comparison impossible.
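A reference sheet can also live as a structured record, so every field team works from the same definition. The sketch below is a hypothetical minimal schema — not a donor-mandated format — reusing the SRI indicator from the list above.

```python
# Hypothetical minimal indicator reference sheet as a structured record.
indicator = {
    "name": "% of trained farmers practising SRI",
    "definition": "Practising = at least 3 of 5 core SRI practices observed "
                  "on the farmer's main paddy plot in the reference season",
    "numerator": "Trained farmers observed practising SRI",
    "denominator": "All farmers who completed SRI training",
    "disaggregation": ["gender", "landholding size", "block"],
    "data_source": "Adoption survey, Year 2 Kharif season",
    "baseline": 18.0,  # percent
    "target": 60.0,    # percent
}

def compute(numerator_count: int, denominator_count: int) -> float:
    """Indicator value in percent."""
    return 100 * numerator_count / denominator_count

print(compute(150, 250))  # 60.0 -> target met in this hypothetical case
```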
Types of Indicators: Quantitative, Qualitative, and Proxy
Type | What It Measures | When to Use | Limitation
Quantitative | Numeric counts, percentages, averages, rates | Outputs, service coverage, income, assets | Doesn't capture quality, process, or meaning
Qualitative | Experiences, perceptions, processes, meanings | Behaviour change, empowerment, satisfaction, norms | Harder to aggregate; requires skilled collectors; more expensive
Proxy | An observable variable used to represent a harder-to-measure concept | When direct measurement is not feasible — e.g. "nights slept under net" as a proxy for malaria prevention behaviour | The proxy may not accurately reflect the underlying concept
Process | Quality and fidelity of implementation | Understanding why outcomes did or didn't occur; intervention fidelity | Easy to mistake for outputs; needs clear quality standards
Sentinel | A few key indicators tracked continuously for early warning | Rapid feedback loops; detecting programme drift early | May miss slow-moving changes not captured by sentinel points
The mixed-methods approach: Quantitative indicators tell you how much changed; qualitative indicators tell you why, for whom, and under what conditions. A programme that only tracks quantitative indicators can miss systematic exclusion of sub-groups (who show up in aggregate numbers but not in disaggregated analysis) and can fail to understand why an outcome did or didn't occur. Most outcome-level results frameworks benefit from including at least one qualitative indicator per outcome.
Proxy indicator caution: A classic case in health: "number of ANC visits" as a proxy for improved maternal health outcomes. ANC visits are easier to count than outcomes — but the link is not automatic. If ANC services are of poor quality, attendance does not produce health benefits. Always ask: "Does this proxy actually lead to the outcome it's supposed to represent?"
Gender-Responsive MEL: Disaggregation Is Not Enough
Gender-responsive MEL requires more than disaggregating data by sex. It requires asking different questions about different experiences, using methods that reach women, and measuring changes in gender relations and power — not just women's participation in activities.
The disaggregation fallacy: Disaggregating "% of participants who are women" is gender-sensitive reporting. Gender-responsive MEL asks: Did women and men experience the programme differently? Did women have equal access? Did the programme change women's decision-making power — or just their participation in a programme? Did women's outcomes improve at the expense of their time and unpaid care burden?
The Women's Empowerment in Agriculture Index (WEAI): Developed by IFPRI and used widely in agricultural programmes. Measures 5 domains: decisions about agriculture; access to resources; control of income; leadership; time allocation. Standard instrument available free — appropriate for smallholder agriculture programmes in India. Captures nuances of intra-household dynamics that simple "% women trained" cannot.
Gender-Responsive Indicator Examples
Standard Indicator | Gender-Responsive Version
% of HHs with savings | % of women with savings in their own name and independent access
% of farmers adopting improved seed | % of female-headed HHs AND % of women in male-headed HHs with own decision-making on seed choice
% of children completing school | % by gender, AND reasons for dropout disaggregated by gender
HH income increased | Women's control over HH income; women's independent income; intra-HH resource allocation
Community participation in planning | Women's meaningful participation (not just attendance); women's proposals adopted in community plans
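As a worked illustration of the gap between an aggregate figure and a disaggregated one (the minimum step, not the whole of gender-responsive MEL), the sketch below computes the savings indicators by gender. It assumes pandas is available; the field names and data are invented.

```python
import pandas as pd

# Hypothetical survey extract: one row per respondent.
df = pd.DataFrame({
    "gender":      ["F", "F", "M", "F", "M", "M", "F", "M"],
    "has_savings": [1,    1,   1,   1,   1,   0,   0,   1],
    "own_name_account_with_access": [1, 0, 0, 0, 1, 0, 0, 1],
})

# The aggregate savings rate hides any gap (75% overall in this toy data)...
print("Overall % with savings:", 100 * df["has_savings"].mean())

# ...while disaggregation shows the gender-responsive indicator: here women's
# own-name access (25%) is half the men's rate (50%) despite equal savings rates.
print(df.groupby("gender")[["has_savings", "own_name_account_with_access"]]
        .mean()
        .mul(100)
        .round(1))
```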
Selecting Indicators: The Discipline of Less
Indicator selection is a discipline of subtraction. The natural tendency is to add — "let's also measure X" — without considering the data collection burden, analysis capacity, or whether X actually informs decisions. Excellent MEL systems are notable for what they choose not to measure.
The 5 test questions for any proposed indicator:
(1) Is this indicator directly linked to a result in our framework?
(2) Do we have the capacity to collect this data reliably?
(3) Will this data actually be used to make a decision?
(4) Do we already have this data from another source?
(5) What would we do differently if this indicator showed good results vs poor results?
If you can't answer all five — especially (3) and (5) — cut the indicator.
[Chart: Indicator Selection Criteria — what makes organisations select indicators (survey of 200 Indian NGOs). Source: ImpactMojo illustrative data based on sector experience, not a published study.]
Standard vs Custom Indicators: When to Use Each
Standard indicators — like those from USAID's Foreign Assistance Indicator Library, WHO standard definitions, or India's HMIS — have validated definitions, established baselines, and allow comparison across programmes and geographies. Custom indicators are specific to a programme's unique theory of change. Both have a place.
Standard Indicators: Use When
Sector benchmarking is needed; donor requires them; comparable data exists; results will be used to influence sector policy; they align with your ToC outcomes
Custom Indicators: Use When
Your ToC targets a unique change pathway; standard indicators don't capture your mechanism; local context requires adapted definitions; you need to learn about something not in standard libraries
Commonly Used Standard Indicators in India Development
Domain | Standard Indicator | Source
Health | Institutional delivery rate; ANC visits ≥4; stunting prevalence (HAZ <−2) | NFHS; HMIS
Education | Gross/Net Enrolment Ratio; foundational literacy rate; dropout rate | UDISE+; ASER
Poverty | Consumption per capita; Multidimensional Poverty Index (MPI; OPHI) | NSSO; NFHS
Agriculture | Crop yield (kg/acre); cropping intensity; area under irrigation | State Agri Dept; ICRISAT
WASH | HHs with piped water; ODF status; HHs with handwashing facility with soap | NFHS; SBM MIS
When Indicators Go Wrong: Goodhart's Law and Gaming
Goodhart's Law
"When a measure becomes a target, it ceases to be a good measure." Once an indicator is used for accountability (reporting to funders, evaluating staff performance), people optimise for the indicator — not for the underlying result the indicator was supposed to represent. The measure loses its signal value.
Classic development examples: (1) "Number of trainings conducted" as a target → teams conduct shorter, lower-quality trainings to meet numbers. (2) "% of children enrolled in school" → enrollments inflated by registering children who don't attend. (3) "# of community leaders engaged" → meetings held with cooperative leaders rather than marginalised voices. In each case, the indicator target was met; the underlying objective was not.
How to Reduce Gaming Risk
  • Use indicator combinations: Track both enrolment AND attendance AND learning outcomes — gaming one of three is harder than gaming one of one
  • Include quality alongside quantity: "Number trained" paired with "% who demonstrate competency in post-training assessment" closes the gaming gap — see the sketch after this list
  • Unannounced verification: Random spot-checks, beneficiary verification surveys, and third-party verification reduce data fabrication
  • Protect the data collection function: Field staff responsible for data collection should not be the same staff held accountable for the numbers — creates obvious conflict of interest
  • Make under-performance reportable without consequence: If reporting bad numbers leads to funding cuts or staff dismissal, teams will hide problems. Psychological safety for honest reporting is as important as indicator design
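In the spirit of the first two points above, a simple safeguard is to report paired quantity and quality indicators together and flag sites where they diverge. A sketch with hypothetical site data:

```python
# Hypothetical paired indicators: quantity (number trained) vs
# quality (% passing a post-training competency assessment).
sites = [
    {"site": "Block A", "trained": 310, "competent_pct": 72},
    {"site": "Block B", "trained": 295, "competent_pct": 31},  # numbers met, quality not
    {"site": "Block C", "trained": 140, "competent_pct": 68},
]

TARGET_TRAINED = 250
MIN_COMPETENT_PCT = 60

for s in sites:
    met_quantity = s["trained"] >= TARGET_TRAINED
    met_quality = s["competent_pct"] >= MIN_COMPETENT_PCT
    if met_quantity and not met_quality:
        print(f"{s['site']}: target met on paper, verify training quality")
```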
Indicator Design: Quick Reference
Good Indicators Look Like This
  • "% of women in programme with independent bank account with at least ₹500 balance, disaggregated by marital status, measured at endline (18 months post-enrolment)"
  • "Average crop yield (kg/acre) for Kharif paddy, target beneficiary farmers, Year 2 post-training season, compared to Year 0 baseline"
  • "Community members (women and men, separately) who report feeling safe reporting violence to local authorities — % agree/strongly agree, endline survey"
Weak Indicators Look Like This
  • "Improved access to financial services" — what does access mean? For whom? Measured how?
  • "Agricultural productivity enhanced" — which crop? By how much? Compared to what?
  • "Community awareness raised" — what are they aware of? How is awareness verified?
[Chart: Indicators by Result Level — recommended distribution. Source: ImpactMojo MEL guidance, industry norm for well-designed frameworks.]
The minimum viable indicator set: For a programme of 3–4 outputs with 2 outcomes, aim for: 2–3 impact indicators; 3–4 outcome indicators; 5–8 output indicators; 4–6 activity/process indicators. Total: 14–21. Any framework exceeding 30 indicators will likely not be used effectively. If your donor requires more, negotiate or build a parallel light-touch internal system.
05
Section Five
Data Collection Methods
Household Surveys: Design, Sampling, and Quality
Household surveys are the most common quantitative data collection method in development MEL. They are also one of the most error-prone — due to poor questionnaire design, inadequate sampling, untrained enumerators, and response bias. Understanding survey design basics is essential for anyone commissioning or interpreting survey-based MEL.
Questionnaire design principles: (1) Each question should measure one thing only; (2) Avoid leading questions; (3) Keep questions to a minimum — cognitive burden on respondents leads to lower quality answers over time; (4) Pilot-test with 10–20 respondents before full deployment; (5) Ensure cultural and linguistic appropriateness — especially in India's multilingual context.
Response bias in development MEL: Beneficiaries know they are being assessed by an organisation they depend on — and often give socially desirable answers. "Did the training help you?" asked by the same NGO that delivered it will produce overwhelmingly positive responses regardless of actual utility. Independent data collection, careful question framing, and anonymity assurance all reduce this bias — but never eliminate it entirely.
Sampling: The Fundamentals
Sampling Method | When to Use | Key Requirement
Simple random | When a complete list of the population is available | Complete sampling frame (beneficiary list)
Systematic random | Large lists; every nth element selected | Ordered list with no hidden patterns every n elements
Stratified random | When subgroups (women, Adivasi, different districts) must be represented | Known stratum sizes; strata must be non-overlapping
Cluster sampling | Geographically dispersed populations; efficiency | Clusters (villages) sampled, then households within; increases variance
Purposive | Qualitative studies; selecting information-rich cases | Clear criteria for selection; acknowledge non-representativeness
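A minimal sketch of proportional stratified random sampling from a beneficiary list, using only the Python standard library; the sampling frame and stratum variable are hypothetical.

```python
import random

# Hypothetical sampling frame: beneficiary list with a stratum variable
# (here 400 women and 200 men out of 600).
frame = [{"id": i, "gender": "F" if i % 3 else "M"} for i in range(1, 601)]

def stratified_sample(frame, stratum_key, total_n, seed=42):
    """Proportional stratified random sample without replacement."""
    random.seed(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit[stratum_key], []).append(unit)
    sample = []
    for value, units in strata.items():
        n = round(total_n * len(units) / len(frame))  # proportional allocation
        sample.extend(random.sample(units, n))
    return sample

s = stratified_sample(frame, "gender", total_n=120)
print(len(s), sum(1 for u in s if u["gender"] == "F"))  # 120 total, 80 women
```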
Qualitative Methods: FGDs, KIIs, and Observation
Qualitative methods generate depth, context, and explanation. They answer "why" and "how" — the mechanisms that quantitative surveys cannot reach. They are not less rigorous than quantitative methods — they are rigorous in different ways, requiring skilled facilitation, systematic analysis, and careful attention to positionality.
Key Qualitative Methods
  • Focus Group Discussion (FGD): Structured group discussion (6–8 participants) using a guide. Good for exploring shared norms, community-level perceptions, and generating hypotheses. Risk: dominant voices; social desirability in group settings.
  • Key Informant Interview (KII): In-depth interview with person with specific knowledge — health worker, teacher, community leader, government official. Semi-structured guide; typically 45–90 minutes.
  • In-depth Interview (IDI): Individual interview with programme participant. More personal than KII. Allows for probing experiences, perceptions, and behaviour change processes.
  • Observation: Direct observation of programme activities, community settings, or practices. Can validate or contradict self-reported data. Field notes; time-activity studies.
FGD vs IDI: when to use which: FGDs are efficient for generating community-level norms and shared perceptions. IDIs are appropriate for sensitive topics (domestic violence, income, family planning), individual experiences, and when social desirability in a group would suppress honest responses. Never use FGDs for sensitive or potentially shame-laden topics.
Conducting FGDs in rural India: practical issues: (1) Group composition matters — mixed-caste groups in many contexts will produce dominant-caste narratives; separate by caste/community; (2) Gender-segregated groups almost always produce more honest data from women; (3) Introductory sessions on confidentiality and the purpose of the discussion significantly improve quality; (4) Co-facilitation (facilitator + note-taker) allows better observation and recording; (5) Avoid venues run by the programme — they create power dynamics that suppress criticism.
Participatory Methods: Communities as Evaluators
Participatory Rural Appraisal (PRA) and its variants give communities the tools to assess their own situations and the changes they experience. They shift the locus of evaluation from external expert to community member — valuable for accountability to communities as well as for data quality on locally relevant indicators.
Method | What It Produces | Best For
Participatory mapping | Community-drawn maps of resources, risks, services | Baseline mapping; infrastructure access; hazard exposure
Seasonal calendar | Annual pattern of workload, income, food security, illness | Understanding livelihood seasonality; programme timing
Trend lines | Community assessment of change over time (10–20 years) | Historical context; community-perceived change on key dimensions
Venn diagram | Community's perception of local institutions and relationships | Power mapping; stakeholder analysis; governance assessment
Most Significant Change (MSC) | Stories of the most significant changes experienced | Complex programmes; capturing unexpected outcomes; the human face of impact
Most Significant Change: Developed by Rick Davies (1996), MSC involves: (1) collecting stories of significant change from participants; (2) deliberate selection of the most significant stories by panels at different levels; (3) reporting back the stories selected and why. It produces rich, unexpected insights about programme impact — including negative and unintended impacts — that structured surveys miss entirely.
Caution on participation in MEL: Participatory methods can be co-opted into compliance reporting — communities "participate" in producing outputs that validate programme decisions already made. True participatory MEL requires communities to have genuine power over what gets measured, what counts as success, and what the findings mean. This is organisationally demanding and requires deliberate institutional commitment.
Digital Data Collection: Tools, Trade-offs, and Indian Reality
Mobile-based data collection has transformed field MEL in India over the past decade — reducing errors, enabling real-time data review, and cutting transcription costs. But digital tools also create new constraints and risks that must be managed, particularly in low-connectivity field environments.
Tool | Type | Strengths | Limitations
KoBoToolbox | Mobile survey (free) | Offline capable; skip logic; free; widely used by NGOs | Limited dashboarding; storage limits on free tier
ODK Collect | Mobile survey (open source) | Highly customisable; open source; works offline | Requires server setup; technical capacity needed
CommCare | Case management + surveys | Longitudinal tracking; health worker workflows | Expensive; implementation complexity
DHIS2 | Health information system | Government standard in many states; aggregate reporting | Not designed for programme-level MEL
Google Forms / Sheets | Survey + database | Free; familiar; quick deployment | No offline mode; limited validation; data security concerns
India connectivity realities: 4G connectivity in field areas — Jharkhand, Chhattisgarh, eastern UP — remains unreliable. Offline-first tools (KoBoToolbox, ODK) are essential for field teams, who collect offline and sync data when back in town or in network range. Do not design data collection workflows that depend on real-time connectivity if working in Tier 3+ locations.
Data quality in digital collection: Digital tools reduce transcription error but introduce new risks: (1) Enumerators completing surveys without visiting respondents ("porch surveys"); (2) GPS data showing incorrect locations because devices don't update; (3) Metadata manipulation. GPS verification of interview location, audio recording of a sample of interviews (with informed consent), and supervisor spot-checks remain essential quality controls even with digital tools.
Administrative Data: India's Underused MEL Asset
India has one of the world's richest administrative data ecosystems — NFHS, NSSO/PLFS, UDISE+, HMIS, NCRB, census, MGNREGS MIS, PM-KISAN data, and dozens of state-level systems. For many development programmes, this data can supplement or replace expensive primary data collection for baseline and context analysis.
NFHS-5
National Family Health Survey (2019–21) — district-level data on health, nutrition, women's empowerment. Free download at rchiips.org
PLFS
Periodic Labour Force Survey — quarterly and annual employment data. State-level. Essential for livelihoods baselines.
UDISE+
School education statistics by school, block, district. Enrollment, infrastructure, teacher-pupil ratios. Annual.
HMIS
Health Management Information System — facility-level health service data. Monthly data at block level publicly available.
Using administrative data in MEL: Administrative data can provide: (1) Baseline values for standard indicators before programme start; (2) Trend context — how has the indicator moved over 5–10 years before the programme?; (3) Comparison data — how does the programme area compare to similar non-programme areas?; (4) Corroboration — does primary data align with administrative data trends? Divergence signals either programme effect or data quality issues.
Limitations of administrative data: Coverage gaps in remote areas; institutional incentives to over-report (MGNREGS employment days; school attendance); significant data lags (NFHS is 3+ years old by the time of release); limited disaggregation below district level for most surveys; measurement definitions that may not align with programme indicators. Use it with knowledge of these limitations, not as ground truth.
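As a sketch of the corroboration use above: compare the programme-area trend against the district-level administrative trend for the same indicator. All figures below are invented for illustration; real values would come from NFHS/HMIS downloads and the programme's own surveys (pandas assumed available).

```python
import pandas as pd

# Hypothetical figures for illustration only, e.g. anaemia prevalence (%)
district_trend = pd.Series(        # district-level administrative series
    {2016: 58.0, 2018: 55.5, 2020: 53.2}, name="district_admin")
programme_trend = pd.Series(       # same indicator from programme surveys
    {2016: 57.5, 2018: 52.0, 2020: 46.8}, name="programme_area")

df = pd.concat([district_trend, programme_trend], axis=1)
df["gap"] = df["district_admin"] - df["programme_area"]
print(df)
# A widening gap is consistent with a programme effect, but could also
# reflect data-quality differences: this is corroboration, not attribution.
```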
Data Quality: The Unglamorous Foundation of Good MEL
Data quality is the most underinvested aspect of development MEL. Organisations spend significant resources collecting data, then use it without questioning its reliability or validity. Conclusions drawn from poor-quality data are worse than having no data — they create false confidence.
Five Dimensions of Data Quality (USAID DQA Framework)
Validity
Does the data actually measure what it claims to measure?
Reliability
Would the same measurement give the same result if repeated by different people?
Timeliness
Is the data current enough to inform the decision it's intended for?
Precision
Is the measurement precise enough for the decisions being made?
Integrity
Has the data been protected from manipulation or bias?
Data Quality Assurance in Practice
  • Enumerator training: Comprehensive training before every survey; inter-rater reliability testing; supervised practice interviews
  • Back-checks: Re-interview 5–10% of surveyed households by supervisor within 48 hours — compare key responses for consistency
  • Range checks: Built into digital survey tools — flag responses outside expected ranges (household income above 10x median; age of "child" listed as 60)
  • Completeness checks: Ensure no skip-pattern violations; no systematically missing fields
  • Data Quality Assessment (DQA): Formal review of MEL systems — USAID requires DQAs every 2 years for funded projects. Involves tracing data from field record to report to check integrity and completeness at each step.
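The range and completeness checks above are straightforward to automate. A minimal Python sketch, with hypothetical field names and thresholds mirroring the examples in the list:

```python
def check_record(rec, median_income):
    """Return a list of data-quality flags for one survey record.

    Implements the range and logic checks described above; the field
    names and thresholds are illustrative, not a standard.
    """
    flags = []
    if rec.get("monthly_income", 0) > 10 * median_income:
        flags.append("income > 10x median; verify with back-check")
    if rec.get("respondent_type") == "child" and rec.get("age", 0) >= 18:
        flags.append("'child' respondent aged 18+; skip-pattern or entry error")
    # Completeness: fields that must be present when a skip condition applies
    if rec.get("has_livestock") == "yes" and "livestock_count" not in rec:
        flags.append("livestock_count missing despite has_livestock = yes")
    return flags

record = {"monthly_income": 95_000, "respondent_type": "child",
          "age": 60, "has_livestock": "yes"}
for f in check_record(record, median_income=8_000):
    print("FLAG:", f)
```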
Ethics in Data Collection: Informed Consent, Privacy, and Do No Harm
MEL data collection involves collecting personal information from vulnerable people — often in situations where they have limited power to refuse and limited understanding of how their data will be used. Ethical data collection is not bureaucratic compliance — it is a fundamental obligation to the communities development programmes claim to serve.
Informed Consent
Before collecting data, respondents must: (1) understand the purpose of the study; (2) know they can refuse or stop without consequences; (3) understand how their data will be used and stored; (4) know who will have access. Consent must be genuinely informed and voluntary — not extracted through power relationships or false assurances.
Ethical Principles for Development MEL
  • Voluntary participation: Never imply (even indirectly) that programme benefits depend on participation in surveys. This is coercive and invalidates the consent.
  • Confidentiality: Individual data should not be attributable to named individuals in reports. Disaggregated small-cell data (e.g. 2 Dalit women in a group of 50) may be identifiable even without names.
  • Sensitive topics: Data on domestic violence, HIV status, income, and caste require special precautions — separate interviews, female enumerators for women, clear explanation of limits of confidentiality.
  • Data storage: Respondent-level data should be stored securely, access-restricted, and deleted when no longer needed. Under India's DPDP Act 2023, organisations have legal obligations around personal data processing.
  • Child data: Assent from children AND consent from parents/guardians. Extra precautions for data on child abuse, health, or identity.
  • Feedback to communities: Share findings with the communities who provided data. Withholding results from data subjects is an extractive practice.
Data Management: Storing, Cleaning, and Analysing
Data collected well but stored poorly, cleaned inconsistently, or analysed superficially produces conclusions as unreliable as data collected badly. Data management is the unglamorous infrastructure of good MEL.
1. Data entry / import: Digital (KoBoToolbox export) or paper-to-digital. Double-entry for critical data.
2. Data cleaning: Range checks, logic checks, duplicate removal, missing data handling, outlier review
3. Variable construction: Computing indicators from raw data — score indices, disaggregations, derived variables
4. Analysis: Descriptive statistics; before-after comparisons; subgroup analysis; qualitative thematic coding
5. Storage and documentation: Raw data retained; cleaning log; codebook; version control
Data cleaning is analysis: The decisions made during cleaning — how to handle missing data, what counts as an outlier, how to code qualitative responses — are substantive analytical choices that affect conclusions. Document every cleaning decision. Share the cleaning log with evaluation users. "Raw data never lies but cleaned data always reflects assumptions."
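One way to make that discipline concrete is to have every cleaning function write to a log. A minimal sketch, assuming a simple winsorisation rule; the cap value, field, and figures are hypothetical:

```python
import csv
from datetime import date

cleaning_log = []

def log_decision(rule, n_affected, action, rationale):
    """Record every cleaning decision so analytical choices stay auditable."""
    cleaning_log.append({"date": date.today().isoformat(), "rule": rule,
                         "n_affected": n_affected, "action": action,
                         "rationale": rationale})

def cap_outliers(values, cap):
    """Winsorise extreme values at a cap; a substantive analytical choice."""
    outliers = [v for v in values if v > cap]
    log_decision(rule=f"income > {cap}", n_affected=len(outliers),
                 action=f"capped at {cap}",
                 rationale="likely enumeration error; raw data retained separately")
    return [min(v, cap) for v in values]

incomes = [4_000, 6_500, 5_200, 480_000, 7_100]   # hypothetical monthly incomes (₹)
cleaned = cap_outliers(incomes, cap=50_000)

# The cleaning log ships with the analysis, as argued above
with open("cleaning_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=cleaning_log[0].keys())
    writer.writeheader()
    writer.writerows(cleaning_log)
```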
Tools for analysis in Indian NGO contexts: Excel is the reality for most mid-size organisations — used well (pivot tables, COUNTIF, data validation) it handles most MEL analysis needs. SPSS is common in academic research collaborations. Increasingly, R and Python are used in larger NGOs with MEL specialists. The tool matters less than the analytical rigour and documentation.
06
Section Six
Evaluation Design
Types of Evaluation: Choosing the Right Question
Evaluation is not one thing — it encompasses very different questions requiring different designs and methods. Choosing the right type of evaluation starts with identifying the primary question the evaluation should answer — not with choosing a methodology.
Type | Primary Question | When
Formative | Is the programme design right? Are we reaching the right people? What needs to be adjusted? | Early implementation; piloting
Process / Implementation | Is the programme being implemented as designed? With what quality? | Mid-implementation; fidelity assessment
Outcome | Did the programme achieve its outcomes? For whom? | Near or after completion
Impact | Did the programme cause the outcomes? What would have happened without it? | After completion; requires counterfactual
Economic | Was the programme cost-effective? What is its cost per unit of outcome? | With cost data; comparison available
Developmental | Are we learning enough to adapt? Is the ToC still valid? | Complex, long-term programmes
Summative vs Formative: Formative evaluation asks "how can we improve?" — and findings are used to change the programme. Summative evaluation asks "did it work?" — and findings inform decisions about whether to continue, scale, or end. Most development evaluations try to do both at once — producing reports that are neither genuinely formative (too late to act on) nor genuinely summative (not rigorous enough to make claims about causation).
The midterm evaluation: One of the most valuable and least well-executed evaluation types. Commissioned at programme midpoint — should answer: Is implementation on track? Are early-stage outcomes appearing as expected? What course corrections are needed? Too often, midterms are mini-endlines — documenting progress rather than challenging design assumptions. The most useful midterms are adversarial — actively looking for evidence that the ToC is wrong.
Randomised Controlled Trials: Gold Standard or Overrated?
The Randomised Controlled Trial (RCT) is the most robust design for establishing causal attribution — whether the programme caused an observed change. Random assignment to treatment and control groups makes the two groups statistically equivalent, so differences in outcomes can be attributed to the intervention. Abhijit Banerjee, Esther Duflo, and Michael Kremer won the 2019 Nobel Prize in Economics for their work applying RCTs to poverty reduction.
Counterfactual
What would have happened to programme participants if they had not received the intervention? The fundamental question in impact evaluation. The control group in an RCT provides the counterfactual — they are identical to the treatment group in expectation (due to randomisation) but did not receive the intervention.
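A toy simulation makes the logic concrete: with random assignment, the simple difference in group means recovers the true effect without any baseline adjustment. Everything below is simulated for illustration (Python standard library only):

```python
import random
import statistics

random.seed(1)

# Simulate 2,000 households with varying baseline outcomes, then a
# hypothetical intervention that adds 500 to the outcome for treated units.
population = [{"baseline": random.gauss(5_000, 1_500)} for _ in range(2_000)]
random.shuffle(population)                       # random assignment
treatment, control = population[:1_000], population[1_000:]

TRUE_EFFECT = 500
treat_outcomes = [hh["baseline"] + TRUE_EFFECT + random.gauss(0, 800) for hh in treatment]
ctrl_outcomes = [hh["baseline"] + random.gauss(0, 800) for hh in control]

# Because assignment was random, the difference in means is an unbiased
# estimate of the causal effect: the control group is the counterfactual
# in expectation.
estimate = statistics.mean(treat_outcomes) - statistics.mean(ctrl_outcomes)
print(f"Estimated effect: {estimate:.0f} (true effect: {TRUE_EFFECT})")
```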
RCTs: Strengths and Significant Limitations
Strengths
Best design for causal attribution; controls for confounders; clear counterfactual; replicable; generates sector knowledge
Limitations
Expensive; requires large sample; ethical concerns about control groups; poor external validity; answers "does it work on average" not "for whom under what conditions"; doesn't explain mechanisms
The external validity problem: An RCT shows whether an intervention worked for a specific population in a specific context. Whether that finding generalises to other populations, geographies, or scale is an entirely separate question — and one that RCTs alone cannot answer. The replication crisis in development economics (many findings that didn't replicate at scale or in different contexts) reflects this fundamental limitation. See the critique from Angus Deaton and Nancy Cartwright (2018).
Quasi-Experimental Designs: Rigour Without Randomisation
Most development programmes cannot be randomised — it is unethical to withhold a potentially beneficial programme from a randomly assigned control group. Quasi-experimental designs use statistical or natural variation to construct a credible counterfactual without randomisation.
Design | Mechanism | Requirement
Difference-in-Differences (DiD) | Compare change in treatment vs control group over time — the "double difference" | Parallel trends assumption; baseline data for both groups
Regression Discontinuity (RDD) | Compare units just above and below an eligibility threshold — treat the threshold as quasi-random | Sharp eligibility cutoff; measurable running variable
Instrumental Variables (IV) | Find a variable that predicts programme participation but doesn't directly affect the outcome | Valid instrument (rare); strong first stage
Propensity Score Matching (PSM) | Match treated units with untreated units with similar observable characteristics | All confounders measured; strong on observables
Synthetic Control | Construct a weighted "synthetic" comparison from multiple control units | Pre-treatment data for treatment and potential controls; few treated units
DiD in practice: Difference-in-Differences is the most commonly used quasi-experimental method in development evaluations. If a programme is rolled out in Phase 1 districts and Phase 2 districts (the Phase 2 group serving as controls during Phase 1), DiD estimates the programme effect as: (change in Phase 1 outcomes) minus (change in Phase 2 outcomes over the same period). The critical assumption — parallel trends — means the two groups would have changed similarly without the programme.
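The arithmetic is simple enough to sketch directly; the income figures below are invented for illustration:

```python
# Hypothetical mean household incomes (₹/month), illustration only
phase1 = {"baseline": 6_200, "endline": 8_100}   # programme districts
phase2 = {"baseline": 6_000, "endline": 7_200}   # not-yet-treated districts

change_phase1 = phase1["endline"] - phase1["baseline"]   # 1,900
change_phase2 = phase2["endline"] - phase2["baseline"]   # 1,200

# The double difference nets out the common trend; valid only if the
# parallel-trends assumption holds for the two groups.
did_estimate = change_phase1 - change_phase2
print(f"DiD estimate of programme effect: ₹{did_estimate}/month")   # ₹700
```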
For practitioners — the key message: You don't need to implement quasi-experimental methods yourself, but you need to understand them well enough to: (1) recognise when an evaluation is claiming to show causal impact without sufficient design rigour; (2) commission evaluations that include appropriate counterfactual designs; (3) interpret findings correctly — knowing whether a result is a correlation or a causal estimate.
Non-Experimental Evaluation: Theory-Based and Qualitative Approaches
Most development evaluations in India and globally are non-experimental — they cannot establish a counterfactual. They can still be rigorous, useful, and honest about what they can and cannot claim. The key is matching the evaluation design to the evaluation question — not defaulting to quantitative methods that imply causal claims they can't support.
Pre-post design (before-after): Most common non-experimental design. Measure outcomes before the programme (baseline) and after (endline). The change is attributed to the programme. The fatal flaw: cannot distinguish programme effects from secular trends, seasonal variation, or other external changes. A village's income may have risen for reasons entirely unrelated to the programme.
Process tracing: A theory-based evaluation method. Looks for observable evidence consistent with the causal mechanism claimed in the ToC — and for evidence that would rule out alternative explanations. If the ToC says "training leads to knowledge change which leads to behaviour change," process tracing looks for evidence of knowledge change in the right timeframe, then behaviour change, then tests whether knowledge change predicted behaviour change at individual level.
Contribution Analysis: An Approach for Complex Programmes
Contribution Analysis (John Mayne) is designed for complex programmes where attribution is not possible. It builds a "contribution story" — a plausible, evidence-based narrative that the programme contributed to observed change, acknowledging other factors.
1. State the ToC and what evidence would support or undermine it
2. Collect evidence on outcomes actually observed
3. Assess whether observed outcomes match the ToC's predictions
4. Identify and assess alternative explanations
5. Build and communicate the contribution story with confidence levels
Commissioning an Evaluation: What to Include in a ToR
Terms of Reference (ToR) for an evaluation are one of the most important documents in the MEL process. A well-written ToR produces relevant, useful evaluation findings. A poorly written ToR produces a report that costs significant resources and answers nobody's actual questions.
Essential Elements of a Good Evaluation ToR
  • Programme description and context: Theory of change, target population, geography, budget, timeline, what has been implemented so far
  • Evaluation purpose: Why is this evaluation being done? Who will use the findings and for what decisions?
  • Evaluation questions: Specific, answerable questions — not vague objectives. Maximum 4–6 questions.
  • Evaluation design: What approach is expected — or left to the evaluator to propose? What counterfactual design is feasible?
  • Data requirements: What existing data is available; what primary data is needed; any excluded methods
  • Deliverables and timeline: Inception report, draft, final report, presentation — with specific dates
The evaluation question must be specific: "Evaluate the programme's effectiveness" is not an evaluation question. These are: "To what extent did women in the programme increase their decision-making power over household finances compared to the baseline?" or "What implementation factors explain the difference in adoption rates between high-performing and low-performing districts?" Each is specific, answerable, and linked to a decision.
Internal vs external evaluation: Internal evaluation (by programme staff) has better access to data and context; lower cost; often more practical recommendations. Risks: bias and lack of independence. External evaluation (by consultants) brings independence and methodological expertise; seen as more credible by donors. Risks: superficiality, weak context knowledge, and extractive practice. Best practice: an external evaluator working in genuine partnership with the internal MEL team throughout the process.
Evaluation Design Decision Guide
Evaluation Methods by Rigour and Feasibility for Typical Indian NGO Contexts
Source: ImpactMojo MEL guidance — illustrative positioning based on resource requirements and methodological strength
Matching Design to Question and Resources
If your primary question is... | Consider...
What happened and did we reach our targets? | Pre-post design; quantitative survey + qualitative
Did we cause this change? | RCT (if feasible) or quasi-experimental (DiD, RDD)
Why did/didn't outcomes occur? | Process evaluation; qualitative methods; contribution analysis
Is our ToC still valid? | Theory-based evaluation; outcome harvesting
What's the value for money? | Cost-effectiveness or cost-benefit analysis
How can we improve this year? | Formative evaluation; rapid qualitative assessment
Most important MEL decision: Choose your evaluation question before your method. The most common evaluation failure — using a pre-post quantitative survey regardless of the question — produces expensive data that cannot answer whether the programme caused anything or why outcomes did or didn't occur. Methodology follows question.
07
Section Seven
Attribution & Contribution
The Attribution Problem: Why Causation Is Hard in Development
Attribution
The degree to which observed outcomes can be ascribed to the programme, rather than to other factors (external trends, other actors, seasonality, maturation effects). Full attribution requires knowing what would have happened in the absence of the programme — the counterfactual. In most development contexts, this is impossible to know with certainty.
The attribution gap in practice: A women's livelihoods programme reports that beneficiary household incomes rose 35% over three years. Is that the programme? A national economic upswing? Rising commodity prices? A government scheme running in the same area? The fact that incomes rose does not tell us why they rose. Without a counterfactual — data on what happened to comparable non-beneficiaries over the same period — the 35% figure is a correlation, not a programme effect.
The Attribution Spectrum
High
RCT with large sample
Random assignment eliminates selection bias. Strongest causal claim. Expensive and often infeasible.
Med+
DiD, RDD, IV
Quasi-experimental. Credible causal claim if assumptions hold. Requires strong design discipline.
Med
Matched comparison
PSM; comparison villages with similar characteristics. Weaker — only controls for observable confounders.
Low
Pre-post without comparison
Cannot rule out external factors. Shows change occurred; cannot attribute it.
Contribution vs Attribution: Reframing the Question
For most development programmes — especially those addressing complex, multi-causal problems — the honest question is not "did we cause this?" but "what was our contribution to this outcome, alongside other factors?" Contribution is not a lesser standard than attribution; it is a more honest one for the contexts development programmes operate in.
The question is not whether the programme made a difference but whether it made more of a difference than alternative uses of the same resources.
Paraphrasing John Mayne — contribution analysis framing
Making the Contribution Case: What Evidence Is Needed
  • The ToC held: Evidence that the causal pathway specified in the ToC actually played out — the intermediate steps happened in the right sequence
  • Other explanations addressed: Key alternative explanations investigated and ruled out or assessed for their relative importance
  • Beneficiary attribution: The people who changed attribute the change (at least partly) to the programme — not to something else
  • Dose-response: Those who received more of the intervention show more change — consistent with a causal story even without a control group (a minimal check is sketched after this list)
  • Timing: Change appeared after the programme started, not before — and in areas where the programme operated, not in areas where it didn't
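A minimal dose-response check in Python: group participants by exposure level and compare mean change. The records, fields, and cut-points are hypothetical; a monotonic gradient supports, but does not prove, contribution:

```python
from statistics import mean

# Hypothetical participant records: training sessions attended ("dose")
# and change in a practice-adoption score ("response")
participants = [
    {"sessions": 2, "score_change": 0.4}, {"sessions": 9, "score_change": 2.1},
    {"sessions": 5, "score_change": 1.0}, {"sessions": 12, "score_change": 2.8},
    {"sessions": 3, "score_change": 0.6}, {"sessions": 10, "score_change": 2.4},
]

# Group by exposure band and compare mean change; a rising gradient is
# consistent with (not proof of) a causal contribution
low = [p["score_change"] for p in participants if p["sessions"] <= 4]
mid = [p["score_change"] for p in participants if 4 < p["sessions"] <= 9]
high = [p["score_change"] for p in participants if p["sessions"] > 9]
print(f"low dose: {mean(low):.2f} | mid: {mean(mid):.2f} | high: {mean(high):.2f}")
```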
Honest communication: A contribution claim should be stated as such: "The evidence suggests the programme was a significant contributing factor to observed income improvements, alongside favourable commodity prices and improved road access." This is more credible than claiming full attribution — and more useful for learning.
Realist Evaluation: What Works for Whom in What Circumstances
Realist evaluation, developed by Ray Pawson and Nick Tilley, rejects the simple "did it work?" question. Instead, it asks: "What works, for whom, in what circumstances, and why?" It is particularly useful when a programme works in some contexts and not others — and you need to understand why.
CMO Configuration
Context + Mechanism + Outcome. Realist evaluation seeks to identify which contexts activate which mechanisms to produce which outcomes. The same intervention can produce different outcomes in different contexts — because context determines whether the mechanism fires. Understanding the CMO configuration is how you build transferable knowledge about what works and when.
Realist example: A microfinance programme works well in one district (high outcome) and poorly in another (low outcome). A standard evaluation would average the two and conclude "moderate impact." Realist evaluation asks: what is different between the contexts? It finds: in the high-outcome district, women's self-help groups already existed — the programme activated a peer-accountability mechanism that was absent in the low-outcome district. The mechanism (peer accountability) was present in one context and absent in the other. This is actionable learning.
Practical value for Indian practitioners: India's diversity — across caste, language, agro-ecology, governance quality, and social norms — means context variation is extremely high. A programme that works in Tamil Nadu may fail in UP. Realist evaluation provides the analytical framework to understand and document why — and to build a knowledge base that allows programmes to be adapted to different contexts rather than failing silently.
Systems Change Evaluation: Measuring What Logframes Cannot
Programmes aimed at systems change — shifting policies, markets, norms, or institutional arrangements — cannot be evaluated with standard logframe-and-indicator approaches. The change they seek is not in individual outcomes but in the system that produces those outcomes. Evaluating systems change requires different questions, methods, and timelines.
What does systems change look like? Policies changed; market rules that were unjust no longer apply; norms that excluded certain groups no longer dominate; institutions that failed marginalised people now serve them. These are changes in the architecture of a system — not in individual behaviour. Measuring them requires tracing policy change, mapping power shifts, and assessing whether the system itself is more equitable or resilient.
Outcome Harvesting for systems change: Developed by Ricardo Wilson-Grau. Rather than measuring pre-specified indicators, Outcome Harvesting collects "outcomes" — changes in the behaviour, relationships, policies, or practices of actors — then works backwards to assess the programme's contribution. It captures emergent, unexpected, and systemic changes that standard evaluation misses. Used by Ford Foundation, Open Society, and many advocacy-focused programmes.
Systems Change Indicators: Examples
System | Change Type | Indicator Approach
Policy | New legislation enacted; policy implemented with budget | Policy mapping; implementation tracker; budget analysis
Market | New market actors, rules, or norms emerge | DCED standards; market system mapping at baseline and endline
Norms | Shift in what is considered acceptable behaviour | Social norms measurement tools (EMERGE); longitudinal qualitative
Power | Marginalised groups have greater voice and agency | Power analysis frameworks; Most Significant Change; network analysis
Institutional | Government systems deliver for excluded groups | Service delivery scorecards; beneficiary feedback; PEFA assessments
Cost-Effectiveness Analysis: Getting to Value for Money
Cost-effectiveness analysis (CEA) asks: how much did it cost to achieve one unit of outcome? Combined with impact evidence, it allows comparison across programmes and informs resource allocation decisions. It is increasingly demanded by donors — but rarely done well.
CEA
Cost-Effectiveness Analysis: cost per unit of outcome. E.g. cost per child prevented from dropping out of school. Compares efficiency without monetising outcomes.
CBA
Cost-Benefit Analysis: monetises outcomes and compares to costs. Requires strong assumptions about value of outcomes. More powerful but harder to do credibly.
Calculating cost per outcome: Total programme cost (including staff time, overheads, capital costs — not just direct activity costs) ÷ number of outcome units achieved. If a programme spends ₹2 crore and 500 women complete vocational training and are employed after 1 year: cost per employed woman = ₹40,000. Is this good value? Only answerable by comparison to alternatives.
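A sketch of the full-cost calculation above, with hypothetical cost lines that together sum to the ₹2 crore in the example. Note that staff time, overhead, and amortised capital are included, not just direct activity costs:

```python
# Full-cost CEA sketch; all figures hypothetical (₹)
costs = {
    "direct_activities": 11_000_000,
    "staff_time": 5_500_000,
    "management_overhead": 2_500_000,
    "capital_amortised": 1_000_000,   # share of equipment/vehicles used
}
total_cost = sum(costs.values())            # ₹2 crore
women_employed_after_1yr = 500

cost_per_outcome = total_cost / women_employed_after_1yr
print(f"Cost per employed woman: ₹{cost_per_outcome:,.0f}")   # ₹40,000
# Meaningful only against a benchmark, e.g. the same outcome achieved
# through a government scheme or an alternative NGO model.
```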
Common CEA Errors and How to Avoid Them
  • Incomplete cost capture: Using only direct activity costs and ignoring staff time, management overhead, and capital — produces artificially low cost-per-outcome figures
  • Using outputs not outcomes: "Cost per training session" is not CEA. Cost per sustained behaviour change is.
  • Ignoring programme quality: A cheap programme with poor quality outcomes is not more cost-effective than a more expensive one with durable outcomes
  • No comparison benchmark: A cost figure means nothing without a reference point — what does the same outcome cost through government schemes, or through alternative NGO approaches?
GiveWell and the accountability shift: GiveWell's rigorous CEA of global health charities has moved significant philanthropic funding towards cost-effective interventions (bed nets, vitamin A supplementation, direct cash transfers). The same pressure is now moving into Indian philanthropy — donors asking "how does your cost-per-outcome compare to the best alternative?"
OECD-DAC Evaluation Criteria: The Industry Standard
The OECD Development Assistance Committee (DAC) evaluation criteria are the most widely used framework for structuring development evaluation questions. Originally five criteria, a sixth — coherence — was added in 2019. Understanding them allows practitioners to structure evaluation questions and communicate findings using internationally recognised language.
Relevance
Is the programme addressing the right problem, for the right people, in the right way given the context?
Coherence
Does the programme fit with other interventions by the same organisation and by other actors? Are there synergies or contradictions?
Effectiveness
Did the programme achieve its intended outcomes and impact? To what extent?
Efficiency
Were inputs (money, time, staff) used well to achieve outcomes? Could the same results have been achieved with less?
Impact
What are the broader, longer-term effects — positive and negative, intended and unintended — on people and systems?
Sustainability
Will the benefits continue after the programme ends? Are changes institutionalised? Is the system stronger?
Using OECD-DAC in ToR: An evaluation ToR will often structure evaluation questions under each criterion: "Under Effectiveness, the evaluation will assess: to what extent did the programme achieve its outcome targets?" This provides international comparability and a complete assessment framework. But practitioners should note: not every evaluation needs to address all six — prioritise based on the decision being made.
Sustainability: the most neglected criterion: Evaluations consistently find that sustainability — whether programme gains are maintained after the programme ends — is the weakest element of most development programmes. Infrastructure built by NGOs deteriorates without maintenance systems. Behaviour changes revert when support structures are removed. Learning that was never institutionalised disappears with staff turnover. A programme that achieves its outcomes but produces nothing durable is fundamentally incomplete.
Unintended Effects: What Programmes Don't Plan For
All programmes produce unintended effects — positive and negative. Positive unintended effects are often ignored in reporting because they weren't planned. Negative unintended effects are often actively concealed because they undermine the programme narrative. Rigorous MEL actively looks for both — because they contain the most valuable learning.
Type | Development Examples
Positive unintended | Women's SHG programme reduces men's alcohol consumption (peer pressure from wives); school feeding programme improves sibling enrolment (family strategy)
Negative unintended | Microfinance increases women's access to credit but also increases domestic violence when husbands lose control of finances; WASH programme builds toilets but women still use open spaces (safety at night)
Displacement | Programme provides jobs in target area; unemployed people from neighbouring areas migrate in, displacing programme beneficiaries
Dependency | Emergency food distribution over multiple years reduces households' agricultural investment — because they anticipate free food
Elite capture | Community-based targeting of benefits is captured by local elites who control access; the most marginalised are excluded
Do No Harm: Borrowed from medical ethics, "do no harm" in development MEL means systematically assessing whether the programme is producing negative effects on any sub-group — and designing MEL systems to detect these. Programmes that only measure their intended targets cannot see the harm they may be causing to excluded or differently-affected groups.
How to detect unintended effects: (1) Open-ended questions in endline surveys: "What, if anything, about the programme has been harmful or problematic for you or your family?"; (2) FGDs designed to surface negative experiences (requires significant trust and careful facilitation); (3) Outcome Harvesting which collects all significant changes, not just positive ones; (4) Complaints and feedback mechanisms that are actually functional and safe to use.
Attribution & Contribution: Key Takeaways for Practitioners
What Good Practice Looks Like
  • Be explicit about the level of causal claim you can make given your evaluation design — don't overstate
  • Use contribution framing when full attribution is not possible — it is honest, not weak
  • Design MEL systems to detect unintended effects — positive and negative
  • Consider realist questions (for whom, in what circumstances) when interpreting results
  • Track sustainability from the start — not as an endline afterthought
  • Build cost data collection into programme operations — so CEA is possible without a separate costing exercise
On overstatement: The development sector's credibility problem is substantially a product of overclaimed attribution — programmes presenting correlation as causation, before-after change as programme effect, and output counts as impact. Funders have become appropriately sceptical. Rigorous, honest causal claims — even when modest — are more credible and more useful than inflated ones.
Common Errors to Avoid
  • "Beneficiaries reported improved incomes" — self-report without baseline or counterfactual is not impact evidence
  • Presenting before-after change as the programme effect without acknowledging secular trends or external factors
  • Reporting the best-performing sub-group results as overall programme results
  • Attributing all observed change to your programme when other actors were working in the same area
  • Not reporting negative or null findings because they are inconvenient for donor reporting
  • Conducting CEA using direct activity costs only, excluding staff and overhead
The sector norm you can change: When you report findings honestly — including what didn't work, what you're uncertain about, and what may have caused harm — you signal to your funders that your positive findings are credible. The organisation that only reports success is not credible. The organisation that reports honestly is.
08
Section Eight
Learning Systems & Adaptive Management
The Learning Function: From Data to Decisions
Learning — using evidence from monitoring and evaluation to improve programme design and implementation — is the least developed of the three MEL functions. Most organisations have strong monitoring systems and conduct evaluations. Far fewer have systems that reliably translate evidence into changed practice.
Why learning is hard: (1) Evidence takes time to collect and analyse — often too late for the decision it was meant to inform; (2) Findings that show poor performance create institutional defensiveness, not curiosity; (3) Learning requires space for honest internal conversation — rare in high-pressure delivery environments; (4) Staff turnover means learning leaves when people leave; (5) Donors rarely fund learning activities explicitly — it's treated as overhead.
Building a Learning System: Key Components
  • Regular review rhythm: Monthly field review meetings; quarterly programme reviews with evidence; annual strategic reflection. Not just financial reporting.
  • Structured learning events: After-action reviews; pause-and-reflect sessions; peer learning across programmes
  • Documentation: Learning notes; lesson logs; failure reports (actively encouraged). Not just evaluation reports that nobody reads.
  • Decision protocols: Explicit agreements about which evidence triggers which decisions. If attendance drops below X%, we do Y. If an outcome indicator is off-track at midterm, we convene a review within 30 days. (A minimal sketch follows this list.)
  • Knowledge management: Where is learning stored? Who has access? How does it inform new programme design?
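Decision protocols can be as simple as a rule table reviewed at each meeting. A minimal Python sketch; the indicator names, thresholds, and actions are all illustrative:

```python
# Decision-protocol sketch: explicit evidence-to-action rules, checked at
# each quarterly review. Thresholds and actions are hypothetical.
PROTOCOLS = [
    {"indicator": "session_attendance_pct", "trigger": lambda v: v < 60,
     "action": "Field review within 2 weeks; revise session timing"},
    {"indicator": "outcome_ontrack_pct", "trigger": lambda v: v < 80,
     "action": "Convene programme review within 30 days"},
]

latest = {"session_attendance_pct": 54, "outcome_ontrack_pct": 85}

for rule in PROTOCOLS:
    value = latest[rule["indicator"]]
    if rule["trigger"](value):
        print(f"{rule['indicator']} = {value} -> {rule['action']}")
```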
Adaptive Management: Adjusting Course Based on Evidence
Adaptive management is the organisational practice of systematically adjusting programme design and delivery in response to evidence from monitoring and evaluation. It is the operationalisation of the L in MEL — not a concept but a set of practices and conditions that allow an organisation to change course when evidence indicates it should.
Adaptive Management
A structured, intentional approach to learning from implementation and adjusting accordingly. It involves: (1) regular evidence review; (2) willingness to challenge assumptions; (3) authority to make changes at programme level without lengthy approval chains; (4) documentation of what was changed and why. FCDO's "Thinking and Working Politically" and USAID's "CLA" (Collaborating, Learning, Adapting) are the major frameworks.
CLA — Collaborating, Learning, Adapting (USAID): USAID's CLA framework requires implementing partners to actively collaborate with other actors (not work in silos), invest in learning (not just data collection), and deliberately adapt programmes based on evidence. CLA is assessed in project design, annual reviews, and performance management. It represents a significant shift from compliance-focused MEL to learning-focused MEL.
What prevents adaptation: (1) Donor approval requirements for any programme change — creating institutional disincentive to adapt; (2) Fear of appearing to have "failed" if original plan changes; (3) Staff incentives tied to delivery of the original workplan rather than achievement of outcomes; (4) No systematic review process — evidence exists but is never formally reviewed; (5) Leadership that doesn't create space for honest internal discussion of what isn't working.
PDIA: Problem-Driven Iterative Adaptation
Problem-Driven Iterative Adaptation (PDIA) is an approach to development practice and evaluation developed by Matt Andrews, Lant Pritchett, and Michael Woolcock at Harvard's Building State Capability programme. It starts from a specific, locally-defined problem — not a donor-designed solution — and iterates towards solutions through rapid experimentation and learning.
Traditional Model
Plan → Implement → Evaluate
Pre-designed solution; implementation fidelity valued; evaluation at end. Assumes you know the solution in advance.
PDIA Model
Problem → Iterate → Learn → Adapt
Problem-driven; rapid cycles of action and reflection; solution emerges from local experimentation. MEL is built into each iteration.
PDIA in practice: (1) Deconstruct the problem — what are its component parts? What could be changed? (2) Push authority to the team closest to the problem — they can experiment. (3) Run rapid experiments — small, low-cost tests of potential solutions. (4) Reflect and learn — what worked? Why? (5) Iterate — don't scale until you understand what you're scaling. PDIA explicitly rejects the logframe-and-scale model for complex state capability and governance problems.
PDIA and MEL: MEL in a PDIA approach is continuous and embedded — not a periodic assessment function. Data is collected in real-time; learning loops are weekly, not annual. The team runs mini-evaluations constantly: "Did this experiment work? What does the evidence say? What do we try next?" This requires a fundamentally different MEL architecture from compliance-driven logframe monitoring.
Developmental Evaluation: Evaluation as a Learning Partner
Developmental Evaluation (DE), developed by Michael Quinn Patton, is designed for programmes working in complex, dynamic situations where the programme is still being developed. Unlike summative evaluation (which assesses a completed programme) or formative evaluation (which improves a defined programme), DE partners with innovators throughout the development process — using evaluation thinking to support learning and adaptation in real time.
Developmental evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments.
Michael Quinn Patton — Developmental Evaluation (2011)
When Developmental Evaluation Is Appropriate
  • Social innovation: The programme is genuinely new — no established best practice; the team is figuring out the approach as they go
  • Complex adaptive systems: The context is dynamic; the programme needs to change as the context changes; emergence is expected
  • Scaling: Moving a proven approach to a new context where it may work differently
  • Crisis response: Rapidly evolving situation where standard planning is impossible
DE vs standard evaluation: A standard evaluation assesses against pre-determined criteria. DE helps the programme figure out what its criteria should be as it learns. A standard evaluator stands outside the programme. A developmental evaluator is embedded within it — using systems thinking, real-time data, and facilitated reflection to support the team's learning. Not all funders accept DE instead of logframe-based MEL — negotiate early.
Learning Loops: Single, Double, and Triple
Chris Argyris and Donald Schön's theory of organisational learning distinguishes types of learning by how deeply they challenge existing assumptions (the triple loop is a later extension of their single- and double-loop model). Development MEL systems typically enable single-loop learning at best. Moving to double- and triple-loop learning is what distinguishes genuinely learning organisations from compliance-oriented ones.
Single-Loop Learning
Detect error; correct within existing norms and strategy. "Our coverage is low — let's hire more field staff." Does not question whether the approach is right.
Double-Loop Learning
Detect error; question the norms and strategy themselves. "Our coverage is low — is field-based delivery the right model? Are we reaching the right people?" Challenges underlying assumptions.
Triple-Loop Learning
Question how we learn — our learning systems themselves. "Why do we consistently fail to detect coverage problems early? What is wrong with our MEL system?" Organisational epistemology.
Why development organisations stay at single-loop: Double and triple-loop learning require challenging leadership decisions, funding model assumptions, and organisational identity. "Our approach to community mobilisation isn't working" implies that senior leadership chose the wrong strategy — which creates political risk. MEL that only produces single-loop learning is structurally safe and structurally limited.
Creating conditions for double-loop learning: (1) Leadership that explicitly invites challenge: "What are we getting wrong? What should we do differently?"; (2) Psychological safety — staff must be able to report problems without career consequences; (3) External facilitation of strategic reviews — an external perspective breaks groupthink; (4) Regular assumption testing built into MEL plan; (5) Failure reports rewarded, not punished.
Knowledge Management: Making Learning Institutional
Knowledge management (KM) is the set of systems and practices through which organisations capture, organise, share, and use the knowledge they generate. In development organisations, learning that is not institutionalised leaves when people leave. High staff turnover — common in the sector — means the same mistakes are made repeatedly, the same questions answered from scratch, and the same communities re-surveyed for information already collected.
40–60%
Annual staff turnover reported in some studies of Indian development NGOs — meaning learning leaves faster than it is institutionalised
Exit gap
Most organisations have no structured process for capturing knowledge when experienced staff leave — it walks out the door
Practical KM for Development Organisations
  • Programme documentation: Not just reports — decision logs, assumption tests, implementation lessons, what changed and why. Stored accessibly, not in email chains.
  • After-action reviews: Structured reflection at end of each major activity. What went well? What didn't? What do we do differently next time? 60 minutes; documented.
  • Learning notes: Short (1–2 page) structured summaries of key lessons from programmes — not long evaluation reports. Shareable across teams.
  • Onboarding knowledge transfer: New staff given structured access to programme history, decisions made, lessons learned — not starting from scratch.
  • Exit interviews: Structured departure process to capture what leaving staff know that isn't written down.
  • Cross-programme learning: Platforms for sharing what works across programmes within an organisation — not siloed learning.
Community Feedback Mechanisms: Closing the Accountability Loop
Community feedback mechanisms — systems through which programme participants can communicate their experiences, concerns, and suggestions to the organisation — are a critical but widely underdeveloped component of learning systems. They represent accountability downward (to communities) rather than upward (to donors).
ALNAP community feedback research: The Active Learning Network for Accountability and Performance in Humanitarian Action has documented systematically that beneficiary feedback mechanisms are: (1) rarely designed with communities; (2) often not accessible to the most marginalised; (3) frequently not responded to — making them performative rather than functional; (4) almost never used to change programme design. Feedback without response is not accountability.
Designing Functional Feedback Mechanisms
  • Accessible: Multiple channels (hotline, SMS, suggestion box, community interface person, regular feedback meetings) for different literacy levels and comfort with technology
  • Confidential: Mechanisms that don't expose complainants to retaliation — anonymous options essential for sensitive feedback
  • Responsive: Every complaint/feedback acknowledged; substantive response provided; feedback loop closed. If you don't respond, the mechanism will stop being used.
  • Disaggregated: Track who is using the mechanism — if it only reaches literate, male, higher-caste community members, it's not reaching the most marginalised
  • Integrated with MEL: Feedback data reviewed at programme review meetings; patterns analysed; decision trails documented
Core Humanitarian Standard: For humanitarian and many development organisations, the Core Humanitarian Standard (CHS) — a set of nine commitments to quality and accountability — includes community participation and feedback as a central requirement. CHS verification is increasingly required by major donors.
09
Section Nine
Communicating Evidence
Writing MEL Reports That Are Actually Used
Most development evaluation reports are not read beyond the executive summary — and often not even that. They are produced for compliance, filed, and forgotten. The discipline of writing reports that generate genuine engagement and action is separate from the discipline of generating good data — and equally important.
Why reports don't get used: (1) Too long — 80-page reports are never read in full; (2) Written for evaluators, not for decision-makers — technical language, excessive methodology description, insufficient actionable findings; (3) Findings buried in passive voice and hedging; (4) Recommendations are vague ("further strengthen community engagement") rather than actionable ("replace monthly community meetings with fortnightly meetings focused on X"); (5) No follow-up on whether recommendations were implemented.
The key question before writing: Who will use this report? For what decision? That question should determine the structure, language, length, and findings emphasis — not the template provided by the donor or evaluator's preferences.
MEL Report Writing: Key Principles
  • Lead with findings, not methodology: The executive summary should state findings first; methods in an appendix for those who need them
  • Be direct about significance: Not "data suggests a possible trend toward improved outcomes" but "women's agricultural income increased 28% compared to a 6% increase in the comparison group"
  • Show uncertainty honestly: Confidence levels; sample size limitations; alternative interpretations — but state them clearly, not as an escape hatch
  • Make recommendations specific and assigned: "Programme team should revise the training curriculum by Q2 2025 to address low retention rates in Module 3" — not "training quality should be improved"
  • Balance positive and negative: A report with no negative findings is not credible. A report with useful negative findings, contextualised and accompanied by recommendations, is valuable.
Data Visualisation for MEL: Making Evidence Legible
Well-designed data visualisation makes evidence accessible to audiences who won't read tables — programme staff, community representatives, government officials, and media. Poorly designed visualisation obscures findings, misleads, or simply confuses. Data visualisation is a communication skill, not a technical one.
Chart Type | Best For | Common Misuse
Bar chart | Comparing quantities across categories | Too many bars; non-zero axis baseline inflating differences
Line chart | Trends over time; continuous data | Used for unconnected categories; cherry-picked time periods
Pie chart | Parts of a whole — with ≤5 segments | Too many segments; 3D effects that distort area
Scatter plot | Relationship between two variables | Implying a relationship without showing a regression line or correlation statistic
Map / choropleth | Geographic variation in indicators | Using raw counts instead of rates; misleading colour scales
The non-zero baseline error: The most common data visualisation manipulation in development reporting — starting a bar chart's Y-axis at a value above zero. A change from 40% to 44% shown on an axis from 38% to 46% looks like a dramatic improvement. On a proper 0–100% axis, it is modest. Always check that progress visualisations use zero-origin axes unless the scale is genuinely irrelevant to interpretation.
Show disaggregation visually: Disaggregated data should be disaggregated visually — not averaged out. If women's outcomes and men's outcomes differ significantly, show them in separate bars or separate lines. Averaging hides systematic exclusion. Side-by-side comparisons of subgroup data are one of the most powerful ways to make equity issues legible to non-technical audiences.
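A minimal matplotlib sketch combining both points above: zero-origin axes and visually disaggregated subgroups. The figures are invented for illustration (matplotlib assumed available):

```python
import matplotlib.pyplot as plt

# Hypothetical adoption rates (%), disaggregated rather than averaged
groups = ["Women", "Men"]
baseline = [40, 52]
endline = [44, 63]

x = range(len(groups))
fig, ax = plt.subplots()
ax.bar([i - 0.2 for i in x], baseline, width=0.4, label="Baseline")
ax.bar([i + 0.2 for i in x], endline, width=0.4, label="Endline")
ax.set_xticks(list(x))
ax.set_xticklabels(groups)
ax.set_ylim(0, 100)          # zero-origin axis; avoids inflating small changes
ax.set_ylabel("Adoption rate (%)")
ax.legend()
plt.savefig("disaggregated_outcomes.png", dpi=150)
```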
Communicating Evidence to Different Audiences
Audience | What They Care About | Format & Length | Language & Framing
Donor / funder | Compliance with indicators; value for money; lessons for future funding | Formal report, 30–60 pages; PPT summary; structured findings | Technical; aligned to donor frameworks (OECD-DAC); quantitative where possible
Programme leadership | What to do differently; strategic decisions; risks to address | Management brief, 5–10 pages; presentation; actionable recommendations | Direct; decision-oriented; findings ranked by significance
Field teams | What is working and what isn't in their area; practical changes | Visual one-pager; verbal debrief; local-language summary | Simple; concrete examples from their context; focus on what they control
Communities | What happened with their data; whether they will be heard; practical changes to the programme | Community meeting; visual display; verbal presentation in local language | Non-technical; participatory; feedback invited; specific to their context
Government | Alignment with government programmes; evidence for policy; scalability | Policy brief, 2–4 pages; meeting presentation | Aligned to government priorities; evidence quality emphasised; scale potential
Media / public | Human interest; sector significance; accountability | Press release; infographic; story-led summary | Plain language; specific numbers; compelling case studies
Using MEL Evidence for Policy Influence
Good MEL evidence can influence government policy, sector practice, and funding priorities — but only if it is communicated to the right people at the right time in the right form. Evidence does not speak for itself. Policy influence requires understanding how policy is actually made — which is rarely through rational evidence uptake alone.
The evidence-policy gap: Policy is shaped by politics, institutional interests, existing commitments, and power — not just evidence. Research by the Overseas Development Institute (ODI) shows that "what works" evidence is rarely the primary driver of policy decisions, even when available. Evidence enters policy through windows of opportunity, champions within government, and fit with the political agenda. MEL practitioners who understand this use different strategies than those who simply publish findings.
Evidence-to-Policy: What Actually Works
  • Build relationships early: Engage government officials during programme design, not after evaluation. Co-creation produces champions.
  • Frame around government priorities: Evidence that shows how your programme advances the government's stated goals gets further than evidence that challenges them
  • Produce policy-ready formats: Two-page policy briefs; structured presentations; comparison with government programme data — not 80-page evaluation reports
  • Identify policy windows: Budget cycles; new government programmes being designed; elections and new administration priorities — these are when evidence gets heard
  • Build coalitions: Evidence championed by multiple organisations, sector networks, and respected institutions is harder to ignore than a single NGO's self-evaluation
10
Section Ten
MEL in Practice: Common Failures
10 Common MEL Failures — Numbers 1 to 5
1. MEL designed after programme, not before
Retrofit MEL produces indicators disconnected from the programme's actual theory and no baseline data. MEL must be co-designed with the programme at inception.
2. No baseline data collected
Without a baseline, the endline shows conditions but not change. Reconstructing a baseline post-hoc from recall or administrative data is methodologically weak.
3. Indicator overload
50+ indicators produce data that cannot be collected reliably, analysed meaningfully, or used for decisions. The discipline of selecting fewer, better indicators is the harder MEL skill.
4. Outputs reported as outcomes
Training counts, meeting numbers, and beneficiaries-reached figures appear in outcome rows. Donors and organisations both collude in this — it is easier to count than to measure change.
5. Data collected but not used
Field teams collect data; it goes to headquarters; it appears in a report; nobody makes a decision based on it. The decision to collect data must be preceded by a decision about how it will be used.
10 Common MEL Failures — Numbers 6 to 10
6. Evaluation findings ignored
The evaluation is conducted; the report is submitted; nothing changes. No action plan; no follow-up; no accountability for recommendations. The most expensive MEL failure because significant resources were spent for no organisational benefit.
7. Lack of disaggregation
Aggregate data hides differential outcomes. A programme may show strong average gains while systematically excluding Dalit women, conflict-affected households, or the poorest quintile. Disaggregation by gender, caste, disability, and geography is not optional.
8. No learning from failure
Programmes that don't work are terminated or quietly redesigned without documented lessons. The sector lacks a culture of structured failure learning — which means the same failures recur across organisations and contexts.
9. MEL seen as the MEL officer's job
When MEL is siloed in one person or department, programme staff do not own the data, do not question findings, and do not integrate learning into their work. MEL as an organisational function requires whole-team ownership.
10. Confusing compliance MEL with learning MEL
The MEL system is designed entirely around donor reporting requirements. It produces data in the formats funders need but generates no internally useful learning. The system treats MEL as a cost of doing business, not as a driver of programme quality.
MEL Culture: When the Problem Is Not Technical
Many MEL failures are not caused by poor technical skills or inadequate tools. They are caused by organisational culture — the unwritten rules about what is safe to say, what gets rewarded, and what is allowed to fail. Technical MEL skills are easier to address than cultural ones — but cultural change is where sustainable MEL improvement lies.
Culture eats strategy for breakfast. And it eats MEL systems for lunch.
ImpactMojo adaptation — based on Drucker's observation on strategy and culture
Signs of a Healthy MEL Culture
  • Leadership publicly acknowledges what isn't working and asks what the evidence suggests they do differently
  • Staff raise problems with current approaches in review meetings without fear of consequence
  • Negative evaluation findings are presented to funders with analysis and proposed responses — not buried in appendices
  • Programme changes are documented with the evidence that prompted them
  • MEL is discussed at all-staff meetings, not only in MEL team meetings
Signs of a Compliance MEL Culture
  • Reports are written to satisfy donors, not to generate internal learning
  • Field staff see data collection as additional burden, not as part of their work
  • Evaluation findings are reviewed once at a meeting and then filed
  • Staff are afraid to report that targets are not being met
11
Section Eleven
MEL for Different Contexts
MEL in Humanitarian Contexts: Speed, Uncertainty, and Accountability
Humanitarian MEL operates under conditions that make standard development MEL difficult or impossible: rapid onset crises, displaced populations, insecurity, extremely short timelines, and high stakes for both beneficiaries and organisations. Methods must be adapted to this context — not simply transplanted from stable development settings.
Key differences from development MEL: (1) A baseline is often impossible — populations were displaced before data collection could begin; (2) Population movement makes tracking individuals impossible; (3) Standard survey methods are inappropriate in insecure or chaotic contexts; (4) Real-time accountability to beneficiaries matters most when they are most vulnerable; (5) Evaluation must happen faster to be useful for ongoing response decisions.
Adapted MEL Methods for Humanitarian Response
  • Rapid needs assessment: Fast, often phone-based surveys with limited sampling rigour — used for operational decisions, not impact claims
  • MEAL (Monitoring, Evaluation, Accountability, Learning): The addition of "Accountability" explicitly recognises feedback mechanisms as a core function, not an add-on
  • Real-time monitoring: Weekly or daily output tracking; early warning indicators; operational dashboards (see the sketch after this list)
  • Community consultation: FGDs and KIIs conducted rapidly; findings fed back into response design within days, not months
  • Sphere standards: Minimum standards for humanitarian response (water, food, shelter, health) provide benchmarks for rapid assessment without full baseline data
  • Inter-agency coordination: In large-scale responses, MEL is coordinated through clusters (UN coordination mechanism) to avoid duplication and ensure compatible data
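To make the real-time monitoring bullet above concrete: here is a minimal sketch of a weekly early-warning flag of the kind an operational dashboard computes. Site names, targets, and the 80%-of-plan threshold are all illustrative assumptions:

```python
import pandas as pd

# Hypothetical weekly distribution counts per site; in practice this would
# come from the response MIS or a mobile data collection export.
weekly = pd.DataFrame({
    "site": ["Camp A", "Camp A", "Camp B", "Camp B", "Camp C", "Camp C"],
    "week": [1, 2, 1, 2, 1, 2],
    "kits_distributed": [480, 455, 510, 150, 490, 470],
})

TARGET = 450  # planned weekly kits per site (illustrative)

# Early warning: flag any site below 80% of plan in the latest week.
latest = weekly[weekly["week"] == weekly["week"].max()].copy()
latest["flag"] = latest["kits_distributed"] < 0.8 * TARGET
print(latest.loc[latest["flag"], ["site", "kits_distributed"]])
```

Run daily or weekly, this is enough to trigger a field check on Camp B long before a monthly report would surface the problem.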
MEL for Advocacy and Policy Change
Advocacy programmes — those seeking to change policies, laws, norms, or resource allocations — are among the hardest to evaluate. The change they seek is non-linear, long-term, and shaped by political dynamics outside the organisation's control. Standard logframe indicators ("number of policy briefs produced") cannot capture whether advocacy is succeeding.
Why advocacy MEL is different: Attribution is nearly impossible — policy change results from multiple actors, political windows, and historical context. Causation is deeply contested. Timelines are long (policy change may take 10–15 years). The "beneficiaries" are often future populations, not current programme participants. And the most important changes are often relationships, influence, and agenda-setting — all hard to measure.
Advocacy MEL Approaches
  • Milestone tracking: Key policy milestones tracked over time — bill introduced; committee hearing; ministerial meeting; government consultation launched; policy adopted; budget allocated
  • Influence mapping: Who are the key decision-makers? What is the organisation's relationship with each? Has it changed? Periodic stakeholder relationship mapping.
  • Policy monitoring: Track whether target policies are implemented as enacted — the gap between policy adoption and implementation is often where advocacy work falls short
  • Most Significant Change: Captures unexpected advocacy wins and losses; useful for understanding what combination of factors opened or closed policy windows
  • Contribution narrative: Annual assessment of the organisation's contribution to observed policy changes — acknowledging other actors and factors
MEL for Small NGOs: Proportionate, Practical, and Useful
Small organisations — working on budgets under ₹2 crore, with no dedicated MEL staff — need MEL systems that are proportionate to their scale and capacity. The same principles apply, but the tools and approaches must be simpler, cheaper, and more embedded in normal programme operations.
Minimum viable MEL for a small NGO: (1) A clear, one-page Theory of Change; (2) 5–8 indicators with simple, defined data sources; (3) A baseline — even a light-touch one using existing secondary data and a small FGD; (4) Quarterly data review built into team meetings; (5) Annual reflection on whether the approach is working; (6) One learning note per year capturing what changed and why. This is MEL. It doesn't require a logframe of 50 indicators or a dedicated MEL officer.
Free MEL Resources for Small Organisations
Resource | What It Provides | Access
BetterEvaluation.org | Comprehensive, free database of evaluation methods, tools, and examples | betterevaluation.org — free
3ie Evidence Portal | Systematic reviews and impact evaluations; sector evidence base | 3ieimpact.org — free
KoBoToolbox | Free mobile data collection; up to 10,000 submissions/month | kobotoolbox.org — free for NGOs
NFHS district data | Baseline data for health and women's empowerment indicators at district level | rchiips.org — free download
Harvard PDIA toolkit | PDIA approach materials; guides for problem-driven adaptive work | buildingstatecapability.hks.harvard.edu — free
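For teams using KoBoToolbox from the table above, submissions can also be pulled programmatically rather than exported by hand. The sketch below follows the endpoint pattern of KoBo's v2 REST API; the server URL, asset UID, and token are placeholders, and the exact path should be verified against your own server's API documentation:

```python
import requests

SERVER = "https://kf.kobotoolbox.org"   # or your organisation's KoBo server
ASSET_UID = "aXXXXXXXXXXXXXXXXXXXXX"    # placeholder: your form's asset UID
API_TOKEN = "your-api-token"            # placeholder: from account settings

# Fetch all submissions for one form as JSON.
resp = requests.get(
    f"{SERVER}/api/v2/assets/{ASSET_UID}/data.json",
    headers={"Authorization": f"Token {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
submissions = resp.json()["results"]  # list of submission dicts
print(f"{len(submissions)} submissions downloaded")
```

A scheduled script like this is how a small NGO gets weekly monitoring data into a spreadsheet or dashboard without any manual export step.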
MEL for Government Programmes: Scale, Politics, and MIS
Government programmes in India operate at a scale where MEL faces unique challenges. A scheme reaching 100 million beneficiaries cannot be evaluated with a single pre-post survey. Political incentives create systematic pressure to report success. And the line between MEL as accountability and MEL as political performance is often blurred.
India's government MEL landscape: NITI Aayog's Development Monitoring and Evaluation Office (DMEO) evaluates flagship central schemes — with varying quality and independence. The Planning Commission's Programme Evaluation Organisation, whose functions DMEO absorbed, was historically more candid than ministry self-evaluations. State governments have development monitoring units of varying capacity. DARPG runs the Centralized Public Grievance Redress and Monitoring System (CPGRAMS) — a national citizen feedback mechanism that, in principle, provides real-time demand-side data on service delivery.
Working with Government MIS for MEL
  • MGNREGS MIS: Real-time data on employment, wages, and works, available at gram panchayat level at nrega.nic.in. Incentives for over-reporting are well documented; most useful for trend analysis.
  • PM-KISAN dashboard: Beneficiary-level data on cash transfer receipt. Can be used to verify targeting against other data sources.
  • HMIS: Monthly facility-level health service data. Block and district level available. Significant reporting gaps in remote areas.
  • UDISE+: Annual school-level education data — enrolment, retention, infrastructure, teacher data. High coverage; some data quality issues.
  • Aspirational Districts: Delta ranking system comparing district performance on 49 indicators across 5 sectors — publicly available, updated monthly. Useful for NGOs working in aspirational districts to contextualise their work.
12
Section Twelve
Further Reading & Resources
Foundational Texts in MEL
Theory of Change & Planning
  • Weiss, C.H. (1995) — "Nothing as Practical as Good Theory" — foundational paper on theory-based evaluation
  • Mayne, J. (2001) — "Addressing Attribution through Contribution Analysis" — Canadian Journal of Program Evaluation
  • Pawson, R. & Tilley, N. (1997) — Realistic Evaluation — Sage Publications
  • Patton, M.Q. (2011) — Developmental Evaluation — Guilford Press
Impact Evaluation
  • Gertler et al. (2016) — Impact Evaluation in Practice (2nd ed.) — World Bank; free PDF download
  • Banerjee & Duflo (2011) — Poor Economics — randomised experiments and poverty; accessible for non-economists
  • Deaton & Cartwright (2018) — "Understanding and Misunderstanding Randomized Controlled Trials" — Social Science & Medicine; the key RCT critique
  • Andrews, Pritchett & Woolcock (2017) — Building State Capability — PDIA framework; free from Harvard
India-Specific MEL Resources
  • DMEO evaluations — niti.gov.in/dmeo — evaluations of flagship schemes; publicly available; variable quality but valuable for understanding government assessment of its own programmes
  • 3ie India portfolio — 3ieimpact.org — systematic reviews and impact evaluations with India focus; particularly strong on health, WASH, and agriculture
  • IDinsight India work — idinsight.org — applied impact evaluation and MEL advisory with several India NGO case studies
  • IFMR LEAD — ifmrlead.org — research on financial inclusion, microfinance, and rural livelihoods; strong quantitative evaluations in Indian contexts
Open Access Journals & Platforms
  • Journal of Development Effectiveness — peer-reviewed; broad MEL methods and findings
  • BetterEvaluation.org — the most comprehensive free MEL methods database
  • ALNAP — alnap.org — humanitarian MEL; accountability research
MEL Vocabulary: 20 Terms Every Practitioner Should Know
Term | Definition
Theory of Change | Causal hypothesis about how programme activities lead to intended impact, with explicit assumptions
Logframe | 4×4 planning matrix linking hierarchy of results to indicators, means of verification, and assumptions
Counterfactual | What would have happened in the absence of the programme — the fundamental question in impact evaluation
Attribution | Degree to which outcomes can be ascribed to the programme rather than to other factors
Contribution | The programme's role in producing outcomes alongside other factors — honest alternative to attribution claims
Baseline | Pre-programme measurement of outcome indicators, collected before the intervention begins
SMART indicator | Specific, Measurable, Achievable, Relevant, Time-bound
Goodhart's Law | When a measure becomes a target, it ceases to be a good measure
RCT | Randomised Controlled Trial — gold standard experimental design for causal attribution
DiD | Difference-in-Differences — quasi-experimental design comparing change in treatment vs control groups over time
OECD-DAC criteria | Six evaluation criteria: relevance, coherence, effectiveness, efficiency, impact, sustainability
Outcome harvesting | Evaluation method that identifies outcomes that occurred and works backwards to assess contribution
MSC | Most Significant Change — participatory method collecting stories of significant change from participants
Realist evaluation | Context + Mechanism + Outcome — asks what works for whom in what circumstances
PDIA | Problem-Driven Iterative Adaptation — iterative, learning-based approach to complex development challenges
CEA | Cost-Effectiveness Analysis — cost per unit of outcome achieved
Data quality | Five dimensions: validity, reliability, timeliness, precision, integrity (USAID DQA framework)
Formative evaluation | Evaluation during implementation to improve the programme — contrasted with summative (did it work?)
Adaptive management | Structured practice of adjusting programme design based on evidence from monitoring and evaluation
MEAL | Monitoring, Evaluation, Accountability, and Learning — adds explicit accountability to communities
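Of the terms above, DiD benefits most from a worked example. A minimal sketch with invented numbers, using statsmodels, shows that the DiD estimate is just the coefficient on the treated × post interaction:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: outcome y for treatment and comparison groups, before and after.
# All numbers are invented purely to illustrate the estimator.
df = pd.DataFrame({
    "treated": [0, 0, 0, 0, 1, 1, 1, 1],
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],
    "y":       [10, 12, 11, 13, 10, 18, 9, 17],
})

# DiD = (treated post - treated pre) - (comparison post - comparison pre);
# in a regression it is the coefficient on the interaction term.
model = smf.ols("y ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # 6.0 for this toy data
```

Here the treated group improves by 8 and the comparison group by 2, so the DiD estimate is 6; the estimate is causally meaningful only under the parallel-trends assumption.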
MEL Tools Reference: What to Use When
MEL Stage | What You Need | Free Tools / Resources | When to Use It
Design | ToC development; results framework; indicator selection | USAID Iris indicator library; BetterEvaluation; ODI's RAPID framework | Before programme start — non-negotiable
Baseline | Survey design; sampling; data collection; analysis | KoBoToolbox; ODK; NFHS district data; Census; PLFS | Before programme activities begin with target population
Monitoring | Regular output tracking; MIS; field data collection | KoBoToolbox; Google Sheets; DHIS2; CommCare | Ongoing throughout programme — monthly at minimum
Qualitative assessment | FGD guides; KII guides; MSC collection; observation protocols | BetterEvaluation method database; ODI RAPID qualitative tools | Quarterly or as needed for deeper understanding
Mid-term evaluation | Mixed methods; quantitative progress review; qualitative deep dive | OECD-DAC criteria; process tracing templates; contribution analysis guide | At programme midpoint — typically Year 2 of a 4-year programme
Endline evaluation | Full outcome survey; comparison analysis; economic analysis | Impact Evaluation in Practice (World Bank — free); GiveWell CEA framework | At or near programme close
Learning | After-action review templates; learning notes; KM platform | USAID CLA toolkit; Harvard PDIA materials; ALNAP learning tools | Ongoing; structured review events quarterly and annually
India's Open Data Ecosystem for MEL Practitioners
India has an unusually rich open data ecosystem for a middle-income country. Development practitioners who know where to look can access district-level baseline data, programme MIS data, and administrative statistics that dramatically reduce the cost of evidence-informed MEL — without primary data collection for every indicator.
Key National Data Sources
Source | Coverage | Access
data.gov.in | National open data portal — thousands of government datasets across sectors | data.gov.in — free
NSO / MoSPI | National Statistics Office — NSSO surveys, census, PLFS, CES | mospi.gov.in — free microdata
IIPS / NFHS | National Family Health Survey — health, nutrition, women's empowerment by district | rchiips.org — free download
Devdatalab | Economic research data for India — Asher, Novosad and colleagues | devdatalab.org — free
SHRUG | Socioeconomic High-resolution Rural-Urban Geographic dataset for India — village-level data 1990–2011 | devdatalab.org/shrug — free
Sector-Specific Dashboards
  • nrega.nic.in — MGNREGS MIS; panchayat-level employment and wage data; real-time
  • pmkisan.gov.in — PM-KISAN beneficiary data; state-wise dashboard
  • udiseplus.gov.in — School-level education data; enrolment, retention, infrastructure
  • hmis.nhp.gov.in — Monthly health facility data; block and district level
  • bharatkosh.gov.in — Government receipts data useful for budget tracking
  • pib.gov.in — Press Information Bureau — official government programme announcements and data releases
Using geospatial data: BHUVAN (bhuvan.nrsc.gov.in) provides satellite imagery and GIS data for India. Combined with socioeconomic indicators, it allows mapping of programme coverage against poverty, infrastructure, and climate risk — increasingly used in targeting and evaluation.
MEL as a Career: Roles, Skills, and Growth Pathways
MEL has grown from a compliance function into a recognised professional domain within the development sector. Dedicated MEL roles exist at every level — from field MEL officer to MEL director to research and evaluation specialist. The skills required are genuinely cross-disciplinary: research methods, data analysis, programme design, communication, and facilitation.
Role Level | Typical Responsibilities | Skills Needed
Field MEL Officer | Data collection; tool administration; community interaction; data entry and basic analysis | Survey administration; KoBoToolbox/ODK; local language; data entry
MEL Coordinator / Analyst | Indicator tracking; report drafting; results framework management; basic statistical analysis | Excel/SPSS; report writing; logframe management; qualitative methods
MEL Manager | MEL system design; evaluation commissioning; donor reporting; team management | Evaluation design; quantitative and qualitative methods; ToC facilitation; donor frameworks
Research / Evaluation Specialist | Impact evaluation design; advanced statistical analysis; methods development; sector knowledge generation | R/Stata/Python; experimental and quasi-experimental methods; academic writing; research ethics
Skills gap in Indian MEL: The sector faces a shortage of practitioners who combine: (1) strong quantitative skills (sample size calculation, statistical analysis, quasi-experimental methods); (2) qualitative research skills (FGD facilitation, thematic analysis); (3) programmatic understanding (knowing what MEL findings mean for implementation); and (4) communication skills (translating findings for non-technical audiences). Most practitioners are strong on 1 or 2 of these — few on all four.
Building your MEL skills: ImpactMojo's MEL flagship course goes significantly deeper than this 101 deck — covering advanced evaluation methods, statistical analysis for MEL, and hands-on logframe and ToC construction. DevDiscourses has 500+ open-access papers including the core MEL literature cited in this deck. All free at impactmojo.in.
Where MEL Is Heading: Emerging Trends for Practitioners
AI and Machine Learning in MEL
Satellite data analysis, natural language processing for qualitative data coding, predictive analytics for programme targeting. Already used by IDinsight and J-PAL for rapid assessment. Raises new data ethics questions around consent and algorithmic bias.
Decolonising Evaluation
Growing movement to shift evaluation design, questions, and interpretation from Northern donor frameworks to locally-led standards. AEA equity statement; African Evaluation Association; Centre for Development Innovation's work on locally-led evaluation. Relevant for Indian organisations asserting their own evaluation standards against FCDO and USAID requirements.
Real-Time Feedback Systems
SMS, WhatsApp, and IVR-based rapid feedback mechanisms. Collect beneficiary experience data continuously rather than in annual surveys. Ushahidi, Feedback Labs, and several India-specific platforms. Raises response capacity challenges — feedback collected but not responded to is worse than none.
Equity-Focused MEL
Moving beyond gender disaggregation to full intersectional analysis — caste, disability, migration status, age. Emerging tools for measuring exclusion (Social Inclusion Index), disability-inclusive MEL frameworks from CBM, and caste-disaggregated data collection in India-specific contexts.
Open Data and Transparency
Growing norm of publishing evaluation findings and data — not just positive reports. AidData, DevTracker, and India-specific portals increase transparency. IATI standard for aid data transparency increasingly required by international donors. Growing evidence that organisations that publish negative findings build more credibility.
Climate-Adaptive MEL
Development outcomes are increasingly disrupted by climate events — droughts, floods, heatwaves. MEL systems need climate risk integration: tracking how climate events affect programme outcomes; building climate vulnerability into baseline data; distinguishing programme effects from climate-driven changes.
MEL Practitioner Self-Assessment: Where Are You?
Foundation Level
  • Can explain MEL as an integrated system, not three separate functions
  • Can distinguish outputs from outcomes from impact — with examples
  • Can explain why a Theory of Change matters and what a good one includes
  • Can apply SMART criteria to assess or improve an indicator
  • Can name at least 3 types of evaluation and when each is appropriate
  • Can explain attribution and why claiming it is usually unjustified in development contexts
Intermediate Level
  • Can design a minimum viable MEL plan for a programme with 2–3 outcomes
  • Can explain DiD and RCT — their logic, assumptions, and limitations
  • Can design gender-responsive indicators that go beyond sex disaggregation
  • Can write evaluation questions that are specific and answerable
Advanced Level
  • Can commission a rigorous external evaluation and manage the evaluator relationship
  • Can design a quasi-experimental evaluation for a programme with phased rollout
  • Can facilitate a participatory ToC process with programme staff and community members
  • Can analyse and present evaluation findings to different audiences — technical and non-technical
  • Can assess and improve an organisation's MEL culture, not just its MEL tools
Next steps: ImpactMojo's MEL flagship course covers the intermediate and advanced levels in depth — with case studies, MCQs, and practical tools. Free at impactmojo.in/courses/mel/
Quick MEL Diagnostic: Assessing Any Programme's MEL System
Use these ten diagnostic questions to rapidly assess the quality of a MEL system — whether you are designing a new programme, reviewing an existing one, or evaluating a partner organisation's MEL capacity.
# | Diagnostic Question | Red Flag if... | Good Sign if...
1 | Does the programme have an explicit Theory of Change? | It's a diagram with no explanatory text, or doesn't exist | It's written, was developed with field staff, and is used in reviews
2 | How many indicators are in the results framework? | More than 30 indicators total | Under 20, with clear rationale for each
3 | Was baseline data collected before the programme? | No baseline; or collected 6+ months after start | Collected before first beneficiary contact; disaggregated
4 | Are outcomes measured, or only outputs? | All indicators are training counts, meetings held, materials distributed | At least 3 outcome-level indicators with a measurement plan
5 | Who uses the monitoring data? | "We send it to the donor in the quarterly report" | "We review it monthly in team meetings and it triggers decisions"
6 | Has an external evaluation been conducted? | Never; or the "evaluation" was a self-assessment with no independence | External evaluation with ToR; findings shared with management; action plan exists
7 | Are negative findings shared with stakeholders? | Reports show only achievements against targets | Reports include what didn't work and why
8 | Is data disaggregated by gender and other equity dimensions? | All data is aggregate; no breakdown by gender, caste, geography | Standard disaggregation by gender; additional by caste/disability where relevant
9 | Is there a functional community feedback mechanism? | A suggestion box that was installed but nobody monitors | Multiple channels; response protocols; data reviewed in programme meetings
10 | When did the programme last change because of evidence? | "I can't think of a time" or "the donor wouldn't allow it" | Specific example in the last 12 months, with documentation
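If you run this diagnostic across many programmes or partners, the ten questions can be turned into a simple checklist score. This is a hypothetical convenience sketch with arbitrary banding, not a validated rubric:

```python
# Score each diagnostic as True (good sign) or False (red flag).
# The labels abbreviate the table above; the bands are illustrative only.
answers = {
    "explicit ToC": True,
    "under 20 indicators": True,
    "baseline before start": False,
    "outcome-level indicators": True,
    "monitoring data used internally": False,
    "external evaluation done": True,
    "negative findings shared": False,
    "data disaggregated": True,
    "feedback mechanism functional": False,
    "evidence-driven change in last year": False,
}

score = sum(answers.values())
band = "strong" if score >= 8 else "developing" if score >= 5 else "weak"
print(f"{score}/10 good signs -> MEL system looks {band}")
```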
Sample Logframe: Women's Livelihoods Programme
Level | Statement | Indicators | Means of Verification | Assumptions
Goal / Impact | Improved economic security and wellbeing for women in target districts of MP | % women HHs below poverty line (NSSO definition); Women's MPI score | Household survey (endline Y5); MPI calculation from survey | Broader economic environment remains stable; no major shocks
Purpose / Outcome | Women have sustained income from own enterprises, with independent control over earnings | % women with own enterprise operational at 18 months; % women with independent bank account; Avg monthly net income (₹) | Enterprise tracking survey (Y2, Y4); bank account verification; income survey | Markets remain accessible; women retain control over earnings; husbands supportive
Output 1 | Women trained in business skills and linked to credit | No. women completing 6-week training; % completing with 70%+ attendance; % accessing credit within 3 months | Training attendance register; assessment records; MFI disbursement records | Women can attend training (mobility; childcare); MFI partnership functional
Output 2 | Market linkages established for trained women | No. buyer linkages established; % women with at least 1 formal buyer relationship | Sales records; buyer contracts; field observation | Buyers maintain commitment; transport infrastructure accessible
Activities | Training delivery; business plan support; market linkage facilitation; ongoing mentoring | Inputs: ₹1.8Cr budget; 6 field staff; 3 district offices | Financial reports; HR records | Adequate staff recruited and retained; budget released on schedule
Sample MEL Plan Structure: Women's Livelihoods Programme
Indicator | Level | Definition | Baseline | Y2 Target | Y4 Target | Data Source | Frequency | Responsible
Women HHs below poverty line (%) | Impact | HHs where women are primary earner, below NSSO poverty line | 48% | – | 35% | HH survey (primary) | Baseline; Y4 | MEL Manager
Women with own enterprise operational at 18mo (%) | Outcome | Enterprise operating with positive revenue for 3+ consecutive months at 18-month mark | 12% | 55% | 65% | Enterprise tracker (primary) | Quarterly | MEL Coordinator
Women with independent bank account (%) | Outcome | Account in own name with independent access, verified by passbook review | 31% | 70% | 80% | Passbook verification | Y1; Y2; Y4 | Field MEL Officer
No. women completing training | Output | Completed 6-week course with ≥70% attendance, as per attendance register | 0 | 600 | 1,200 | Training register | Monthly | Field Officer
% women with credit access within 3mo | Output | Loan disbursed by partner MFI within 3 months of training completion | 0% | 60% | 70% | MFI records | Quarterly | MEL Coordinator
The full MEL plan also includes: data collection tools for each indicator; quality assurance protocols; reporting schedule; budget for MEL activities (allocated at 7% of programme budget); and a learning calendar with dates for quarterly review meetings, mid-term evaluation (Month 24), and final evaluation (Month 46). MEL plan reviewed annually against implementation realities.
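A quarterly review of the plan above reduces to simple arithmetic. This minimal pandas sketch, with invented "current" values set against the Y2 targets, illustrates the calculation a MEL Coordinator would run before each review meeting:

```python
import pandas as pd

# Indicators and Y2 targets from the sample MEL plan above; the "current"
# values are invented to illustrate the quarterly calculation.
plan = pd.DataFrame({
    "indicator": [
        "Women with own enterprise operational at 18mo (%)",
        "Women with independent bank account (%)",
        "No. women completing training",
        "% women with credit access within 3mo",
    ],
    "y2_target": [55, 70, 600, 60],
    "current":   [41, 66, 580, 38],
})

plan["achievement_pct"] = (plan["current"] / plan["y2_target"] * 100).round(1)
plan["on_track"] = plan["achievement_pct"] >= 90  # review threshold (illustrative)
print(plan[["indicator", "achievement_pct", "on_track"]])
```

The point is not the code but the habit: the numbers are computed before the meeting, so the meeting spends its time on the indicators that are off track.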
Go Deeper: ImpactMojo Courses for MEL Practitioners
Flagship Course
Monitoring, Evaluation & Learning
The comprehensive MEL flagship course — covering advanced evaluation design, indicator development, quantitative and qualitative methods, and learning systems. Multiple modules, MCQs, case studies, and lexicon.
impactmojo.in/courses/mel/ →
Flagship Course
Data Visualisation for Development
How to present MEL evidence visually — chart selection, data storytelling, presenting to different audiences, and common visualisation errors in development reporting.
impactmojo.in/courses/dataviz/ →
Flagship Course
Development Economics
The economic theory behind development programmes — what evidence says about poverty, growth, agriculture, credit, and human capital. Provides the sector knowledge that makes MEL findings interpretable.
impactmojo.in/courses/devecon/ →
ImpactMojo 101 Series: This deck is part of the ImpactMojo 101 Series — standalone 100-slide primers covering development concepts at no cost. Other decks in the series cover Climate Essentials, Development Economics, and more. All free, forever, at impactmojo.in
Gender-Responsive MEL: A Practical Toolkit
Gender Analysis Tools for MEL
Tool | What It Measures | When to Use
WEAI (Women's Empowerment in Agriculture Index) | 5 domains: decisions, resources, income, leadership, time | Agricultural programmes; baseline and endline
GiHA (Gender in Humanitarian Action) | Gender-specific needs and capacities in emergency contexts | Humanitarian response; rapid assessment
Oxfam Gender at Work framework | Formal/informal rules and individual/social change across two axes | Women's rights and empowerment programmes
Social Norms Assessment (SoNA) | Prevailing social norms affecting women's behaviour; peer expectations | Behaviour change programmes; SBCC evaluation
EMERGE standards | Evidence and Methods for Gender Responsive Evaluation — comprehensive framework | Any evaluation with significant gender dimensions
Time use studies in gender MEL: One of the most neglected but revealing tools for gender analysis. Time use surveys measure how men and women spend their time across productive, reproductive, and care activities. In India, women perform on average 5–6 hours of unpaid care work daily (NSO Time Use Survey data) — invisible in most MEL systems. Programmes that improve women's incomes without reducing their unpaid care burden may not improve wellbeing and may increase time poverty.
Separate data collection for women: In mixed-gender contexts in many parts of rural India, women will not give candid responses in the presence of male family members or field staff. Women enumerators, separate interview spaces, and women-only FGDs consistently produce higher quality data on sensitive topics — income control, mobility, domestic violence experience, contraceptive use — than mixed-setting data collection.
Evaluation Ethics: Standards and Responsibilities
Evaluation involves power — the evaluator has power over how findings are framed, which voices are amplified, and what gets reported. Ethical evaluation requires deliberate attention to these power dynamics — not just compliance with research ethics protocols.
Core Ethical Principles in Evaluation
  • Respect for persons: Treat participants as ends, not means. Informed consent, confidentiality, right to withdraw.
  • Beneficence: Maximise benefits, minimise harms — including the harm of conducting a poor-quality evaluation that wastes community time and produces useless findings
  • Justice: Fair selection of participants; representation of marginalised voices; equitable distribution of evaluation burdens and benefits
  • Independence: Evaluation findings must not be shaped by the interests of the commissioning organisation
  • Transparency: Methods, data, and funding sources disclosed; limitations acknowledged
The commissioner's influence problem: Evaluations are commissioned and paid for by the organisations whose programmes they assess. This creates systematic pressure — explicit or implicit — to produce positive findings. The most common manifestations: evaluators selected based on likelihood of producing positive findings; draft reports heavily edited before publication; negative findings downplayed or removed. These practices corrupt the evaluation function and should be named and resisted by evaluators.
Institutional Review Boards (IRBs) in India: For research-grade evaluations involving human subjects, IRB (or IEC — Institutional Ethics Committee) approval is required. The Indian Council of Medical Research (ICMR) sets guidelines for biomedical and health research ethics. For social science research, standards are less formalised but ICSSR guidelines apply. Larger international NGOs and academic partners will have their own IRB processes.
Evidence-Informed Practice: Using Research in Programme Design
Beyond MEL for your own programmes, evidence-informed practice means drawing on the existing sector knowledge base when designing interventions — not starting from scratch. The development research literature is vast; navigating it is a skill that separates evidence-informed programmes from those designed on assumption and institutional habit.
Evidence Portals for Development
  • 3ie Evidence Portal (3ieimpact.org): Systematic reviews and impact evaluations across development sectors. Best for finding rigorous evidence summaries on what works.
  • Campbell Collaboration (campbellcollaboration.org): Systematic reviews in social welfare, education, crime and justice, international development.
  • J-PAL Policy Insights (povertyactionlab.org): Evidence summaries from RCTs; policy recommendations; India-specific evidence.
  • ImpactMojo DevDiscourses: 500+ open-access development papers curated for South Asian practitioners. impactmojo.in
Using evidence practically: Before designing a new programme, ask: (1) What does the evidence say about effective approaches to this problem in similar contexts? (2) What have comparable programmes found about the mechanisms that do and don't work? (3) What are the documented implementation challenges? (4) What outcome levels are realistic given the evidence base? Starting with evidence changes what you design, how you measure it, and what you expect.
The limits of evidence transfer: Evidence from one context does not automatically transfer to another. RCT findings from Kenya, Bangladesh, or Peru give useful signals — but context determines whether mechanisms fire. The question is not "does evidence say this works?" but "does evidence say this works in contexts similar to mine, and what are the key contextual factors?" This is where evidence synthesis and realist evaluation thinking are most valuable.
Building MEL Capacity: Individual, Team, and Organisation
MEL capacity building is not just training staff in data collection tools. It requires developing competencies at individual, team, and organisational levels — and changing the systems and culture that determine whether those competencies are used.
Level | What to Build | How
Individual | Technical skills: survey design, data analysis, report writing, evaluation methods | Courses (ImpactMojo MEL); mentoring; peer learning; practice with feedback
Team | Shared understanding of MEL purpose; collective data interpretation; learning culture | Team-based learning events; joint data reviews; after-action reviews; cross-team learning
Organisation | MEL systems; data governance; learning processes; leadership that values evidence | MEL system audit; MEL plan development; leadership engagement; funder conversations about learning
The training-only trap: Sending a MEL officer to a training course does not build organisational MEL capacity. Individual training without supportive systems, leadership buy-in, and time for application rarely changes practice. Sustainable MEL capacity building requires: (1) leadership commitment to using evidence; (2) dedicated time for MEL activities in staff workplans; (3) systems that make using evidence the path of least resistance; (4) funding for MEL that is not treated as overhead.
MEL budgeting: Industry norm for MEL as percentage of programme budget: 5% for small, well-established programmes; 7–10% for medium complex programmes; 12–15% for programmes with high learning ambitions or complex evaluation designs. When MEL is underfunded, it produces incomplete data, overworked staff, and compliance-only outputs. Make the budget case using the cost of poor decisions from missing evidence — not just the cost of the MEL activities themselves.
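The arithmetic behind those bands is worth making explicit. A minimal sketch, applied to a hypothetical ₹4.2 crore programme, using one illustrative point inside each band:

```python
# MEL budget bands applied to a hypothetical Rs 4.2 crore programme
# (1 crore = 10 million rupees; 1 lakh = 100,000 rupees).
programme_budget = 4.2 * 10_000_000  # Rs 4.2 crore

bands = [("lean (5%)", 0.05), ("standard (8%)", 0.08), ("learning-intensive (14%)", 0.14)]
for label, pct in bands:
    mel = programme_budget * pct
    print(f"{label}: Rs {mel / 100_000:.1f} lakh")  # 21.0 / 33.6 / 58.8 lakh
```

The 8% and 14% points are arbitrary picks inside the 7–10% and 12–15% bands above.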
Sample Evaluation ToR Structure: Women's Livelihoods Programme Endline
Sections 1–4
  • 1. Background and Programme Description: Organisation; programme area (3 districts, MP); target population (women aged 18–50 from ST/SC households); programme budget (₹4.2Cr, 4 years); theory of change summary; what has been implemented
  • 2. Purpose and Audience: Endline evaluation; primary audiences: organisation's Board and FCDO programme officer; decision to be made: whether to seek Phase 2 funding and at what scale; secondary: state government partnership discussions
  • 3. Evaluation Questions (maximum 5): (1) Did women achieve sustained enterprise operation and income at programme close? (2) What factors distinguish high-performing from low-performing participants? (3) To what extent did the programme contribute to observed income changes, compared to other factors? (4) Were benefits sustained beyond programme support? (5) What implementation factors enabled or constrained outcomes?
  • 4. Evaluation Design: Mixed methods; quantitative survey with comparison group (Phase 2 districts where programme not yet implemented); qualitative component with FGDs and case studies; contribution analysis framework
Sections 5–8
  • 5. Data Requirements: Existing data available: baseline survey (n=420); quarterly monitoring reports; field records; MFI disbursement data. Primary data to collect: endline survey (n=400 treatment, n=200 comparison); 8 FGDs (4 treatment, 4 comparison); 12 KIIs; 6 case studies (a quick power check for these sample sizes is sketched after this list)
  • 6. Ethics: IEC approval from [organisation's IEC]; informed consent protocol for all primary data; women enumerators for all female respondents; data storage protocol; anonymisation before analysis
  • 7. Deliverables and Timeline: Inception report (Week 3); preliminary findings (Week 10); draft evaluation report (Week 14); stakeholder validation workshop (Week 16); final report (Week 18); executive summary in Hindi (Week 19)
  • 8. Budget and Logistics: Evaluation budget ₹18L; includes field team costs, travel, data processing; organisation provides transport support; evaluator responsible for own accommodation
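The sample sizes in Section 5 can be sanity-checked before the ToR is finalised. A minimal power-check sketch using statsmodels, with illustrative proportions (55% vs 45% of women with operational enterprises) that are assumptions, not programme figures:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Can n=400 treatment vs n=200 comparison detect a 10-percentage-point
# difference in a key outcome proportion (illustrative: 55% vs 45%)?
es = proportion_effectsize(0.55, 0.45)  # Cohen's h for the two proportions
power = NormalIndPower().power(
    effect_size=es, nobs1=400, alpha=0.05, ratio=200 / 400,
)
print(f"power = {power:.2f}")  # roughly 0.64 for these inputs
```

At these sizes the design has only about 64% power for a 10-point effect, a useful prompt to either enlarge the comparison group or accept that only larger effects will be reliably detectable.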
A Final Word on MEL and the Sector
The development sector spends enormous resources on MEL — surveys, evaluations, reporting systems, MEL officers — and produces relatively little institutional learning. The gap between MEL investment and MEL value is not primarily a technical problem. It is a cultural and structural one.
The question is not whether we can measure what matters, but whether we are willing to be honest about what the measures show — especially when they show that we are wrong.
ImpactMojo — MEL Basics 101
What Makes the Difference
  • Organisations where leadership uses evidence to make decisions — not to justify decisions already made
  • Funders who reward honest reporting on what didn't work — not just what did
  • MEL practitioners who push back on overclaimed attribution and vague indicators — even when it's professionally uncomfortable
  • Communities who are given the data, the feedback channels, and the power to assess programmes themselves
  • Practitioners who see MEL as service to the people their programmes claim to serve — not as compliance with the people who fund them
You already have the tools. The concepts in this deck — Theory of Change, results chains, indicator design, evaluation design, contribution analysis, learning loops — are not advanced technical skills. They are frameworks for clear thinking about what you are trying to do, whether it is happening, and what you are learning. That discipline, applied honestly, is what good MEL is.
ImpactMojo 101 Series
100 Slides.
Free Forever.
MEL Basics 101 is part of ImpactMojo's free open-access education platform for South Asian development practitioners. PhD-level rigour, zero cost.
www.impactmojo.in · CC BY-NC-SA 4.0 · Share freely, attribute always