| Period | Dominant Approach | Key Features |
|---|---|---|
| 1950s–70s | Project evaluation | Focus on infrastructure; input-output accounting; World Bank-led; economic rates of return |
| 1970s–80s | Logical Framework introduced | USAID develops logframe (1969); structured planning tool becomes sector standard; criticism begins immediately |
| 1980s–90s | Participatory methods | PRA (Chambers), PAR emerge; community voice in assessment; qualitative methods gain legitimacy |
| 1990s–2000s | Results-based management | New Public Management; donor pressure for outcomes not inputs; MDGs accelerate indicator proliferation |
| 2000s–10s | RCT revolution | Randomised trials as "gold standard"; Banerjee, Duflo, Kremer; J-PAL; 3ie; controversy about external validity |
| 2010s–present | Learning and complexity | Developmental evaluation; PDIA; adaptive management; systems thinking; MEL as culture not compliance |
| Principal | What They Want from MEL |
|---|---|
| International donors (FCDO, USAID, EU) | Standardised indicators; logframe compliance; outcomes data; value for money; gender disaggregation |
| Philanthropies active in India (Tata Trusts, Azim Premji Foundation, Gates Foundation) | Theory-based evaluations; learning focus; innovation evidence; scale-readiness assessment |
| CSR funders | Output counts for reporting; beneficiary stories; visibility; annual reporting cycle |
| Government (MoU partners, state govt) | Alignment with government schemes; convergence data; MIS integration; political sensitivity |
| Communities | Accountability to them; participation in assessment; feedback on what isn't working; recognition |
| Programme Area | Output (what happened) | Outcome (what changed) |
|---|---|---|
| Education | 500 children enrolled in school | 500 children reading at grade level by age 10 |
| Health | 10,000 bed nets distributed | Malaria incidence reduced 30% in target area |
| Livelihoods | 200 women received seed capital | 200 women with sustainable income above poverty line after 2 years |
| WASH | 500 toilets constructed | 500 households using toilets consistently (ODF status maintained) |
| Agriculture | 300 farmers trained on SRI | 250 farmers practising SRI with documented yield improvement |
| Assumption Type | Example | How to Test |
|---|---|---|
| Behavioural | Participants will attend training if offered | Pilot; attendance data; FGDs on barriers |
| Contextual | Government services are accessible and functional | Baseline mapping; key informant interviews |
| Causal | Knowledge change leads to behaviour change | Literature review; tracer studies post-training |
| Social norm | Husbands will support wives' participation | Formative research; KAP surveys; FGDs |
| Market | Buyers will pay premium for quality produce | Market research; pilot sales; price surveys |
| Mechanism | How It Works | Example |
|---|---|---|
| Information / KAP | Knowledge → attitude → practice: people change behaviour when they know better. Weak for entrenched behaviour. | Health education campaigns; nutrition counselling |
| Economic incentives | Behaviour changes when it is more profitable or cheaper. Effective for market-oriented change. | Conditional cash transfers; insurance products; market linkages |
| Norm change | Social norms drive behaviour. Change requires critical mass shifts in what is seen as acceptable. | Community conversation models; SASA!; Tostan |
| Power shift | Marginalised groups gain power (economic, political, social) to claim rights and resources. | SHG federations; collective bargaining; rights-based advocacy |
| Systems change | Changes in rules, policies, and institutions create enabling environment for improved outcomes. | Policy advocacy; institutional strengthening; market systems development |
| Hierarchy | Indicators | Means of Verification | Assumptions |
|---|---|---|---|
| Goal / Impact (long-term change) | How we know the goal is achieved | Data sources for goal indicators | Conditions beyond the programme for the goal to follow from the purpose |
| Purpose / Outcome (why we're doing this) | How we know the purpose is achieved | Data sources for purpose indicators | Conditions beyond the programme for the purpose to contribute to the goal |
| Outputs (what we produce) | How we know outputs are achieved | Data sources for output indicators | Conditions beyond the programme for outputs to lead to the purpose |
| Activities (what we do) | Inputs required | Budget lines | Conditions beyond the programme for activities to produce outputs |
| Framework | Used By | Key Feature |
|---|---|---|
| Logframe | EU, FCDO (formerly DFID), most bilateral donors | 4×4 matrix; vertical and horizontal logic; assumptions column |
| Results Framework (RF) | USAID | Hierarchical results pyramid; strategic objectives, intermediate results; performance management plan |
| Performance Framework | USAID/PEPFAR | Indicator-focused; progress vs target tracking; less causal theory |
| Impact Map | Social enterprise; SROI | Maps from activities to outputs to outcomes to impacts; used for Social Return on Investment |
| Outcome Harvesting | Complex, systems programmes | Identifies outcomes that have occurred; works backwards to contribution; no pre-specified indicators |
| Result Level | Statement | Indicator | Baseline | Target | Data Source | Timeline |
|---|---|---|---|---|---|---|
| Impact | Reduced rural poverty in target districts | % HHs below poverty line | 42% | 32% | Household survey | Year 5 |
| Outcome | Improved agricultural income | Avg annual HH farm income (₹) | ₹42,000 | ₹58,000 | Household survey | Year 3 |
| Output | Farmers adopt climate-resilient varieties | % farmers using improved seed | 18% | 60% | Adoption survey | Year 2 |
| Activity | Training and demo plots established | No. of demo plots operational | 0 | 50 | Field register | Year 1 |
| Element | Quality Standard | Common Failure |
|---|---|---|
| Theory of Change | Explicit, evidence-based, tested with field team and community | Missing entirely; or logframe passed off as ToC |
| Levels of results | Clear distinction between activity, output, outcome, impact with appropriate indicators at each | Activities mislabelled as outputs; outputs as outcomes |
| Indicators | 5–7 per level maximum; SMART; gender-disaggregated; based on evidence | 50+ indicators; unmeasurable; activity counts at outcome level |
| Baseline | Collected before programme start; same sampling and tools as endline; disaggregated | Not collected; collected late; not disaggregated |
| Targets | Evidence-based; realistic; midterm and endline; includes decline scenarios | Aspirational; no midterm; no process for revision |
| Means of verification | Specific, accessible, reliable data source named for each indicator | "Field reports" or "project records" — not specific enough |
| Assumptions | Named; testable; with monitoring plan for critical assumptions | Blank; or generic ("stable political context") |
| Type | What It Measures | When to Use | Limitation |
|---|---|---|---|
| Quantitative | Numeric counts, percentages, averages, rates | Outputs, service coverage, income, assets | Doesn't capture quality, process, or meaning |
| Qualitative | Experiences, perceptions, processes, meanings | Behaviour change, empowerment, satisfaction, norms | Harder to aggregate; requires skilled collectors; more expensive |
| Proxy | An observable variable used to represent a harder-to-measure concept | When direct measurement is not feasible — e.g. "nights slept under net" as proxy for malaria prevention behaviour | Proxy may not actually reflect the underlying concept accurately |
| Process | Quality and fidelity of implementation | Understanding why outcomes did or didn't occur; intervention fidelity | Easy to mistake for outputs; needs clear quality standards |
| Sentinel | A few key indicators tracked continuously for early warning | Rapid feedback loops; detecting programme drift early | May miss slow-moving changes not captured by sentinel points |
| Standard Indicator | Gender-Responsive Version |
|---|---|
| % of HHs with savings | % of women with savings in their own name with independent access |
| % of farmers adopting improved seed | % of female-headed HHs AND % of women in male-headed HHs with own decision-making on seed choice |
| % of children completing school | % by gender; AND reasons for dropout disaggregated by gender |
| HH income increased | Women's control over HH income; women's independent income; intra-HH resource allocation |
| Community participation in planning | Women's meaningful participation (not just attendance); women's proposals adopted in community plans |
| Domain | Standard Indicator | Source |
|---|---|---|
| Health | Institutional delivery rate; ANC visits ≥4; stunting prevalence (HAZ <−2) | NFHS; HMIS |
| Education | Gross/Net Enrolment Ratio; foundational literacy rate; dropout rate | UDISE+; ASER |
| Poverty | Per capita consumption expenditure; Multidimensional Poverty Index (MPI, OPHI methodology) | NSSO; NFHS |
| Agriculture | Crop yield (kg/acre); cropping intensity; area under irrigation | State Agri Dept; ICRISAT |
| WASH | HHs with piped water; ODF status; HHs with handwashing facility with soap | NFHS; SBM MIS |
| Sampling Method | When to Use | Key Requirement |
|---|---|---|
| Simple Random Sampling | When complete list of population is available | Complete sampling frame (beneficiary list) |
| Systematic Random | Large lists; every nth element | Ordered list with no periodic pattern that coincides with the sampling interval |
| Stratified Random | When subgroups (women, Adivasi, different districts) must be represented | Known stratum sizes; strata must be non-overlapping |
| Cluster Sampling | Geographically dispersed populations; efficiency | Clusters (villages) sampled, then households within; increases variance |
| Purposive | Qualitative studies; selecting information-rich cases | Clear criteria for selection; acknowledge non-representativeness |
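To make the stratified approach in the table concrete, here is a minimal sketch in Python. The stratum names and counts are hypothetical, not from any real programme: the sample is allocated across strata in proportion to their population, then drawn randomly within each.

```python
import random

# Hypothetical sampling frame: beneficiary counts per stratum.
# Names and numbers are illustrative, not from a real programme.
strata = {"women_shg": 1200, "adivasi_hh": 600, "other_hh": 1800}
total = sum(strata.values())
sample_size = 200

random.seed(42)  # fixed seed so the draw is reproducible and auditable

# Proportional allocation: each stratum gets its population share of the sample.
allocation = {name: round(sample_size * n / total) for name, n in strata.items()}

# Simple random sample within each stratum, drawn from the stratum's ID list.
sample = {
    name: random.sample(range(1, n + 1), allocation[name])
    for name, n in strata.items()
}

for name in strata:
    print(name, allocation[name])
```

Note that rounding can occasionally make the allocations sum to slightly more or less than the intended sample size; in practice the largest stratum's allocation is adjusted to absorb the difference.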
| Method | What It Produces | Best For |
|---|---|---|
| Participatory mapping | Community-drawn maps of resources, risks, services | Baseline mapping; infrastructure access; hazard exposure |
| Seasonal calendar | Annual pattern of workload, income, food security, illness | Understanding livelihood seasonality; programme timing |
| Trend lines | Community assessment of change over time (10–20 years) | Historical context; community-perceived change on key dimensions |
| Venn diagram | Community's perception of local institutions and relationships | Power mapping; stakeholder analysis; governance assessment |
| Most Significant Change (MSC) | Stories of the most significant changes experienced | Complex programmes; capturing unexpected outcomes; human face of impact |
| Tool | Type | Strengths | Limitations |
|---|---|---|---|
| KoBoToolbox | Mobile survey (free) | Offline capable; skip logic; free; widely used by NGOs | Limited dashboarding; storage limits on free tier |
| ODK Collect | Mobile survey (open source) | Highly customisable; open source; works offline | Requires server setup; technical capacity needed |
| CommCare | Case management + surveys | Longitudinal tracking; health worker workflow | Expensive; implementation complexity |
| DHIS2 | Health information system | Government standard in many states; aggregate reporting | Not designed for programme-level MEL |
| Google Forms / Sheets | Survey + database | Free; familiar; quick deployment | No offline; limited validation; data security concerns |
| Type | Primary Question | When |
|---|---|---|
| Formative | Is the programme design right? Are we reaching the right people? What needs to be adjusted? | Early implementation; piloting |
| Process / Implementation | Is the programme being implemented as designed? With what quality? | Mid-implementation; fidelity assessment |
| Outcome | Did the programme achieve its outcomes? For whom? | Near or after completion |
| Impact | Did the programme cause the outcomes? What would have happened without it? | After completion; requires counterfactual |
| Economic | Was the programme cost-effective? What is its cost per unit of outcome? | With cost data; comparison available |
| Developmental | Are we learning enough to adapt? Is the ToC still valid? | Complex, long-term programmes |
| Design | Mechanism | Requirement |
|---|---|---|
| Difference-in-Differences (DiD) | Compare change in treatment vs control group over time — the "double difference" | Parallel trends assumption; baseline data for both groups |
| Regression Discontinuity (RDD) | Compare just above and below an eligibility threshold — treat threshold as quasi-random | Sharp eligibility cutoff; running variable measurable |
| Instrumental Variables (IV) | Find a variable that predicts programme participation but doesn't directly affect outcome | Valid instrument (rare); strong first stage |
| Propensity Score Matching (PSM) | Match treated units with untreated units with similar observable characteristics | All confounders observed and measured (selection on observables) |
| Synthetic Control | Construct a weighted "synthetic" comparison from multiple control units | Pre-treatment data for treatment and potential controls; few treated units |
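The "double difference" in the DiD row reduces to simple arithmetic on four group means. A minimal sketch, with all figures hypothetical:

```python
# Mean outcome (e.g. monthly household income in ₹) by group and period.
# All values are hypothetical, for illustration only.
treat_before, treat_after = 4000.0, 5500.0
control_before, control_after = 4100.0, 4700.0

change_treat = treat_after - treat_before        # 1500: programme effect + background trend
change_control = control_after - control_before  # 600: background trend alone

# The "double difference": treatment-group change net of the control-group
# trend. Valid only if the parallel-trends assumption holds.
did_estimate = change_treat - change_control
print(did_estimate)
```

In real evaluations the same logic is run as a regression with an interaction term, which also yields standard errors; the arithmetic above is only the point estimate.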
| If your primary question is... | Consider... |
|---|---|
| What happened and did we reach our targets? | Pre-post design; quantitative survey + qualitative |
| Did we cause this change? | RCT (if feasible) or quasi-experimental (DiD, RDD) |
| Why did/didn't outcomes occur? | Process evaluation; qualitative methods; contribution analysis |
| Is our ToC still valid? | Theory-based evaluation; outcome harvesting |
| What's the value for money? | Cost-effectiveness or cost-benefit analysis |
| How can we improve this year? | Formative evaluation; rapid qualitative assessment |
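The value-for-money row usually means a cost-effectiveness ratio: total cost divided by outcomes attributable to the programme. A sketch of the arithmetic, with all figures hypothetical:

```python
# Hypothetical figures for a cost-effectiveness calculation.
total_cost = 18_000_000      # programme cost in ₹ (₹1.8 Cr)
outcome_units = 780          # e.g. women with an enterprise operational at 18 months
counterfactual_units = 120   # estimated to have succeeded anyway (from a comparison group)

# Net outcomes attributable to the programme.
net_units = outcome_units - counterfactual_units

# Cost per additional unit of outcome.
cost_per_unit = total_cost / net_units
print(round(cost_per_unit))
```

Dividing cost by gross rather than net outcomes is a common error: it overstates value for money by crediting the programme with changes that would have happened anyway.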
| System | Change Type | Indicator Approach |
|---|---|---|
| Policy | New legislation enacted; policy implemented with budget | Policy mapping; implementation tracker; budget analysis |
| Market | New market actors, rules, or norms emerge | DCED standards; market system mapping at baseline and endline |
| Norms | Shift in what is considered acceptable behaviour | Social norms measurement tools (EMERGE); longitudinal qualitative |
| Power | Marginalised groups have greater voice and agency | Power analysis frameworks; Most Significant Change; network analysis |
| Institutional | Government systems deliver for excluded groups | Service delivery scorecards; beneficiary feedback; PEFA assessments |
| Type | Development Examples |
|---|---|
| Positive unintended | Women's SHG programme reduces men's alcohol consumption (peer pressure from wives); school feeding programme improves sibling enrolment (family strategy) |
| Negative unintended | Microfinance increases women's access to credit but also increases domestic violence when husbands lose control of finance; WASH programme builds toilets but women still use open spaces (safety at night) |
| Displacement | Programme provides jobs in target area; unemployed from neighbouring areas migrate in, displacing programme beneficiaries |
| Dependency | Emergency food distribution over multiple years reduces households' agricultural investment — because they anticipate free food |
| Elite capture | Community-based targeting of benefits is captured by local elites who control access; most marginalised excluded |
| Chart Type | Best For | Common Misuse |
|---|---|---|
| Bar chart | Comparing quantities across categories | Too many bars; non-zero axis baseline inflating differences |
| Line chart | Trends over time; continuous data | Used for unconnected categories; cherry-picked time period |
| Pie chart | Parts of a whole — with ≤5 segments | Too many segments; 3D effects that distort area |
| Scatter plot | Relationship between two variables | Without a regression line or correlation statistic when one is implied |
| Map / choropleth | Geographic variation in indicators | Using raw counts instead of rates; misleading colour scales |
| Audience | What They Care About | Format & Length | Language & Framing |
|---|---|---|---|
| Donor / funder | Compliance with indicators; value for money; lessons for future funding | Formal report, 30–60 pages; PPT summary; structured findings | Technical; aligned to donor frameworks (OECD-DAC); quantitative where possible |
| Programme leadership | What to do differently; strategic decisions; risks to address | Management brief, 5–10 pages; presentation; actionable recommendations | Direct; decision-oriented; findings ranked by significance |
| Field teams | What is working and what isn't in their area; practical changes | Visual one-pager; verbal debrief; local-language summary | Simple; concrete examples from their context; focus on what they control |
| Communities | What happened with their data; whether they will be heard; practical changes to the programme | Community meeting; visual display; verbal presentation in local language | Non-technical; participatory; feedback invited; specific to their context |
| Government | Alignment with government programmes; evidence for policy; scalability | Policy brief, 2–4 pages; meeting presentation | Aligned to government priorities; evidence quality emphasised; scale potential |
| Media / public | Human interest; sector significance; accountability | Press release; infographic; story-led summary | Plain language; specific numbers; compelling case studies |
| Resource | What It Provides | Access |
|---|---|---|
| BetterEvaluation.org | Comprehensive, free database of evaluation methods, tools, and examples | betterevaluation.org — free |
| 3ie Evidence Portal | Systematic reviews and impact evaluations; sector evidence base | 3ieimpact.org — free |
| KoBoToolbox | Free mobile data collection; up to 10,000 submissions/month | kobotoolbox.org — free for NGOs |
| NFHS district data | Baseline data for health and women's empowerment indicators at district level | rchiips.org — free download |
| Harvard PDIA toolkit | PDIA approach materials; guides for problem-driven adaptive work | buildingstatecapability.hks.harvard.edu — free |
| Term | Definition |
|---|---|
| Theory of Change | Causal hypothesis about how programme activities lead to intended impact, with explicit assumptions |
| Logframe | 4×4 planning matrix linking hierarchy of results to indicators, means of verification, and assumptions |
| Counterfactual | What would have happened in the absence of the programme — the fundamental question in impact evaluation |
| Attribution | Degree to which outcomes can be ascribed to the programme rather than to other factors |
| Contribution | The programme's role in producing outcomes alongside other factors — honest alternative to attribution claims |
| Baseline | Pre-programme measurement of outcome indicators, collected before the intervention begins |
| SMART indicator | Specific, Measurable, Achievable, Relevant, Time-bound |
| Goodhart's Law | When a measure becomes a target, it ceases to be a good measure |
| RCT | Randomised Controlled Trial — experimental design widely treated as the "gold standard" for causal attribution |
| DiD | Difference-in-Differences — quasi-experimental design comparing change in treatment vs control groups over time |
| OECD-DAC criteria | Six evaluation criteria: relevance, coherence, effectiveness, efficiency, impact, sustainability |
| Outcome harvesting | Evaluation method that identifies outcomes that occurred and works backwards to assess contribution |
| MSC | Most Significant Change — participatory method collecting stories of significant change from participants |
| Realist evaluation | Context + Mechanism + Outcome — asks what works for whom in what circumstances |
| PDIA | Problem-Driven Iterative Adaptation — iterative, learning-based approach to complex development challenges |
| CEA | Cost-Effectiveness Analysis — cost per unit of outcome achieved |
| Data quality | Five dimensions: validity, reliability, timeliness, precision, integrity (USAID DQA framework) |
| Formative evaluation | Evaluation during implementation to improve the programme — contrasted with summative (did it work?) |
| Adaptive management | Structured practice of adjusting programme design based on evidence from monitoring and evaluation |
| MEAL | Monitoring, Evaluation, Accountability, and Learning — adds explicit accountability to communities |
| MEL Stage | What You Need | Free Tools / Resources | When to Use It |
|---|---|---|---|
| Design | ToC development; results framework; indicator selection | IRIS+ indicator catalogue (GIIN); BetterEvaluation; ODI's RAPID framework | Before programme start — non-negotiable |
| Baseline | Survey design; sampling; data collection; analysis | KoBoToolbox; ODK; NFHS district data; Census; PLFS | Before programme activities begin with target population |
| Monitoring | Regular output tracking; MIS; field data collection | KoBoToolbox; Google Sheets; DHIS2; CommCare | Ongoing throughout programme — monthly at minimum |
| Qualitative assessment | FGD guides; KII guides; MSC collection; observation protocols | BetterEvaluation method database; ODI RAPID qualitative tools | Quarterly or as needed for deeper understanding |
| Mid-term evaluation | Mixed methods; quantitative progress review; qualitative deep dive | OECD-DAC criteria; process tracing templates; contribution analysis guide | At programme midpoint — typically Year 2 of a 4-year programme |
| Endline evaluation | Full outcome survey; comparison analysis; economic analysis | Impact Evaluation in Practice (World Bank — free); GiveWell CEA framework | At or near programme close |
| Learning | After-action review templates; learning notes; KM platform | USAID CLA toolkit; Harvard PDIA materials; ALNAP learning tools | Ongoing; structured review events quarterly and annually |
| Source | Coverage | Access |
|---|---|---|
| data.gov.in | National open data portal — thousands of government datasets across sectors | data.gov.in — free |
| NSO / MoSPI | National Statistics Office — NSSO surveys, census, PLFS, CES | mospi.gov.in — free microdata |
| IIPS / NFHS | National Family Health Survey — health, nutrition, women's empowerment by district | rchiips.org — free download |
| Devdatalab | Economic research data for India — Asher, Novosad and colleagues | devdatalab.org — free |
| SHRUG | Socioeconomic High-resolution Rural-Urban Geographic dataset for India — village-level data 1990–2011 | devdatalab.org/shrug — free |
| Role Level | Typical Responsibilities | Skills Needed |
|---|---|---|
| Field MEL Officer | Data collection; tool administration; community interaction; data entry and basic analysis | Survey administration; KoBoToolbox/ODK; local language; data entry |
| MEL Coordinator / Analyst | Indicator tracking; report drafting; results framework management; basic statistical analysis | Excel/SPSS; report writing; logframe management; qualitative methods |
| MEL Manager | MEL system design; evaluation commissioning; donor reporting; team management | Evaluation design; quantitative and qualitative methods; ToC facilitation; donor frameworks |
| Research / Evaluation Specialist | Impact evaluation design; advanced statistical analysis; methods development; sector knowledge generation | R/Stata/Python; experimental and quasi-experimental methods; academic writing; research ethics |
| # | Diagnostic Question | Red Flag if... | Good Sign if... |
|---|---|---|---|
| 1 | Does the programme have an explicit Theory of Change? | It's a diagram with no explanatory text, or doesn't exist | It's written, was developed with field staff, and is used in reviews |
| 2 | How many indicators are in the results framework? | More than 30 indicators total | Under 20, with clear rationale for each |
| 3 | Was baseline data collected before the programme? | No baseline; or collected 6+ months after start | Collected before first beneficiary contact; disaggregated |
| 4 | Are outcomes measured, or only outputs? | All indicators are training counts, meetings held, materials distributed | At least 3 outcome-level indicators with measurement plan |
| 5 | Who uses the monitoring data? | "We send it to the donor in the quarterly report" | "We review it monthly in team meetings and it triggers decisions" |
| 6 | Has an external evaluation been conducted? | Never; or the "evaluation" was a self-assessment with no independence | External evaluation with ToR; findings shared with management; action plan exists |
| 7 | Are negative findings shared with stakeholders? | Reports show only achievements against targets | Reports include what didn't work and why |
| 8 | Is data disaggregated by gender and other equity dimensions? | All data is aggregate; no breakdown by gender, caste, geography | Standard disaggregation by gender; additional by caste/disability where relevant |
| 9 | Is there a functional community feedback mechanism? | A suggestion box that was installed but nobody monitors | Multiple channels; response protocols; data reviewed in programme meetings |
| 10 | When did the programme last change because of evidence? | "I can't think of a time" or "the donor wouldn't allow it" | Specific example in the last 12 months, with documentation |
| Level | Statement | Indicators | Means of Verification | Assumptions |
|---|---|---|---|---|
| Goal / Impact | Improved economic security and wellbeing for women in target districts of MP | % women HHs below poverty line (NSSO definition); Women's MPI score | Household survey (endline Y5); MPI calculation from survey | Broader economic environment remains stable; no major shocks |
| Purpose / Outcome | Women have sustained income from own enterprises, with independent control over earnings | % women with own enterprise operational at 18 months; % women with independent bank account; Avg monthly net income (₹) | Enterprise tracking survey (Y2, Y4); bank account verification; income survey | Markets remain accessible; women retain control over earnings; husbands supportive |
| Output 1 | Women trained in business skills and linked to credit | No. women completing 6-week training; % completing with 70%+ attendance; % accessing credit within 3 months | Training attendance register; assessment records; MFI disbursement records | Women can attend training (mobility; childcare); MFI partnership functional |
| Output 2 | Market linkages established for trained women | No. buyer linkages established; % women with at least 1 formal buyer relationship | Sales records; buyer contracts; field observation | Buyers maintain commitment; transport infrastructure accessible |
| Activities | Training delivery; business plan support; market linkage facilitation; ongoing mentoring | Inputs: ₹1.8Cr budget; 6 field staff; 3 district offices | Financial reports; HR records | Adequate staff recruited and retained; budget released on schedule |
| Indicator | Level | Definition | Baseline | Y2 Target | Y4 Target | Data Source | Frequency | Responsible |
|---|---|---|---|---|---|---|---|---|
| Women HHs below poverty line (%) | Impact | HHs where women are primary earner, below NSSO poverty line | 48% | — | 35% | HH survey (primary) | Baseline; Y4 | MEL Manager |
| Women with own enterprise operational at 18mo (%) | Outcome | Enterprise operating with positive revenue for 3+ consecutive months at 18-month mark | 12% | 55% | 65% | Enterprise tracker (primary) | Quarterly | MEL Coordinator |
| Women with independent bank account (%) | Outcome | Account in own name with independent access, verified by passbook review | 31% | 70% | 80% | Passbook verification | Y1; Y2; Y4 | Field MEL Officer |
| No. women completing training | Output | Completed 6-week course with ≥70% attendance, as per attendance register | 0 | 600 | 1,200 | Training register | Monthly | Field Officer |
| % women with credit access within 3mo | Output | Loan disbursed by partner MFI within 3 months of training completion | 0% | 60% | 70% | MFI records | Quarterly | MEL Coordinator |
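A tracking sheet like the one above lends itself to a small script that flags off-track indicators. This sketch hard-codes the sheet's baselines and Y2 targets; the "current" mid-Y2 readings are hypothetical, added only to illustrate the calculation:

```python
# (indicator, baseline, Y2 target, current value) — baselines and targets
# echo the sheet above; the "current" mid-Y2 readings are hypothetical.
indicators = [
    ("enterprise_operational_pct", 12.0, 55.0, 38.0),
    ("independent_bank_account_pct", 31.0, 70.0, 66.0),
    ("credit_within_3mo_pct", 0.0, 60.0, 25.0),
]

def progress(baseline, target, current):
    """Share of the baseline-to-target distance covered so far."""
    return (current - baseline) / (target - baseline)

report = {name: round(progress(b, t, c), 2) for name, b, t, c in indicators}

# Flag anything less than halfway to its Y2 target for management review.
flags = [name for name, p in report.items() if p < 0.5]
print(report)
print(flags)
```

Measuring progress against the baseline-to-target distance, rather than against the raw target, avoids flattering indicators that simply started high.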
| Tool | What It Measures | When to Use |
|---|---|---|
| WEAI (Women's Empowerment in Agriculture Index) | 5 domains: decisions, resources, income, leadership, time | Agricultural programmes; baseline and endline |
| GiHA (Gender in Humanitarian Action) | Gender-specific needs and capacities in emergency contexts | Humanitarian response; rapid assessment |
| Oxfam Gender at Work framework | Formal/informal rules and individual/social change across two axes | Women's rights and empowerment programmes |
| Social Norms Assessment (SoNA) | Prevailing social norms affecting women's behaviour; peer expectations | Behaviour change programmes; SBCC evaluation |
| EMERGE standards | Evidence and Methods for Gender Responsive Evaluation — comprehensive framework | Any evaluation with significant gender dimensions |
| Level | What to Build | How |
|---|---|---|
| Individual | Technical skills: survey design, data analysis, report writing, evaluation methods | Courses (ImpactMojo MEL); mentoring; peer learning; practice with feedback |
| Team | Shared understanding of MEL purpose; collective data interpretation; learning culture | Team-based learning events; joint data reviews; after-action reviews; cross-team learning |
| Organisation | MEL systems; data governance; learning processes; leadership that values evidence | MEL system audit; MEL plan development; leadership engagement; funder conversations about learning |