ImpactMojo
AI for Impact: Data Monitoring & Evaluation in Development
When AI Helps, When It Doesn't, and How to Tell the Difference
A rigorous, evidence-based exploration of AI applications in development M&E—from computer vision and NLP to algorithmic targeting and real-time monitoring. With deep focus on South Asia and Africa, where context determines everything.
Why Study AI in Development M&E?
AI is reshaping how development organizations collect data, target beneficiaries, and monitor programs. But the gap between vendor promises and ground reality is vast. Organizations waste millions on tools that don't work in low-connectivity environments, or worse, deploy algorithms that systematically exclude the most vulnerable.
Unlike the typical AI hype cycle, this course focuses on what actually works in low-resource contexts, when simpler tools outperform AI, and how to assess whether your organization is ready for AI adoption—or whether the investment would be wasted.
Evidence-Based Assessment
Move beyond vendor demos to rigorous evaluation. Learn to assess tools using development research standards, not Silicon Valley metrics.
Context-Specific Application
What works in Accra may fail in the Upper East Region. Deep focus on infrastructure constraints, data quality challenges, and organizational capacity.
Ethical Frameworks
Algorithmic bias, data sovereignty, consent in low-literacy contexts. The ethical dimensions that vendor pitches never mention.
"The question is not whether AI can help development—it clearly can, in specific contexts. The question is whether your organization is ready to use it responsibly, and whether simpler solutions might work better." — Adapted from J-PAL AI & Development Initiative
The AI-M&E Landscape
What does "AI" actually mean in development practice? This module demystifies the taxonomy of tools—from simple automation to machine learning to large language models—and maps the current state of AI adoption in the sector.
Taxonomy of AI Technologies in Development
The term "AI" is used loosely in development contexts, often conflating fundamentally different technologies. Clear taxonomy is essential for appropriate tool selection.
| Technology | What It Does | M&E Applications | Data Requirements |
|---|---|---|---|
| Rule-Based Automation | Follows explicit if-then rules | Data validation, skip logic, alerts | Low—rules defined manually |
| Classical ML | Learns patterns from labeled data | Targeting, classification, prediction | Medium—thousands of labeled examples |
| Deep Learning | Neural networks for complex patterns | Image recognition, NLP, anomaly detection | High—millions of examples, GPUs |
| Computer Vision | Extracts information from images | Satellite imagery, infrastructure monitoring | High—labeled images, geospatial data |
| NLP | Processes human language | Qualitative coding, sentiment, translation | Medium-High—domain-specific corpora |
| LLMs (GPT, Claude) | General-purpose text generation | Report writing, data synthesis, chatbots | Low for use; high for fine-tuning |
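To ground the first row of this taxonomy: rule-based automation needs no training data, only explicit checks that a program officer can read and audit. A minimal Python sketch (the field names and thresholds are illustrative, not from any real survey):

```python
# Rule-based validation: explicit if-then checks, no training data required.
# Field names and thresholds are illustrative, not from a real survey.

def validate_household_record(record: dict) -> list[str]:
    """Return human-readable flags for one survey record."""
    flags = []
    size = record.get("household_size", -1)
    if not 0 < size <= 30:
        flags.append("household_size outside plausible range (1-30)")
    if record.get("children_in_school", 0) > size:
        flags.append("more children in school than household members")
    if record.get("monthly_income", 0) < 0:
        flags.append("negative income reported")
    return flags

record = {"household_size": 4, "children_in_school": 6, "monthly_income": 1200}
for flag in validate_household_record(record):
    print("FLAG:", flag)
```

Because every rule is written out, a non-programmer can audit exactly why a record was flagged, which is the transparency the "Low" data requirement in the table buys you.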
ML excels at prediction—identifying who is likely to be poor, which programs are at risk of failure. But prediction ≠ causation. Knowing that households with tin roofs are poor doesn't tell you whether providing tin roofs reduces poverty. Development requires both—ML for targeting and monitoring, RCTs for causal inference.
Case Study: Ghana's LEAP Program
Ghana's Livelihood Empowerment Against Poverty (LEAP) program illustrates both the promise and challenges of AI in social protection:
LEAP uses proxy means testing (PMT) with ML-enhanced models to identify eligible households. Initial models trained on 2012 census data showed 70%+ accuracy in identifying the poor. But when verified against the 2017 census, accuracy dropped to 55%—worse than random selection in some districts. Why? Economic conditions changed, and the model couldn't adapt.
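The LEAP experience is a textbook case of concept drift, and it points to a simple operational safeguard: periodically re-score the deployed model against the newest labeled data available. A hedged sketch of that check, assuming hypothetical 2012 and 2017 survey files with a binary is_poor label:

```python
# Concept-drift check: score a model trained on old data against newer data.
# File names, column names, and features are hypothetical stand-ins.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

FEATURES = ["household_size", "owns_livestock", "has_tin_roof"]  # numeric proxies

old = pd.read_csv("survey_2012.csv")  # training-era labeled data
new = pd.read_csv("survey_2017.csv")  # most recent labeled data

model = LogisticRegression(max_iter=1000)
model.fit(old[FEATURES], old["is_poor"])

acc_old = accuracy_score(old["is_poor"], model.predict(old[FEATURES]))
acc_new = accuracy_score(new["is_poor"], model.predict(new[FEATURES]))
print(f"Accuracy on 2012 data: {acc_old:.2f}")
print(f"Accuracy on 2017 data: {acc_new:.2f}")

# A large gap signals drift: retrain before using the model for targeting.
if acc_old - acc_new > 0.10:  # the 10-point threshold is a policy choice
    print("WARNING: likely concept drift; retrain before use.")
```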
Check Your Understanding
Multiple Choice: The key difference between classical ML and deep learning is:
Reflection: Why did Ghana's LEAP targeting model accuracy decline from 70% to 55% over 5 years?
Varna
Feeling overwhelmed by the AI landscape? I've evaluated dozens of AI tools for development organizations and can help you cut through the hype. Let's discuss which approaches actually make sense for your context.
Needs Assessment for AI Integration
Before adopting any AI tool, organizations must assess readiness across multiple dimensions: data infrastructure, technical capacity, organizational culture, and—critically—whether AI is actually the right solution.
The AI Readiness Framework
Most AI failures in development aren't technical—they're organizational. A tool that works brilliantly in a pilot often fails at scale because the underlying conditions weren't in place.
Data Infrastructure
Do you have clean, consistent, accessible data? Most organizations underestimate data cleaning costs (typically 60-80% of AI project time).
Technical Capacity
Who will maintain the system after the consultant leaves? AI requires ongoing maintenance, model updates, and troubleshooting.
Organizational Culture
Is leadership committed to data-driven decisions? Will staff trust algorithmic recommendations?
Infrastructure Context
Connectivity, power, devices—do field conditions support the tool? What's the fallback when systems fail?
The "Is AI Necessary?" Checklist
1. Can a simpler solution work? (Excel formulas, conditional logic, rule-based systems)
2. Is the problem well-defined? (Vague problems produce vague AI)
3. Do you have enough data? (Hundreds of examples minimum; thousands preferred)
4. Is the data representative? (Models trained on urban data fail in rural contexts)
5. Can you afford to be wrong? (AI errors have consequences—what's the cost?)
6. Can you explain decisions? (Many contexts require human-interpretable reasoning)
Case Study: India's PM-KISAN Targeting
India's PM-KISAN program provides ₹6,000/year to 110 million farming families. The program explored AI-based targeting to identify eligible farmers from land records, satellite imagery, and existing databases.
The verdict: AI added limited value. Land records were incomplete (only 60% of farmers have clear titles). Satellite imagery couldn't distinguish owner-cultivators from tenant farmers. In the end, self-declaration with Aadhaar verification proved simpler and nearly as accurate—at a fraction of the cost.
- Björkegren, D. & Grissen, D. (2020). "Behavior Revealed in Mobile Phone Usage Predicts Credit Repayment." World Bank Economic Review.
- Muralidharan, K. et al. (2016). "Building State Capacity: Evidence from Biometric Smartcards in India." American Economic Review.
- World Bank. (2021). "AI in Development: A Framework for Readiness Assessment."
Check Your Understanding
Multiple Choice: According to the AI readiness framework, which factor is MOST critical before implementing AI in M&E systems?
Multiple Choice: The PM-KISAN case study demonstrates that AI targeting failed primarily because:
Reflection: Your organization is considering AI for beneficiary targeting. You have 500 records from a 2019 survey and limited internet connectivity in field offices. How would you assess AI readiness, and what would you recommend?
Vandana
Readiness assessments can feel abstract. Our workshops walk you through a practical framework for evaluating whether AI makes sense for your organization—and what to do if the answer is "not yet."
AI for Data Collection
From voice-to-text transcription to intelligent chatbots, AI is transforming how development organizations collect data in the field. But implementation challenges—language diversity, connectivity, trust—determine success or failure.
AI-Enhanced Data Collection Tools
| Tool Type | How It Works | Examples | Best For |
|---|---|---|---|
| Voice-to-Text | Converts spoken responses to text | Google Speech API, Whisper | Qualitative data in low-literacy contexts |
| IVR Systems | Interactive voice response surveys | Viamo, Premise, Echo Mobile | High-frequency, simple surveys at scale |
| Chatbots | Conversational data collection | UNICEF U-Report, WhatsApp bots | Youth engagement, rapid polling |
| Translation Tools | Real-time language translation | Google Translate, Meta NLLB | Multi-language survey deployment |
| Image Recognition | Extracts data from photos | Computer vision for receipts, crops | Verification, agricultural monitoring |
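As a concrete illustration of the voice-to-text row above, the open-source Whisper model (pip install openai-whisper, plus ffmpeg) can transcribe recordings offline once the model is downloaded, which matters in low-connectivity field offices. The file name and language code below are placeholders, and accuracy on under-resourced languages should always be spot-checked against human transcription:

```python
# Offline transcription with open-source Whisper (pip install openai-whisper;
# requires ffmpeg). File name and language code are placeholders; always
# validate output against human transcription for under-resourced languages.
import whisper

model = whisper.load_model("small")  # larger models are more accurate but slower
result = model.transcribe("interview_hindi.wav", language="hi")

print(result["text"])           # full transcript
for seg in result["segments"]:  # timestamps link quotes back to the audio
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```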
Case Study: UNICEF's U-Report in Uganda
U-Report is a free SMS and social media platform that allows young people to speak out on issues affecting their communities. With 13 million users across 68 countries, it demonstrates AI-enhanced data collection at scale.
Language Challenges in South Asia and Africa
India: 22 Official Languages
Hindi speech recognition is decent; Bhojpuri, Maithili, Chhattisgarhi have almost no training data. Dialects within states vary significantly.
Ghana: 80+ Languages
Akan and Ewe have some support; Dagbani, Gonja, Frafra are severely under-resourced. Code-switching is near-universal.
Bangladesh: Dialect Variation
Standard Bengali works; regional dialects (Sylheti, Chittagonian) differ significantly and lack training data.
Check Your Understanding
Multiple Choice: Automated speech-to-text transcription in South Asian languages faces which primary limitation?
Multiple Choice: U-Report's success in engaging youth through chatbots demonstrates that AI in data collection works best when:
Reflection: You're designing a household survey in rural Bihar where 40% of respondents speak Bhojpuri (no commercial ASR available). What hybrid approach would you propose for data collection?
Varna
Designing AI-augmented data collection systems requires balancing technological possibilities with ground realities. I've helped organizations navigate these trade-offs—let's discuss what makes sense for your context.
Computer Vision & Geospatial Analysis
Satellite imagery combined with machine learning has revolutionized poverty mapping, agricultural monitoring, and infrastructure tracking. But the gap between research papers and operational use remains significant.
Applications in Development
Poverty Mapping
Jean et al. (2016) showed satellite imagery can predict poverty at the village level with R² > 0.7. The World Bank uses this for targeting in data-sparse regions.
Agricultural Monitoring
Crop yield estimation, drought early warning, land use change detection. NASA's FEWS NET and USAID use these for food security alerts.
Infrastructure Tracking
Road quality assessment, building detection, electrification mapping. Useful for monitoring construction programs at scale.
Humanitarian Response
Damage assessment after disasters, refugee camp monitoring, population displacement tracking.
Landmark Study: Jean et al. (2016)
Method: Train a CNN on nighttime light imagery (a proxy for economic activity), then use transfer learning to extract features from daytime satellite images.
Results: Explained up to 75% of variation in consumption expenditure in Nigeria, Tanzania, Uganda, Malawi, and Rwanda—comparable to expensive household surveys.
Implication: Poverty mapping in areas without recent survey data becomes feasible. But models require local calibration and don't capture inequality within villages.
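A heavily simplified sketch of the transfer-learning recipe described above: use a pretrained CNN as a fixed feature extractor over satellite tiles, then fit a simple regression from those features to survey-measured consumption. This illustrates the general idea only, not Jean et al.'s actual pipeline (they first fine-tuned the CNN on a nighttime-lights task); all file and column names are hypothetical:

```python
# Transfer-learning sketch: pretrained CNN features -> ridge regression.
# Illustrative only; paths and columns are hypothetical.
import numpy as np
import pandas as pd
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import Ridge

# Pretrained ResNet with its classification head removed -> 512-d features
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> np.ndarray:
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0).numpy()

# tiles.csv: one satellite tile per surveyed village (hypothetical)
# columns: tile_path, log_consumption
tiles = pd.read_csv("tiles.csv")
X = np.stack([extract_features(p) for p in tiles["tile_path"]])
y = tiles["log_consumption"].to_numpy()

reg = Ridge(alpha=1.0).fit(X, y)
print("In-sample R^2:", reg.score(X, y))  # always validate out-of-sample
```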
- Jean, N. et al. (2016). "Combining satellite imagery and machine learning to predict poverty." Science 353(6301).
- Burke, M. et al. (2021). "Using satellite imagery to understand and promote sustainable development." Science.
- Chi, G. et al. (2022). "Micro-estimates of wealth for all low- and middle-income countries." PNAS.
Check Your Understanding
Multiple Choice: Jean et al.'s groundbreaking 2016 paper demonstrated that satellite imagery can predict poverty by:
Multiple Choice: A key limitation of satellite-based poverty prediction for targeting social programs is:
Reflection: Your drought early warning system uses NDVI (vegetation index) from satellites. A district shows declining NDVI but ground reports indicate normal conditions. What might explain this discrepancy, and how would you validate?
Varna
Geospatial analysis is one of the most promising AI applications for development—and one of the most misunderstood. I can help you assess whether satellite imagery makes sense for your monitoring needs.
NLP for Qualitative Data
Natural Language Processing can analyze thousands of open-ended survey responses, interview transcripts, and social media posts. But automated coding is not a replacement for human interpretation—it's a complement.
NLP Applications in M&E
| Application | What It Does | Accuracy | Human Oversight Needed? |
|---|---|---|---|
| Topic Modeling | Identifies themes in large text corpora | Good for exploration | High—topics need human labeling |
| Sentiment Analysis | Classifies text as positive/negative | 70-85% for major languages | Medium—review edge cases |
| Named Entity Recognition | Extracts names, places, organizations | 90%+ for English | Low for structured extraction |
| Automated Coding | Assigns codes to qualitative responses | 60-80% agreement | High—cannot replace close reading |
| Summarization | Generates summaries of long texts | Fluent but may miss nuance | High—verify key points |
NLP excels at scale—processing 10,000 survey responses that humans couldn't read. But it struggles with context: sarcasm, cultural references, implicit meanings, contradictions within a response. For sensitive topics (GBV, corruption, mental health), human interpretation remains essential. Use NLP to triage and explore; use humans to understand.
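To make the topic-modeling row concrete, here is a minimal scikit-learn LDA sketch. Note where the essential human step begins: someone must read the top words and judge whether each "topic" means anything (the responses below are invented for illustration):

```python
# Topic modeling with LDA (scikit-learn). Topics are unlabeled word clusters;
# a human analyst must interpret and validate them. Responses are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "The teacher comes late and classes are often cancelled",
    "We need textbooks, there are not enough for all children",
    "The midday meal has improved attendance in our village",
    "Girls stop attending after class five because of distance",
    "Teacher attendance improved after the monitoring visits",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(responses)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print top words per topic -- the human labeling step starts here
words = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```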
Check Your Understanding
Multiple Choice: When using topic modeling (LDA) for analyzing open-ended survey responses, the model outputs:
Multiple Choice: A major validity concern when using NLP for beneficiary feedback analysis is:
Reflection: You have 2,000 qualitative interview transcripts from an education program. Propose a hybrid approach combining NLP automation with human analysis that maintains interpretive validity.
Vandana
NLP tools are evolving rapidly, and it's hard to know what actually works for development contexts. Our workshops include hands-on exercises with real qualitative data—join us to learn practical approaches.
Algorithmic Targeting & Beneficiary Selection
Who gets the transfer? Who receives the scholarship? Algorithmic targeting promises efficiency and objectivity—but can also systematically exclude the most vulnerable.
Traditional vs. ML-Based Targeting
| Method | How It Works | Advantages | Disadvantages |
|---|---|---|---|
| Proxy Means Test (PMT) | Linear regression on asset indicators | Transparent, interpretable | Requires survey; can be gamed |
| Community-Based | Local leaders identify beneficiaries | Local knowledge; legitimacy | Elite capture; bias against marginalized |
| Universal | Everyone in category receives benefit | No exclusion errors within category; low admin cost | Expensive; chosen category may still miss the poorest |
| ML on Administrative Data | Uses existing records (phones, taxes) | Low marginal cost; real-time | Data gaps exclude poorest; bias |
| ML on Satellite Imagery | Predicts poverty from spatial features | No household visit needed | Area-level only; can't identify households |
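Since PMT is the baseline against which ML-based targeting is judged, it is worth making concrete: a regression from observable assets to measured consumption, with a cutoff defining eligibility. A minimal sketch with hypothetical file and column names:

```python
# Proxy means test sketch: regress consumption on observable asset proxies,
# then apply an eligibility cutoff. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

survey = pd.read_csv("household_survey.csv")  # has measured consumption
proxies = ["has_tin_roof", "owns_phone", "household_size", "rooms"]

pmt = LinearRegression().fit(survey[proxies], survey["log_consumption"])
print(dict(zip(proxies, pmt.coef_)))  # transparency: weights are inspectable

# Score the full registry (households without consumption data)
registry = pd.read_csv("registry.csv")
registry["pmt_score"] = pmt.predict(registry[proxies])

# Eligibility: bottom 30% of predicted consumption (the cutoff is a policy choice)
cutoff = registry["pmt_score"].quantile(0.30)
registry["eligible"] = registry["pmt_score"] <= cutoff
print("Share eligible:", registry["eligible"].mean())
```

The interpretability the table credits to PMT is visible here: every weight can be printed, debated, and audited, which is precisely what black-box models give up.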
The Algorithmic Bias Problem
Algorithms can encode and amplify existing biases. If training data underrepresents marginalized groups (women, minorities, people with disabilities), the model will systematically underserve them. In development contexts, this is particularly dangerous because the groups most likely to be excluded from data are often those most in need of services.
Gender Bias
Phone-based targeting excludes women (40% gender gap in mobile ownership in South Asia). Asset-based models may miss female-headed households.
Geographic Bias
Models trained on accessible areas perform poorly in remote regions. Urban training data may not transfer to rural contexts.
Documentation Bias
Algorithms using administrative data exclude those without documents—often the most marginalized (refugees, migrants, street children).
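One concrete safeguard against these biases is a disaggregated error audit: compute exclusion error rates separately for each group before deployment, rather than reporting a single overall accuracy. A minimal sketch, assuming a validation set with ground-truth poverty labels (column names hypothetical):

```python
# Disaggregated targeting audit: exclusion errors by subgroup.
# Assumes a labeled validation set; column names are hypothetical.
import pandas as pd

val = pd.read_csv("validation.csv")
# columns: truly_poor (0/1), predicted_eligible (0/1), gender_head, district

def exclusion_error_rate(df: pd.DataFrame) -> float:
    """Share of truly poor households the model marks ineligible."""
    poor = df[df["truly_poor"] == 1]
    return (poor["predicted_eligible"] == 0).mean()

print(f"Overall exclusion error: {exclusion_error_rate(val):.2%}")
for group, df in val.groupby("gender_head"):
    print(f"  {group}: {exclusion_error_rate(df):.2%}")
for district, df in val.groupby("district"):
    print(f"  {district}: {exclusion_error_rate(df):.2%}")

# Large gaps between groups are a red flag: the model is systematically
# excluding some populations even if overall accuracy looks acceptable.
```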
Check Your Understanding
Multiple Choice: GiveDirectly's research on targeting methods found that machine learning models using satellite imagery:
Multiple Choice: The distinction between "exclusion errors" and "inclusion errors" in targeting is important because:
Reflection: An algorithmic targeting system flags 30% of households as "likely ineligible" but community leaders report these families are among the poorest. How would you investigate this discrepancy and what governance mechanisms would you propose?
Vandana
Targeting decisions are among the most consequential in program design. If you're grappling with inclusion/exclusion tradeoffs or evaluating algorithmic approaches, our workshops cover the evidence and practical frameworks.
Real-Time Monitoring & Anomaly Detection
Dashboard automation, data quality flags, and early warning systems. How AI enables faster response to program problems—and the human oversight that remains essential.
AI-Powered Monitoring Systems
Traditional M&E operates on quarterly or annual cycles. AI enables continuous monitoring that can detect problems in days rather than months.
Anomaly Detection
Algorithms flag unusual patterns: sudden drops in attendance, unexpected expenditure spikes, geographic clustering of complaints. UNHCR uses this for fraud detection in cash programs.
Predictive Early Warning
ML models predict which programs are at risk of failure based on early indicators. WFP's HungerMap combines satellite data, market prices, and conflict indicators for food security alerts.
Automated Data Quality
AI identifies suspicious survey responses: impossible combinations, pattern responses, outliers. Reduces reliance on manual data cleaning.
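To make the anomaly-detection idea above concrete: a simple statistical flag on transfer amounts catches many of the patterns described before any heavier ML is needed. A minimal sketch using a robust (median-based) z-score; column names and the threshold are illustrative:

```python
# Simple anomaly flagging for cash-transfer amounts using a robust z-score
# (median and MAD resist distortion by the very outliers we are hunting).
# Column names and the threshold are illustrative.
import pandas as pd

tx = pd.read_csv("transfers.csv")  # columns: site, amount

def robust_flags(amounts: pd.Series, threshold: float = 3.5) -> pd.Series:
    median = amounts.median()
    mad = (amounts - median).abs().median()
    if mad == 0:
        return pd.Series(False, index=amounts.index)
    z = 0.6745 * (amounts - median) / mad  # standard MAD-based modified z-score
    return z.abs() > threshold

tx["flagged"] = tx.groupby("site")["amount"].transform(robust_flags)
print(tx[tx["flagged"]])  # flags go to a human investigator, not to auto-action
```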
Case Study: WFP's HungerMap LIVE
HungerMap LIVE combines satellite data, market prices, conflict indicators, and remote phone surveys to produce near-real-time estimates of food insecurity, using ML to fill gaps where recent survey data is unavailable. It illustrates the promise of continuous monitoring, and also why the human layer described below remains essential.
AI monitoring systems should augment human judgment, not replace it. Algorithms flag potential issues; humans investigate and decide. Fully autonomous systems risk: (1) False positives disrupting operations, (2) False negatives missing real problems, (3) Gaming once patterns are known, (4) Loss of contextual understanding.
Check Your Understanding
Multiple Choice: WFP's HungerMap uses real-time data integration to:
Multiple Choice: Anomaly detection in financial flows (like cash transfers) works by:
Reflection: Your real-time monitoring dashboard shows a spike in "alerts" but field staff are overwhelmed and ignoring most flags. How would you redesign the system to balance sensitivity with actionability?
Varna
Real-time monitoring sounds exciting but can overwhelm teams with false positives. I can help you design systems that surface actionable insights without creating alert fatigue—let's talk about your monitoring needs.
AI for Adaptive Programming
Feedback loops, course correction, and predictive analytics for implementation. Moving from static program design to continuous learning.
The Adaptive Management Framework
Traditional programs follow linear designs: plan → implement → evaluate → report. Adaptive management uses continuous data to adjust implementation in real-time.
AI enables adaptive management at scale by processing feedback faster than humans can. But adaptation requires: (1) Clear decision rules for when to adapt, (2) Authority to make changes, (3) Budget flexibility, (4) Organizational culture that accepts iteration.
Applications
Beneficiary Feedback Analysis
NLP analyzes thousands of feedback messages to identify emerging issues. Ground Truth Solutions uses this approach for humanitarian programs across Africa.
Predictive Resource Allocation
ML predicts where resources will have highest impact, enabling dynamic reallocation. GiveDirectly experiments with this for cash transfer timing.
Implementation Risk Scoring
Algorithms score implementing partners on risk indicators, enabling proactive support rather than reactive crisis management.
Check Your Understanding
Multiple Choice: The core principle of AI-enabled adaptive programming is:
Multiple Choice: A/B testing in development programs differs from commercial contexts because:
Reflection: Your adaptive learning system suggests dropping a community mobilization component because quantitative indicators show no effect. Qualitative data suggests it's building important social capital. How do you reconcile these signals?
The Limits of AI in Causal Inference
Why ML ≠ RCT. Prediction vs. causation. Heterogeneity detection. Understanding what AI can and cannot tell us about program impact.
Prediction: ML excels at predicting outcomes—who is poor, which programs will fail, what areas need intervention. But prediction doesn't tell you why.
Causation: To know if a program causes outcomes, you need experimental or quasi-experimental methods. Correlation in ML predictions is not evidence of causal impact.
Implication: Use ML for targeting and monitoring; use RCTs and rigorous evaluation for impact assessment. They're complements, not substitutes.
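A small synthetic simulation makes the point sharper than prose: below, a model predicts incomes almost perfectly, yet the program's true effect is zero by construction, and nothing in the model's accuracy reveals that:

```python
# Prediction != causation: a highly accurate predictor of outcomes tells you
# nothing about program impact. Synthetic data; true effect is zero.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
education = rng.normal(10, 3, n)                 # drives income
treated = (education > 11).astype(float)         # program enrolls the better-off
income = 2.0 * education + rng.normal(0, 1, n)   # program has ZERO true effect

X = np.column_stack([education, treated])
model = LinearRegression().fit(X, income)
print("R^2:", model.score(X, income))  # ~0.97: excellent prediction

# The naive treated-vs-untreated comparison wildly overstates impact,
# because assignment was correlated with education (selection bias):
print("Naive 'effect':", income[treated == 1].mean() - income[treated == 0].mean())
print("Coef on treated:", model.coef_[1])  # near 0 once education is held fixed

# We can adjust here only because we simulated the confounder; in the field
# you usually cannot observe it, which is why experimental designs matter.
```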
Where ML Can Support Causal Inference
Heterogeneity Detection
Causal forests and other ML methods can identify which subgroups benefit most from interventions—going beyond average treatment effects.
Covariate Selection
ML can identify which variables to control for in quasi-experimental designs, improving precision while limiting researcher degrees of freedom.
Synthetic Controls
ML-weighted synthetic control methods construct better counterfactuals for case study analysis.
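Causal forests require specialized libraries, but the simplest version of heterogeneity detection, often called a T-learner, needs only scikit-learn: fit one outcome model per arm of a randomized trial, and the gap between their predictions estimates each subgroup's effect. A sketch on synthetic randomized data (valid only because assignment is random):

```python
# T-learner sketch for heterogeneous treatment effects: fit separate outcome
# models for treated and control arms of an RCT; their prediction gap
# estimates each unit's effect. Synthetic randomized data for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 4000
X = rng.uniform(0, 1, (n, 2))   # X[:,0]: baseline literacy, X[:,1]: distance
T = rng.integers(0, 2, n)       # randomized assignment
effect = 2.0 * (1 - X[:, 0])    # true effect is larger for low-literacy units
y = X @ np.array([1.0, -0.5]) + T * effect + rng.normal(0, 0.5, n)

m_treated = RandomForestRegressor(random_state=0).fit(X[T == 1], y[T == 1])
m_control = RandomForestRegressor(random_state=0).fit(X[T == 0], y[T == 0])

tau_hat = m_treated.predict(X) - m_control.predict(X)  # per-unit effect estimate
low_lit = X[:, 0] < 0.5
print("Estimated effect, low-literacy:", tau_hat[low_lit].mean())    # ~1.5
print("Estimated effect, high-literacy:", tau_hat[~low_lit].mean())  # ~0.5
```

This moves beyond the average treatment effect, telling program designers who benefits most, which is exactly the targeting-relevant question ML is suited to answer.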
Common Mistakes
Mistake 1: "Our ML model shows the program works" — No, it shows correlation.
Mistake 2: "We don't need an RCT; we have big data" — Big data amplifies biases; it doesn't eliminate them.
Mistake 3: "AI found the causal mechanism" — AI found patterns; humans interpret mechanisms.
Check Your Understanding
Multiple Choice: The fundamental limitation of ML for impact evaluation is:
Multiple Choice: Causal forests and heterogeneous treatment effects estimation using ML can help evaluators:
Reflection: A colleague claims their ML model "proves" a livelihoods program caused income gains because the model accurately predicts beneficiary incomes. Explain why this claim is problematic and what additional evidence would be needed.
Varna
The prediction-causation distinction trips up many organizations. If you're designing an evaluation or trying to understand what your data can actually tell you, let's work through it together.
Ethics, Bias & Accountability
Algorithmic fairness, data sovereignty, consent in low-literacy contexts. The ethical dimensions that every development practitioner must understand.
Core Ethical Principles for AI in Development
Do No Harm
AI systems can cause harm through exclusion, discrimination, or privacy violations. "Move fast and break things" is not appropriate when lives are at stake.
Informed Consent
People have the right to understand how their data is used. Consent in low-literacy contexts requires more than a signature—it requires genuine understanding.
Transparency & Explainability
People affected by algorithmic decisions deserve to know how those decisions are made. "Black box" models are often inappropriate in development contexts.
Data Sovereignty
Communities should control their own data. Extractive data practices—collecting data without benefit to communities—replicate colonial patterns.
The Algorithmic Accountability Framework
1. Who benefits? Does the AI serve the organization or the beneficiaries?
2. Who is excluded? What groups might be systematically disadvantaged?
3. Who decides? Where is human judgment required, and who has that authority?
4. Who is accountable? When the algorithm fails, who bears responsibility?
5. Who can appeal? What recourse do people have if they're wrongly classified?
Check Your Understanding
Multiple Choice: Virginia Eubanks' "Automating Inequality" demonstrates that algorithmic systems in social services:
Multiple Choice: The concept of "algorithmic accountability" in development contexts requires:
Reflection: Your targeting algorithm systematically underserves female-headed households because training data reflects historical exclusion. Propose a technical and governance response that addresses both the immediate bias and structural causes.
Vandana
Ethics and accountability in AI aren't just compliance checkboxes—they're design principles. Our workshops help teams build responsible AI frameworks that communities can trust. Let's explore what ethical AI means for your work.
Context Assessment: Case Studies
Deep dives into AI implementation in Ghana, India, Bangladesh, and Kenya. What worked, what failed, and why context determines everything.
Ghana: LEAP Social Protection
Key lessons from Ghana:
• PMT models need frequent updating (5-year-old models lose 20-30% accuracy)
• Community verification catches errors algorithms miss
• Data quality in northern regions is systematically worse—algorithms amplify this
• Political pressure for expansion can override targeting accuracy
India: Aadhaar-Based Targeting
India's Aadhaar system enables biometric verification for 1.4 billion people. Its use in social protection demonstrates both potential and risks.
Aadhaar-based systems exclude those who cannot enroll (elderly with faded fingerprints, manual laborers with worn prints, people with disabilities) or whose biometrics fail to authenticate. Multiple deaths have been linked to denied rations due to authentication failures. Efficiency gains must be weighed against exclusion costs.
Bangladesh: bKash and Financial Inclusion
Mobile money in Bangladesh shows how digital infrastructure enables new forms of targeting—and creates new exclusions.
Kenya: M-Pesa and GiveDirectly
Kenya's mobile money infrastructure enables GiveDirectly's cash transfers and demonstrates how digital rails can reduce transaction costs while maintaining accountability.
Check Your Understanding
Multiple Choice: The Aadhaar-linked benefit delivery system in India illustrates both AI's promise and risks because:
Multiple Choice: M-Pesa's success in Kenya suggests that AI/digital systems work best when:
Reflection: Compare two failed AI implementations you've encountered or read about. What contextual factors did implementers underestimate, and what early warning signs were missed?
Vandana
These case studies show that context is everything. If you're adapting AI tools for a new context, our workshops help you anticipate challenges and design appropriate safeguards.
Strategy Building: Build vs. Buy, Pilot Design, Sustainability
Practical guidance for organizations considering AI adoption. How to evaluate vendors, design pilots, build internal capacity, and plan for sustainability.
The Build vs. Buy Decision
| Consideration | Build In-House | Buy/Partner |
|---|---|---|
| Initial Cost | Higher (staff, infrastructure) | Lower (licensing fees) |
| Long-term Cost | Lower if scaled | Higher (recurring fees) |
| Customization | Full control | Limited to vendor options |
| Maintenance | Internal responsibility | Vendor responsibility |
| Data Control | Full ownership | Depends on contract |
| Speed to Deploy | Slower | Faster |
Designing Effective Pilots
1. Define success metrics — What would convince you to scale? What would convince you to stop?
2. Select representative sites — Pilots in easy contexts don't test real-world viability
3. Plan for failure — What's the fallback if AI doesn't work?
4. Document everything — Capture learnings systematically
5. Budget for iteration — First versions rarely work; plan for 2-3 cycles
6. Include sustainability from day one — Who maintains this after the pilot?
Building Internal Capacity
Data Literacy
Train program staff to interpret AI outputs critically. They don't need to build models, but they need to question them.
Technical Capacity
Hire or develop at least one person who can maintain systems, troubleshoot problems, and interface with vendors.
Leadership Buy-in
AI initiatives fail without sustained leadership commitment. Ensure decision-makers understand both potential and limitations.
Check Your Understanding
The "build vs. buy" decision for AI tools should primarily consider:
Multiple ChoiceEffective AI pilot design in development requires:
Multiple ChoiceDesign a 6-month AI pilot for your organization. Include: the problem it solves, success/failure metrics, resource requirements, risk mitigation, and decision criteria for scaling or sunsetting.
ReflectionCapstone Project: AI Integration Assessment
Apply course frameworks to produce a comprehensive AI readiness assessment and implementation plan for a development organization or program of your choice.
Project Overview
The capstone demonstrates your ability to translate theoretical frameworks into practical recommendations for AI adoption in development contexts.
Week 1: Context Analysis
Select an organization or program. Analyze current M&E practices, data infrastructure, and organizational capacity.
Week 2: Opportunity Mapping
Identify specific M&E challenges where AI might add value. Apply the "Is AI necessary?" checklist.
Week 3: Tool Evaluation
Research and evaluate 3-5 potential AI tools or approaches. Assess fit with organizational context.
Week 4: Implementation Plan
Develop a phased implementation roadmap with pilot design, success metrics, and sustainability plan.
Deliverables
- AI Readiness Assessment (2,000 words): Systematic evaluation using course frameworks
- Tool Comparison Matrix: Structured evaluation of 3-5 potential solutions
- Implementation Roadmap: Phased plan with timelines, resources, and risk mitigation
- Ethical Review (500 words): Assessment of ethical considerations and safeguards
Evaluation Criteria
Analytical Rigor (35%)
Accurate application of course frameworks and appropriate use of evidence.
Context Sensitivity (25%)
Understanding of organizational constraints and local conditions.
Practical Feasibility (25%)
Realistic recommendations with attention to implementation challenges.
Communication (15%)
Clarity, organization, and professional presentation.
Varna
Ready to put this learning into practice? Whether you're designing an AI pilot or evaluating an existing system, I'd love to help you think through the strategy. Book a session and let's build something that actually works.
AI for Development Lexicon
Master the vocabulary of AI in development M&E with our comprehensive glossary of 60 essential terms across 8 thematic categories.
Meet the Founders of ImpactMojo
This course is brought to you by two practitioners passionate about democratizing development education.
Varna
Founder & Lead of Learning Design
Development Economist with a PhD, specializing in social impact measurement, gender studies, and development research across South Asia.
Vandana
Co-Founder & Lead of Partnerships
Education and development professional with 15+ years of experience designing impactful learning programs across India.