Why Mixed Methods?
In the world of development evaluation, the debate between quantitative and qualitative approaches has raged for decades. Quantitative purists argue that only numbers can provide the rigour needed for credible evidence. Qualitative champions counter that numbers alone miss the lived experiences that give meaning to development outcomes. Mixed methods evaluation offers a third way — one that harnesses the strengths of both traditions while compensating for their individual weaknesses.
For South Asian development programmes operating in contexts of extraordinary complexity — from the caste dynamics of rural Bihar to the post-conflict landscapes of Sri Lanka — relying on a single methodological tradition is not just limiting, it can be misleading. A randomised controlled trial might tell you that a livelihood programme increased household income by 12%, but it cannot explain why some communities embraced the programme while others resisted it. Conversely, rich ethnographic accounts of programme participation may not tell you whether outcomes can be attributed to the intervention rather than external factors.

Sequential vs Concurrent Designs
The two fundamental architectures for mixed methods research are sequential and concurrent designs, each suited to different evaluation questions and resource constraints.
Sequential explanatory design begins with quantitative data collection and analysis, followed by qualitative inquiry that helps explain or elaborate on the quantitative findings. This is particularly useful when survey results reveal unexpected patterns. For instance, an evaluation of India's Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) might first analyse wage data across districts, then conduct focus groups in outlier districts — those with unusually high or low uptake — to understand contextual factors driving variation.
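The quantitative step of that case-selection logic — flagging outlier districts for qualitative follow-up — can be sketched in a few lines. Everything below is a hypothetical illustration: the district names, uptake figures, and the 1.5-standard-deviation cut-off are assumptions for demonstration, not MGNREGA data.

```python
from statistics import mean, stdev

# Hypothetical uptake rates by district (e.g. person-days generated
# per registered household). Purely illustrative values.
uptake = {
    "District A": 42, "District B": 45, "District C": 44,
    "District D": 18, "District E": 47, "District F": 71,
    "District G": 43, "District H": 46,
}

def outlier_districts(rates, z_threshold=1.5):
    """Flag districts whose uptake is unusually high or low:
    more than z_threshold standard deviations from the mean."""
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    return {
        name: round((value - mu) / sigma, 2)
        for name, value in rates.items()
        if abs(value - mu) / sigma > z_threshold
    }

# Districts selected for qualitative follow-up (focus groups),
# with their z-scores:
print(outlier_districts(uptake))
```

With these illustrative figures, only the unusually low and unusually high districts are flagged, and the focus groups are then fielded there. The threshold is a design choice, not a statistical law — in practice it would be set with the evaluation team.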
Sequential exploratory design reverses this order: qualitative research comes first to develop hypotheses, instruments, or typologies, which are then tested quantitatively. This approach works well when evaluating programmes in under-researched contexts. A team evaluating a new adolescent health programme in Nepal's Terai region might begin with participatory workshops to understand local health-seeking behaviours before designing a survey instrument that reflects local realities rather than imported assumptions.
Concurrent or convergent design involves collecting both types of data simultaneously and merging them during analysis. This is the most resource-intensive approach but offers the richest picture. An evaluation of a watershed management programme in Maharashtra might simultaneously conduct a household survey measuring agricultural productivity and ethnographic fieldwork documenting community decision-making processes around water allocation.
"The goal of mixed methods is not simply to collect two types of data, but to integrate them in ways that yield insights neither could produce alone." — John Creswell
The Integration Challenge
The most common failure in mixed methods evaluation is what researchers call "parallel play" — collecting both quantitative and qualitative data but never truly integrating them. The final report contains a statistics chapter and a qualitative findings chapter that sit side by side without speaking to each other. True integration requires deliberate strategies at the design, methods, interpretation, and reporting stages.
At the design level, integration means ensuring that both strands address the same or complementary evaluation questions, grounded in a clear theory of change. At the methods level, it involves techniques like using qualitative themes to create quantitative variables, or selecting qualitative cases based on quantitative results. At the interpretation level, researchers must develop joint displays — matrices or frameworks that bring both data types into conversation. And at the reporting level, findings should be woven together rather than presented in separate sections.
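The joint-display idea at the interpretation level amounts to merging the two strands site by site so they can be read against each other. A minimal sketch follows; the village names, income figures, and themes are all hypothetical placeholders, not findings from any evaluation.

```python
# Quantitative strand: % change in household income per site (hypothetical).
survey_results = {
    "Village A": 14.2,
    "Village B": 1.1,
    "Village C": 12.8,
}

# Qualitative strand: dominant theme from fieldwork per site (hypothetical).
field_themes = {
    "Village A": "strong self-help group leadership",
    "Village B": "elite capture of programme benefits",
    "Village C": "active panchayat support",
}

def joint_display(quant, qual):
    """Merge both strands into rows for a side-by-side matrix,
    keeping only sites present in both datasets."""
    return [
        {"site": site, "income_change_pct": quant[site], "theme": qual[site]}
        for site in sorted(quant)
        if site in qual
    ]

for row in joint_display(survey_results, field_themes):
    print(f"{row['site']}: {row['income_change_pct']:+.1f}% | {row['theme']}")
```

Even this toy matrix illustrates the payoff: the outlier in the numbers (Village B's flat income) sits directly beside the qualitative theme that may explain it, instead of in a separate chapter.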
South Asian Examples in Practice
Several landmark evaluations in South Asia demonstrate the power of well-designed mixed methods approaches. The evaluation of Bangladesh's BRAC Ultra-Poor Graduation Programme combined longitudinal survey data tracking economic indicators with detailed case studies of participant households. The quantitative data showed significant income gains, but the qualitative component revealed that the most transformative change was not economic but social — participants reported a shift in self-identity from "dependent" to "productive community member."
In India, the evaluation of the National Rural Health Mission's Accredited Social Health Activist (ASHA) programme used a concurrent design that paired facility-level service delivery data with narrative interviews with ASHAs themselves. The quantitative data showed improvements in institutional delivery rates, while the qualitative data uncovered the informal negotiation strategies ASHAs used to convince reluctant families — insights that were critical for programme scale-up but invisible in the numbers alone.

Practical Considerations for Evaluators
Designing a mixed methods evaluation in South Asia requires careful attention to several practical realities. First, team composition matters enormously. Statisticians and ethnographers often operate with different epistemological assumptions. Building a team that genuinely values both traditions — rather than treating one as supplementary — is essential for quality integration and is a hallmark of a strong MEL culture.
Second, timing and sequencing must account for field realities. Monsoon seasons, harvest periods, election cycles, and festival calendars all affect data collection in South Asia. A sequential design that plans qualitative fieldwork during monsoon season in flood-prone areas of Assam will face serious implementation challenges.
Third, budget allocation should reflect genuine commitment to both strands. Too often, the qualitative component receives a fraction of the budget and is treated as illustrative anecdote rather than rigorous evidence. A credible mixed methods evaluation typically splits the fieldwork budget somewhere between 40:60 and 60:40 across the two strands, with additional resources reserved for integration activities.
Finally, ethical considerations multiply in mixed methods designs. Participants in qualitative components share personal narratives that require careful handling, particularly in contexts where caste, gender, or political dynamics create vulnerability. Informed consent processes must be adapted for each data collection method, and confidentiality protocols must account for the identifiability of qualitative participants in small communities.
Getting Started
For evaluation teams new to mixed methods, the best starting point is a clear articulation of what each method will contribute to the evaluation questions. If the quantitative component could answer the question alone, adding qualitative work merely for decoration wastes resources and participants' time. Similarly, if the qualitative inquiry is sufficient, collecting survey data for the appearance of rigour serves no one. Mixed methods should be chosen when — and only when — the evaluation questions genuinely require both types of evidence to be answered adequately. When that condition is met, the resulting evaluation will be far more useful to programme managers, policymakers, and the communities the programme serves. For practical guidance on structuring your evaluation framework, try our MEL Plan Lab.