Organizations operating in fragile contexts often struggle to uphold data quality standards due to insecurity, institutional fragmentation, and limited field access. The COVID-19 pandemic intensified these constraints: it suspended the possibility of direct verification and raised critical questions about the integrity of performance oversight. This research investigates whether remote Data Quality Assessments (DQAs) preserved accountability and verification rigor during this period of operational stress. The research adopts a qualitative case study design and analyzes the remote DQA model implemented across the USAID Somalia portfolio in 2020. The analysis relies on reporting documents, standardized templates, verification protocols, and technical feedback archives to evaluate performance across five data quality dimensions and examine the remote DQA process. It references peer-reviewed studies, donor publications, and evaluation reports from Somalia and similar fragile settings to support contextual interpretation and enable cross-case insight. The research applies thematic content analysis and triangulated document review to assess institutional behavior and the resilience of monitoring systems under constraint. The findings confirm that remote DQAs enabled continuity of oversight and preserved structured verification logic. However, performance in institutional adaptation varied. The research reveals that remote models depend heavily on partner capacity and documentation clarity. Coordination between implementing partners and sub-implementing partners emerged as a strategic determinant of remote verification success. While remote DQAs sustained accountability in non-permissive settings, they could not replicate the contextual depth and diagnostic precision of field-based assessments. The absence of observational evidence hindered the detection of informal practices and constrained verification confidence. The research concludes that remote verification models offer a viable response to operational disruption, but they cannot substitute for the comprehensiveness of hybrid approaches. Hybrid models that combine remote reviews with targeted field visits, when embedded within institutional frameworks, offer a strategic path to reinforce system resilience in fragile and constrained settings. Somalia’s experience highlights the need for donors and implementing partners to institutionalize adaptive oversight mechanisms capable of maintaining data quality under fragility and stress.
Published in | Social Sciences (Volume 14, Issue 4)
DOI | 10.11648/j.ss.20251404.13
Page(s) | 315-331
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright | Copyright © The Author(s), 2025. Published by Science Publishing Group
Keywords | Remote Data Quality Assessment, Fragile Contexts, Monitoring, Evaluation, and Learning, Adaptive Management, Stakeholder Engagement
Standard | Definition |
---|---|
Validity | Data must clearly and adequately represent the intended result, ensuring that what is measured aligns directly with the stated indicator or outcome. |
Integrity | Safeguards must be in place to minimize risks of bias, transcription errors, or intentional manipulation of the data. |
Precision | Data must possess sufficient detail to support sound and informed management decisions, avoiding both overgeneralization and excessive granularity. |
Reliability | Data collection and analysis processes must remain stable and consistent over time to ensure comparability and reproducibility. |
Timeliness | Data must be available at a frequency and currency that allows it to influence timely and effective decision-making processes. |
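To make these five standards concrete, the sketch below shows how they might be expressed as automated checks on a partner's indicator submission. This is an illustrative example only; the record layout, field names, thresholds, and indicator code are hypothetical assumptions and do not come from the study's protocol.

```python
# Illustrative only: a minimal sketch of how the five ADS 201 quality standards
# could be expressed as automated checks on an indicator submission.
# Field names, thresholds, and the record layout are hypothetical assumptions.
from datetime import date

def check_submission(records, reporting_deadline, allowed_indicators):
    """Return a simple pass/fail flag per data quality standard."""
    results = {}
    # Validity: every record reports the indicator it claims to measure.
    results["validity"] = all(r["indicator_code"] in allowed_indicators for r in records)
    # Integrity: no record was edited after sign-off without a documented reason.
    results["integrity"] = all(not r["edited_after_signoff"] or r["edit_justification"]
                               for r in records)
    # Precision: sex and region disaggregations are present where required.
    results["precision"] = all(r.get("sex") and r.get("region") for r in records)
    # Reliability: the same collection tool version is used across the cycle.
    results["reliability"] = len({r["tool_version"] for r in records}) == 1
    # Timeliness: all records were submitted on or before the reporting deadline.
    results["timeliness"] = all(r["submission_date"] <= reporting_deadline for r in records)
    return results

# Hypothetical usage with two records from one reporting cycle.
sample = [
    {"indicator_code": "EG.3.2-24", "edited_after_signoff": False, "edit_justification": "",
     "sex": "F", "region": "Banadir", "tool_version": "v2",
     "submission_date": date(2020, 10, 12)},
    {"indicator_code": "EG.3.2-24", "edited_after_signoff": True, "edit_justification": "typo fix",
     "sex": "M", "region": "Jubaland", "tool_version": "v2",
     "submission_date": date(2020, 10, 14)},
]
print(check_submission(sample, date(2020, 10, 15), allowed_indicators={"EG.3.2-24"}))
```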
Research questions | Data sources | Methods and analytical focus |
---|---|---|
1. To what extent did remote DQA processes uphold data quality standards in the absence of field verification? | DQA reports, indicator reference sheets, documentation templates | Qualitative case synthesis and structured analysis of DQA protocols, including benchmarking reported data against USAID ADS 201 standards (validity, reliability, precision, timeliness, integrity), with structured document coding. |
2. How did IPs adjust internal workflows to meet performance verification requirements? | Partner submissions, communications, and technical notes | Thematic analysis of internal partner records and documentation systems, focused on adaptations to workflows and alignment with data quality logic, process flow mapping and content coding. |
3. What procedural and organizational factors enabled or constrained MEL effectiveness under remote conditions? | Internal Standard Operating Procedures (SOPs), technical memos, verification summaries | Pattern tracing and indicator quality standard-linked document review, comparative matrix analysis of enabling and constraining variables, and triangulation across cases. |
4. What evidence of learning and system adaptation emerged during repeated remote DQA cycles? | Sequential DQA documentation, internal feedback, capacity-building records | Content analysis of technical memos and response matrices, applied through process tracing across cycles to capture documentation improvement and institutional learning. |
5. How applicable is the Somalia remote DQA model to other fragile contexts? | Donor strategy documents, global MEL reports, USAID/UN/World Bank publications | Comparative synthesis and strategic distillation using secondary literature, designed to assess generalizability, contextual coherence, and operational relevance in similar fragile environments. |
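The structured document coding referenced in the methods column above can be pictured with a minimal sketch that tallies coded excerpts against the five ADS 201 standards to build a simple comparative matrix. The document names, codes, and excerpt counts are hypothetical and serve only to illustrate the mechanics of the coding step.

```python
# Illustrative sketch of structured document coding: tagging excerpts from DQA
# documents against the five quality standards and building a simple
# document-by-standard comparison matrix. All entries are hypothetical.
from collections import Counter, defaultdict

STANDARDS = ["validity", "integrity", "precision", "reliability", "timeliness"]

# Each coded excerpt: (source document, standard it was coded against)
coded_excerpts = [
    ("Partner_A_DQA_report", "validity"),
    ("Partner_A_DQA_report", "timeliness"),
    ("Partner_B_verification_log", "reliability"),
    ("Partner_B_verification_log", "reliability"),
    ("Partner_B_verification_log", "integrity"),
]

# Build a document-by-standard frequency matrix.
matrix = defaultdict(Counter)
for document, code in coded_excerpts:
    matrix[document][code] += 1

for document, counts in matrix.items():
    row = {standard: counts.get(standard, 0) for standard in STANDARDS}
    print(document, row)
```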
Step | Description | Primary Data Sources | Secondary Data Sources |
---|---|---|---
Step 0 | Preparation | Notification letters, indicator selection documents | MEL policy briefs, remote monitoring guidelines |
Step 1 | Desk review and tool finalization | Activity Monitoring, Evaluation, and Learning (AMEL) Plans, DQA tools, Indicator Performance Tracking Tables (IPTTs) | Donor evaluation standards, literature on data verification
Step 2 | Sensitization of USAID staff | Training materials, participation rosters | Reports on stakeholder engagement |
Step 3 | Central-level verification | Data systems, Performance Plan Report (PPR) submissions | USAID ADS 201 documentation, comparative DQA reports |
Step 4 | Intermediary-level verification | Aggregation tools, submission records | Electronic system reviews |
Step 5 | Primary-level verification | Source documents, disaggregated logs | Global studies on verification logic |
Step 6 | Analysis and dissemination | Workshop summaries, preliminary findings memos | Cross-case MEL learning resources |
Step 7 | Final DQA report | Consolidated DQA report, correction trackers | Program-level data quality benchmarks |
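As an illustration of how the eight-step workflow above could be tracked during a remote cycle, the following sketch represents each step as a checklist item with its supporting evidence. Step names mirror the table; the status fields, file names, and tracking logic are assumptions made for the example, not part of the documented USAID Somalia process.

```python
# Illustrative sketch of the eight-step remote DQA workflow as a simple
# checklist structure; step names mirror the table above, while the status
# values and tracking logic are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class DQAStep:
    number: int
    name: str
    completed: bool = False
    evidence: list = field(default_factory=list)  # e.g., file names of supporting documents

workflow = [
    DQAStep(0, "Preparation"),
    DQAStep(1, "Desk review and tool finalization"),
    DQAStep(2, "Sensitization of USAID staff"),
    DQAStep(3, "Central-level verification"),
    DQAStep(4, "Intermediary-level verification"),
    DQAStep(5, "Primary-level verification"),
    DQAStep(6, "Analysis and dissemination"),
    DQAStep(7, "Final DQA report"),
]

def mark_done(steps, number, evidence):
    """Record completion of a step together with the documents that support it."""
    for step in steps:
        if step.number == number:
            step.completed = True
            step.evidence.extend(evidence)

mark_done(workflow, 1, ["AMEL_plan_v3.docx", "DQA_tool_final.xlsx"])
pending = [s.name for s in workflow if not s.completed]
print("Next pending step:", pending[0])
```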
Dimension | Observed Performance | Variation Across Partners | Key Evidence |
---|---|---|---
Indicator Alignment | Most partners adhered to standardized templates and defined indicators. Some custom indicators lacked clarity and consistent interpretation. | High alignment for economic indicators. Governance and custom indicators showed inconsistencies in definition and documentation. | PIRS, AMELPs, DQA tools |
Source Documentation | Traceability remained uneven. Several submissions lacked full metadata or used partially digitized formats. | Partners with centralized digital systems ensured better documentation. Others relied on incomplete or non-digitized archives. | Submission matrices, verification logs |
Data Consistency | Moderate inconsistencies appeared across reporting cycles, largely due to poor version control or misaligned reporting tools. | Stronger consistency emerged among partners with internal data audits. Others submitted conflicting or outdated values. | IPTTs, quarterly reports, change logs |
Responsiveness to Queries | Most partners responded within deadlines. Response quality ranged from comprehensive with audit trails to fragmented replies lacking supporting details. | Well-prepared partners maintained clear response logs. Less prepared organizations gave vague or incomplete explanations. | Clarification emails, response trackers, technical notes |
Process Adaptation | Some institutions revised MEL workflows and adapted tools to remote verification protocols. Others continued pre-pandemic practices without adjustments. | Adaptive partners updated AMELPs and MEL instruments. Others applied improvised fixes or lacked process revision. | Revised AMELPs, internal communications |
Verification Confidence | Confidence improved when structured documentation supported each indicator. Confidence declined when submissions lacked coherence or key attachments. | Higher confidence in organizations with strong M&E culture. Lower where external technical support was required to clarify submissions. | DQA ratings, partner feedback summaries, MEL dashboards |
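A simple way to picture how the per-dimension observations above could feed a coarse verification-confidence rating is sketched below. The rating scale, weights, partner names, and scores are hypothetical and are not drawn from the study's data.

```python
# Illustrative sketch of turning per-dimension observations into a coarse
# verification-confidence rating per partner. The scale, bands, and partner
# scores are hypothetical assumptions for demonstration only.
RATING = {"strong": 2, "moderate": 1, "weak": 0}

partner_scores = {
    "Partner A": {"indicator_alignment": "strong", "source_documentation": "strong",
                  "data_consistency": "moderate", "responsiveness": "strong",
                  "process_adaptation": "moderate"},
    "Partner B": {"indicator_alignment": "moderate", "source_documentation": "weak",
                  "data_consistency": "weak", "responsiveness": "moderate",
                  "process_adaptation": "weak"},
}

def confidence(scores):
    """Average the dimension ratings and map them to a confidence band."""
    avg = sum(RATING[value] for value in scores.values()) / len(scores)
    if avg >= 1.5:
        return "high"
    if avg >= 0.75:
        return "medium"
    return "low"

for partner, scores in partner_scores.items():
    print(partner, "->", confidence(scores))
```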
Dimension | Observed Performance | Key Insight
---|---|---
Data Quality | Performance varied. Strong performers maintained metadata; others missed disaggregations and lacked justifications. | Structured chains can uphold quality if internal practices remain disciplined under remote conditions. |
Institutional Adaptation | Adaptive organizations embedded DQA logic into workflows; others met only the minimum requirements. | Established standards matter more than the mere availability of digital tools.
Stakeholder Engagement | Structured submissions enabled productive engagement; unstructured documentation weakened dialogue. | Clear submission protocols and early preparation improve feedback uptake. |
Adaptive Management Integration | DQA feedback triggered AMELP revisions and system updates in several cases, reflecting internal learning loops. | Iterative remote DQAs can foster adaptive management in fragile settings. |
Remote vs. In-Person Verification | Remote DQAs maintained continuity but could not replicate field-level insights or context-specific validation. | Remote verification is feasible but insufficient on its own; hybrid models offer more robust oversight. |
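The hybrid model recommended above can be illustrated with a short sketch that uses remote DQA confidence ratings to prioritize a limited number of targeted field visits. Site names, ratings, and the visit budget are hypothetical assumptions rather than elements of the Somalia portfolio.

```python
# Illustrative sketch of a hybrid approach: use remote DQA confidence ratings
# to prioritize a small number of targeted field visits. Sites, ratings, and
# the visit budget are hypothetical assumptions.
sites = [
    {"site": "Site 1", "remote_confidence": "low"},
    {"site": "Site 2", "remote_confidence": "high"},
    {"site": "Site 3", "remote_confidence": "medium"},
    {"site": "Site 4", "remote_confidence": "low"},
]

PRIORITY = {"low": 0, "medium": 1, "high": 2}  # lower confidence -> visit first
FIELD_VISIT_BUDGET = 2  # number of sites that can be reached safely this cycle

ranked = sorted(sites, key=lambda s: PRIORITY[s["remote_confidence"]])
field_visits = [s["site"] for s in ranked[:FIELD_VISIT_BUDGET]]
remote_only = [s["site"] for s in ranked[FIELD_VISIT_BUDGET:]]

print("Targeted field visits:", field_visits)
print("Remote review only:", remote_only)
```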
DQA | Data Quality Assessment |
ADS | Automated Directives System |
AMEL | Activity Monitoring, Evaluation, and Learning
IPs | Implementing Partners |
IPTT | Indicator Performance Tracking Table |
MEL | Monitoring, Evaluation, and Learning |
PIRS | Performance Indicator Reference Sheets |
PPR | Performance Plan Report |
Sub-IPs | Sub-Implementing Partners
UNDP | United Nations Development Programme |
USAID | United States Agency for International Development |
APA Style
Ba, A., Muga, T., Okwarah, P., & Ali, M. (2025). Remote Data Verification Under Fragility and Operational Stress: Insights from Somalia During COVID-19. Social Sciences, 14(4), 315-331. https://doi.org/10.11648/j.ss.20251404.13
ACS Style
Ba, A.; Muga, T.; Okwarah, P.; Ali, M. Remote Data Verification Under Fragility and Operational Stress: Insights from Somalia During COVID-19. Soc. Sci. 2025, 14(4), 315-331. doi: 10.11648/j.ss.20251404.13