Research Article | Peer-Reviewed

Remote Data Verification Under Fragility and Operational Stress: Insights from Somalia During COVID-19

Received: 21 May 2025     Accepted: 7 June 2025     Published: 30 June 2025
Abstract

Organizations operating under fragile contexts often struggle to uphold data quality standards due to insecurity, institutional fragmentation, and limited field access. The COVID-19 pandemic intensified these constraints. It suspended the possibility of direct verification and posed critical questions about the integrity of performance oversight. This research investigates whether remote Data Quality Assessments (DQAs) preserved accountability and verification rigor during this period of operational stress. The research adopts a qualitative case study design and analyzes the remote DQA model implemented across the USAID Somalia portfolio in 2020. The analysis relies on reporting documents, standardized templates, verification protocols, and technical feedback archives to evaluate performance across five data quality dimensions and examine the remote DQA process. It references peer-reviewed studies, donor publications, and evaluation reports from Somalia and similar fragile settings to support contextual interpretation and enable cross-case insight. The research applies thematic content analysis and triangulated document review to assess institutional behavior and the resilience of monitoring systems under constraint. The findings confirm that remote DQAs enabled continuity of oversight and preserved structured verification logic. However, performance in institutional adaptation varied. The research reveals that remote models depend heavily on partner capacity and documentation clarity. Coordination between implementing partners and sub-implementing partners emerged as a strategic determinant of remote verification success. While remote DQAs allowed accountability in non-permissive settings, they could not replicate the contextual depth and diagnostic precision of field-based assessments. The absence of observational evidence hindered the detection of informal practices and constrained verification confidence. The research concludes that remote verification models offer a viable response to operational disruption, but they cannot substitute for the comprehensiveness of hybrid approaches. Hybrid models that combine remote reviews with targeted field visits, once embedded within institutional frameworks, offer a strategic path to reinforce system resilience in fragile and constrained settings. Somalia’s experience highlights the need for donors and implementing partners to institutionalize adaptive oversight mechanisms capable of maintaining data quality under fragility and stress.

Published in Social Sciences (Volume 14, Issue 4)
DOI 10.11648/j.ss.20251404.13
Page(s) 315-331
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Remote Data Quality Assessment, Fragile Contexts, Monitoring, Evaluation, and Learning, Adaptive Management, Stakeholder Engagement

1. Introduction
1.1. Significance of the Research
Development organizations depend on credible data to ensure program accountability, transparency, and responsiveness. In fragile settings, this imperative becomes more critical and more difficult to achieve. This research contributes to the global conversation on sustaining data oversight when traditional mechanisms fail. The investigation derives its relevance from the emphasis on institutional practices rather than technical routines. It introduces new perspectives for decision-makers who aim to uphold verification standards amid fragility and operational constraints.
Donors and Implementing Partners (IPs) face persistent dilemmas when external shocks disrupt standard monitoring. The COVID-19 pandemic challenged verification models across regions, and Somalia offers a telling example. The country’s security environment and mobility restrictions during 2020 and 2021 required the United States Agency for International Development (USAID) to transition from in-person Data Quality Assessments (DQAs) to a fully remote model. This shift imposed new demands on internal systems and accountability mechanisms. It also introduced knowledge gaps in understanding how verification can occur without physical access. Hilhorst and Mena emphasize this concern in their analysis of governance during crises .
The research emphasized that institutional behavior, not technological substitution, determined the capacity to uphold Monitoring, Evaluation, and Learning (MEL) standards. This approach addresses concerns raised by Hur Hassnain and Simona Somma, who argue that institutional coherence, not just tools, defines MEL system performance in fragile contexts . USAID Somalia’s case provides a structured opportunity to examine these dynamics and inform policy dialogue across similarly constrained development environments.
Program staff, technical advisers, and policymakers often call for scalable monitoring systems that function across settings. The relevance of this research lies in its capacity to extract lessons from real-world implementation. It moves beyond abstract policy guidance. Remote data collection, virtual engagement, and structured templates have become common strategies, yet there remains limited empirical evidence on how these elements interact under duress. This research offers decision-relevant insights into the operational conditions, capacities, and institutional logics that shape MEL effectiveness.
In particular, donor institutions increasingly seek models that reinforce accountability without increasing risk. As digital tools become embedded in MEL strategies, the research underscores the need to understand how these tools perform when institutions must rely on internal verification logic. The evidence generated supports broader discussions on adaptive MEL systems, public sector resilience, and remote governance mechanisms. Rodo et al. demonstrated similar findings in health and nutrition programs, where organizational adaptation ensured monitoring efficacy under pressure .
The research serves as a resource for stakeholders engaged in reforming data assurance in fragile states. The focus on institutional adaptation, rather than procedural substitution, aligns with global priorities for more context-sensitive and sustainable oversight models. Kelly et al. emphasized the need for agile protocols in MEL under disruptions . USAID Somalia’s case illustrates that the capacity to interpret and apply verification logic can stabilize accountability even under mobility constraints. As Ba argued, the effectiveness of MEL systems depends on their resilience, responsiveness, consistency, and clarity in attributing performance . Okhmatovskiy and David highlighted that in high-risk environments, success depends less on external evaluation and more on internal systems that align incentives, guidance, and compliance .
1.2. Scientific Value
MEL systems serve as strategic tools for performance accountability in development programming. In fragile settings, this function becomes essential as institutions operate under limited visibility and intermittent access. This research examines how internal verification mechanisms responded to operational constraints. It also assesses whether standardized procedures were sufficient to maintain data quality without physical deployment. The USAID Somalia 2020 DQA case offers empirical insight into institutional adaptation under systemic stress.
In 2020, USAID Somalia’s shift to remote DQAs required IPs to align internal workflows with pre-defined verification logic. The Mission upheld the quality standards outlined in the Automated Directives System (ADS 201), USAID’s operational policy for the program cycle. The standards comprise validity, reliability, integrity, precision, and timeliness (Table 1). Responsibility for enforcement shifted to institutional processes. Field visits gave way to structured virtual sessions, template-based documentation, and systematic responses to technical queries. Organizational capacity became the primary determinant of verification success.
Table 1. USAID Data Quality Standards (Automated Directives System, ADS 201, Version 2020).

Validity: Data must clearly and adequately represent the intended result, ensuring that what is measured aligns directly with the stated indicator or outcome.
Integrity: Safeguards must be in place to minimize risks of bias, transcription errors, or intentional manipulation of the data.
Precision: Data must possess sufficient detail to support sound and informed management decisions, avoiding both overgeneralization and excessive granularity.
Reliability: Data collection and analysis processes must remain stable and consistent over time to ensure comparability and reproducibility.
Timeliness: Data must be available at a frequency and currency that allows it to influence timely and effective decision-making processes.

Some partners internalized DQA requirements, applied audit trails, and produced traceable documentation. Others failed to meet indicator logic standards despite using identical formats. These contrasts affirm White et al.’s assertion that institutional asymmetry shapes MEL performance in fragile contexts . As Kabonga explained, the effectiveness of verification mechanisms depends on documentation logic and procedural discipline rather than on digital tools .
Silva et al. argued that fragility compounds institutional strain . Yet the Somalia case revealed instances of adaptive reinforcement. Several organizations streamlined internal review cycles, linked source data to indicators, and shifted from reactive to anticipatory responses. Hur Hassnain and Simona Somma associated such transformations with MEL system resilience . These observations show structured adaptation under duress, not improvisation.
Learning processes embedded within verification cycles shaped the operational trajectory. Canter and Atkinson defined adaptive systems as those capable of incorporating feedback through deliberate reflection . USAID’s iterative remote DQA structure allowed partners to integrate corrections. They refined templates and aligned with indicator expectations over time. Ba described MEL effectiveness as a function of clarity, role coherence, and iterative engagement —each of which emerged in Somalia through structured exchanges rather than isolated audits.
The use of Customer Relationship Management (CRM)-like templates and structured documentation flows, as emphasized by Albrecht, proved effective in enhancing internal verification logic . Partners who maintained version control, document references, and indicator alignment delivered higher quality assurance. Stvilia et al. (2021) stressed that digital oversight only functions where institutions preserve procedural traceability and safeguard data from distortion . These conditions held across Somalia’s most effective DQA implementations.
The research also aligns with recent recommendations to design hybrid DQA systems that integrate remote and in-person components. Although the Somalia case remained exclusively remote due to the pandemic, it identified several principles critical for hybrid models. Key elements include pre-engagement preparation, standardized indicator definitions, secure communication infrastructure, and anticipatory guidance. Although these elements are often treated as theoretical, the USAID Somalia experience translated them into operational practice.
Cai and Zhu emphasized accuracy and completeness as essential to data assurance in digital environments . In Somalia, remote DQA processes succeeded through partner adherence to three conditions: alignment of reported data with source documentation, precise application of indicator attribution, and consistent data aggregation. These verification patterns provided technical confidence in the absence of site visits.
Findings from Carment et al. (2022) show MEL system deterioration across other Fragile and Conflict-Affected States (FCAS) . Somalia diverged from that pattern. Most of the partners treated remote sessions not as a substitute but as an accountability mechanism. Follow-up reports improved in many cases. Documentation became more structured. Ebrahim et al. highlighted MEL system fragility under COVID-19 . Somalia’s DQA response reflected procedural alignment and mission-level coordination. It demonstrated institutional steadiness despite external volatility.
The scientific value of this research lies in the assessment of procedural responses, the analysis of institutional variation, and the alignment of observed practices with resilience theory. The research moves beyond assumptions to examine actual behavior under operational pressure. It clarifies how MEL systems maintain verification integrity when conventional oversight mechanisms do not function.
1.3. Research Aim and Questions
This research investigates the institutional mechanisms, procedural safeguards, and technical routines that enabled USAID Somalia to sustain performance verification when conventional oversight collapsed. It focuses on how remote DQA protocols operated under severe constraints and evaluates whether structured internal systems can uphold data accountability in fragile environments. Rather than appraising the protocol alone, the research examines the conditions under which remote approaches delivered credible results and supported institutional learning.
This investigation fills conceptual and operational gaps in the MEL literature through empirical analysis of how verification standards remain effective under constrained conditions. It also examines whether IPs adapted successfully. It identifies lessons that can inform the development of future verification systems. The findings aim to inform the design of more resilient oversight models that maintain performance logic even under conditions of fragility and resource scarcity.
The research is guided by the following five questions:
(1) To what extent did remote DQA processes uphold data quality standards in the absence of field verification?
(2) How did IPs adjust internal workflows to meet performance verification requirements?
(3) What procedural and organizational factors enabled or constrained MEL effectiveness under remote conditions?
(4) What evidence of learning and system adaptation emerged during repeated remote DQA sessions?
(5) How applicable is the Somalia remote DQA model to other fragile contexts?
These questions frame the research around institutional behavior, verification discipline, and adaptive learning. The goal is not only to assess a single case but also to draw broader conclusions on how data accountability can persist when conventional systems fail. The research contributes to ongoing debates about the future of MEL in high-risk and fragile settings during periods of operational shock.
2. Literature Review
Conceptual Framework
Figure 1. Conceptual framework: data quality, MEL systems, and adaptive management in fragile contexts – Authors.
This research adopts a multi-dimensional framework that integrates institutional theory, adaptive management, and MEL system resilience to explain how accountability mechanisms remain functional in the absence of field-based oversight (Figure 1). In fragile settings, performance monitoring often depends less on physical verification and more on procedural integrity, internal coherence, and structured engagement. These elements underpin a resilient MEL architecture that can operate under constraints.
Institutional theory emphasizes the role of rule-based routines and standardized roles in ensuring consistency and reliability. According to Cai and Zhu, data governance frameworks gain robustness when institutional logic supports traceability, structured workflows, and transparent aggregation protocols . In the context of remote DQA, this perspective highlights the extent to which organizational systems, not individual actions, sustain performance oversight. The research evaluates how USAID Somalia’s internal mechanisms substituted for traditional supervisory modalities. It assesses whether they maintained compliance with indicator definitions and reporting standards.
Procedural alignment across IPs served as a key dimension. Batini et al. identified documentation logic and source consistency as prerequisites for meaningful verification . The research framework applies the concept to assess whether partner organizations in Somalia structured performance data around consistent attribution. It examines version control and indicator alignment as core elements. Together, they establish the operational baseline for assessing data validity, integrity, precision, reliability, and timeliness, which are fundamental criteria under the USAID ADS 201 standards.
Adaptive management provides a second pillar for the conceptual framework. Rather than viewing oversight as a static process, adaptive systems evolve through feedback cycles, iterative correction, and reflection on performance gaps. Dutra et al. underscored that feedback mechanisms enhance system responsiveness and allow institutions to integrate learning over time . In Somalia, remote DQAs acted as embedded checkpoints that allowed partners to adjust their documentation protocols. They also aligned internal monitoring systems with programmatic expectations. This adaptive logic helped create continuity in oversight despite mobility restrictions.
Woodall et al. further affirmed that hybrid data quality frameworks, which combine manual review with standardized scoring protocols, are more likely to ensure consistency across time and settings . The framework applied in this research tests that proposition. It analyzes how USAID partners in Somalia institutionalized DQA templates, feedback forms, and reporting matrices. It treats these tools not as static instruments, but as dynamic components of a learning ecosystem.
Stakeholder engagement theory also contributes to the framework. Al-Qadi demonstrated that participatory verification models support compliance and foster shared accountability, particularly in digitally constrained environments . Within this research, virtual DQA sessions served as arenas for technical exchange, evidence submission, and clarification of indicator logic. This interaction ensured that performance dialogue occurred within a structured setting, aligned with predefined roles and supported by verifiable documentation.
From a systems design perspective, Kim et al. proposed that organizational maturity influences the reliability of Internet of Things (IoT) data workflows . Although not based in the development sector, their findings apply to remote MEL environments. In these contexts, system maturity determines whether tools yield credible outputs. The framework extends this logic to assess whether USAID’s IPs demonstrated maturity through procedural discipline and responsiveness during remote engagements.
Finally, the research framework incorporates performance resilience as a cross-cutting criterion. Saleh and Karia advanced a definition of MEL maturity that includes institutional coordination, accountability infrastructure, and the application of feedback . These elements are used in this research to interpret the variation in DQA outcomes across USAID Somalia partners. Rather than attribute success to digital platforms, the analysis centers institutional behavior. It emphasizes internal verification capacity and role clarity as the explanatory variables.
3. Materials and Methods
3.1. Research Design
This research applies a qualitative case study design to examine how remote DQA processes operate under operational constraints. The case study framework enables detailed and context-sensitive analysis of institutional responses to remote oversight in fragile environments. Yin confirms that case studies offer a rigorous means to investigate real-life phenomena within bounded systems, particularly when contextual and procedural boundaries appear blurred . This design aligns with the research objective, which aims to produce empirical evidence on institutional behavior and adaptive performance verification in Somalia during the COVID-19 crisis.
The research adopts an interpretivist stance and posits that institutional adaptation and accountability result from embedded organizational routines shaped through local contexts, rather than through the application of standardized technical tools alone . This perspective reflects the need to treat verification not as a procedural formality, but as an evolving system influenced by technical arrangements, interpersonal relations, and structural constraints . Al-Qadi stresses that research in fragile settings must address complex, decentralized decision-making dynamics . This case study design offers the appropriate lens. It traces accountability mechanisms through multiple layers of institutional engagement and constraint.
The analytical plan corresponds directly to the research questions. Institutional review and process tracing address documentation logic and internal system coherence . Stakeholder interactions reveal how learning occurs across remote DQA sessions. Document-based changes provide insight into the procedural adaptation of IPs. Armstrong supports the use of document analysis for inquiries that involve standardized templates and iterative reporting structures . Morgan and Wood, along with Sebar et al., further confirm the validity of document-based approaches. Such methods are appropriate for assessing institutional behavior in contexts with limited field access .
Aligned with Ba, who defined MEL system effectiveness through embedded feedback and adaptation mechanisms, this design emphasizes explanatory depth over procedural measurement . The aim is to understand how USAID Somalia’s remote verification model functioned under limited access. It also seeks to identify institutional variation in adaptation and draw broader lessons for MEL resilience. The methodology integrates retrospective documentation, verification reports, and technical memos to generate structural and behavioral insights.
Strategic triangulation strengthens the analytical framework. The research integrates program reports, Activity Monitoring Evaluation and Learning (AMEL) Plans, verification matrices, and DQA templates to ensure consistency and reinforce validation . This multi-source design reflects the recommendation from Pianese et al. for robust empirical research. It clarifies how remote accountability can function without physical presence . Cho and Lee also affirm the importance of linking thematic and pattern-based approaches to increase analytical coherence in case-based research .
To ensure full alignment with the research questions, Table 2 shows the mapping of each question to its corresponding analytical lens. This structured approach enhances transparency. It enables focused inquiry into the interplay between institutional behavior, adaptive processes, and the operational logic of remote MEL systems.
Table 2. Research design matrix.

1. To what extent did remote DQA processes uphold data quality standards in the absence of field verification?
Data sources: DQA reports, indicator reference sheets, documentation templates.
Methods and analytical focus: Qualitative case synthesis and structured analysis of DQA protocols, including benchmarking reported data against USAID ADS 201 standards (validity, reliability, precision, timeliness, integrity), with structured document coding.

2. How did IPs adjust internal workflows to meet performance verification requirements?
Data sources: Partner submissions, communications, and technical notes.
Methods and analytical focus: Thematic analysis of internal partner records and documentation systems, focused on adaptations to workflows and alignment with data quality logic, with process flow mapping and content coding.

3. What procedural and organizational factors enabled or constrained MEL effectiveness under remote conditions?
Data sources: Internal Standard Operating Procedures (SOPs), technical memos, verification summaries.
Methods and analytical focus: Pattern tracing and indicator quality standard-linked document review, comparative matrix analysis of enabling and constraining variables, and triangulation across cases.

4. What evidence of learning and system adaptation emerged during repeated remote DQA cycles?
Data sources: Sequential DQA documentation, internal feedback, capacity-building records.
Methods and analytical focus: Content analysis of technical memos and response matrices, applied through process tracing across cycles to capture documentation improvement and institutional learning.

5. How applicable is the Somalia remote DQA model to other fragile contexts?
Data sources: Donor strategy documents, global MEL reports, USAID/UN/World Bank publications.
Methods and analytical focus: Comparative synthesis and strategic distillation using secondary literature, designed to assess generalizability, contextual coherence, and operational relevance in similar fragile environments.

3.2. Data Collection
This research draws on a structured sequence of primary and secondary data sources aligned with the USAID Somalia remote DQA protocol. The approach strategically integrates programmatic documentation, virtual engagements, and institutional records to trace verification logic, data flow, and adaptive behavior across IPs. Each step of the remote DQA process—outlined in Table 3—corresponds to a distinct layer of empirical material.
Table 3. Steps of the remote DQA process and corresponding data sources.

Step 0: Preparation. Primary data sources: Notification letters, indicator selection documents. Secondary data sources: MEL policy briefs, remote monitoring guidelines.
Step 1: Desk review and tool finalization. Primary data sources: Activity Monitoring, Evaluation, and Learning (AMEL) Plans, DQA tools, Indicator Performance Tracking Tables (IPTTs). Secondary data sources: Donor evaluation standards, literature on data verification.
Step 2: Sensitization of USAID staff. Primary data sources: Training materials, participation rosters. Secondary data sources: Reports on stakeholder engagement.
Step 3: Central-level verification. Primary data sources: Data systems, Performance Plan Report (PPR) submissions. Secondary data sources: USAID ADS 201 documentation, comparative DQA reports.
Step 4: Intermediary-level verification. Primary data sources: Aggregation tools, submission records. Secondary data sources: Electronic system reviews.
Step 5: Primary-level verification. Primary data sources: Source documents, disaggregated logs. Secondary data sources: Global studies on verification logic.
Step 6: Analysis and dissemination. Primary data sources: Workshop summaries, preliminary findings memos. Secondary data sources: Cross-case MEL learning resources.
Step 7: Final DQA report. Primary data sources: Consolidated DQA report, correction trackers. Secondary data sources: Program-level data quality benchmarks.

Primary data include documents IPs submitted through the DQA workflow: Performance Indicator Reference Sheets (PIRS), performance reports, structured templates, and technical clarification exchanges. These records allow the investigation to trace how performance indicators were interpreted, how evidence was organized, and how discrepancies were resolved across the verification tiers. Consistent with Bowen, qualitative document analysis supports the exploration of procedural coherence. It also reveals internal accountability in the absence of physical supervision .
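To illustrate how the document trail implied by Table 3 can be tracked step by step, the following minimal Python sketch encodes a few of the workflow steps and flags outstanding primary evidence. It is an illustrative assumption rather than part of the USAID protocol; the class, the abbreviated step encoding, and the submission example are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DQAStep:
    """One step of the remote DQA workflow and the evidence it expects (illustrative only)."""
    number: int
    name: str
    primary_sources: list[str]                        # documents the IP is expected to submit
    secondary_sources: list[str]                      # reference material for reviewers
    received: set[str] = field(default_factory=set)   # documents logged so far

    def missing_primary(self) -> list[str]:
        """Primary documents not yet logged for this step."""
        return [doc for doc in self.primary_sources if doc not in self.received]

# Abbreviated encoding of the Table 3 sequence (Steps 0-2 shown; later steps follow the same pattern).
WORKFLOW = [
    DQAStep(0, "Preparation",
            ["Notification letters", "Indicator selection documents"],
            ["MEL policy briefs", "Remote monitoring guidelines"]),
    DQAStep(1, "Desk review and tool finalization",
            ["AMEL Plans", "DQA tools", "IPTTs"],
            ["Donor evaluation standards", "Literature on data verification"]),
    DQAStep(2, "Sensitization of USAID staff",
            ["Training materials", "Participation rosters"],
            ["Reports on stakeholder engagement"]),
]

# Log a hypothetical submission, then report which primary evidence is still outstanding.
WORKFLOW[1].received.add("AMEL Plans")
for step in WORKFLOW:
    gaps = step.missing_primary()
    status = "complete" if not gaps else "awaiting " + ", ".join(gaps)
    print(f"Step {step.number} ({step.name}): {status}")
```

Any such structure is only a convenience for organizing the evidence trail; the analysis itself relied on the documents listed in Table 3 rather than on automated tracking tools.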
Structured engagement summaries from remote sensitization sessions, partner feedback workshops, and correspondences provide additional layers of interpretive data. These documents clarify how verification standards were internalized and how institutional routines evolved in response. As Armstrong noted, triangulated document-based analysis remains essential when field access is constrained .
Secondary sources expanded the evidence base. The research included peer-reviewed studies, donor publications, and evaluation reports relevant to Somalia and other fragile contexts. The selection process targeted documents published between 2018 and 2020. This timeframe ensures alignment with recent experiences of remote MEL adaptation under operational stress. The research appraised these sources by assessing methodological rigor, contextual relevance, and analytical consistency. This process allowed the identification of documents that provided empirical grounding. It also enabled comparative insight into verification challenges and MEL system performance in fragile environments. Key references, such as those from the United Nations Development Programme (UNDP), contributed to cross-case validation . They enriched the contextual interpretation of the Somalia experience within global accountability debates.
3.3. Data Analysis
The data analysis approach adopted in this research supports the objective of evaluating the resilience and effectiveness of MEL systems under conditions of operational constraint. The strategy integrates thematic coding, case-based synthesis, and structured benchmarking to align findings with the overarching research questions and conceptual framework. The analysis draws from both USAID ADS 201 data quality dimensions and theoretical constructs on institutional adaptation, learning, and procedural coherence .
Primary data, including remote DQA protocols, partner documentation, technical feedback memos, and indicator-linked verification matrices, formed the basis for a qualitative case study synthesis. Benchmarking techniques assessed data against core ADS 201 standards: validity, reliability, precision, timeliness, and integrity . This ensured consistency with global monitoring norms. It also adapted analytical logic to Somalia’s remote oversight environment.
Thematic analysis was applied to partners’ internal workflows and documentation to identify patterns of institutional adaptation. This included mapping submission cycles and feedback loops. It also examined procedural alignment between internal systems and verification logic. Structured content from templates and DQA response matrices was coded to trace process improvement across DQA sessions and to evaluate institutional learning over time .
To understand operational enablers and constraints, pattern-tracing techniques were combined with matrix displays. This facilitated comparison across partners and helped identify organizational drivers such as review mechanisms, understanding of indicator definitions, and responsiveness to DQA queries. Triangulation across documentation, internal notes, and secondary literature further validated findings .
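As a minimal sketch of how such coding, benchmarking, and matrix displays can fit together (the partner names, ratings, and ordinal scale below are hypothetical and not drawn from the Somalia DQA records), coded excerpts tagged by ADS 201 dimension can be aggregated into a partner-by-dimension matrix:

```python
from collections import defaultdict

# Coding frame: the five ADS 201 data quality dimensions.
DIMENSIONS = ["validity", "reliability", "precision", "timeliness", "integrity"]

# Hypothetical coded excerpts: (partner, dimension, ordinal rating),
# with 2 = meets standard, 1 = partially meets, 0 = does not meet.
coded_excerpts = [
    ("Partner A", "validity", 2),
    ("Partner A", "integrity", 1),
    ("Partner B", "validity", 1),
    ("Partner B", "timeliness", 0),
    ("Partner B", "timeliness", 2),
]

def benchmark(excerpts):
    """Average ratings per partner and dimension to build a comparison matrix."""
    ratings = defaultdict(list)
    for partner, dimension, score in excerpts:
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        ratings[(partner, dimension)].append(score)
    partners = sorted({partner for partner, _, _ in excerpts})
    return {
        partner: {
            dim: round(sum(values) / len(values), 1) if (values := ratings[(partner, dim)]) else None
            for dim in DIMENSIONS
        }
        for partner in partners
    }

# Print one row per partner; None marks dimensions with no coded evidence.
for partner, row in benchmark(coded_excerpts).items():
    print(partner, row)
```

In practice, such a matrix would only structure the qualitative evidence; interpretation still rested on the triangulation and reflexive validation steps described in this section.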
The analytical protocol reflected established guidance on credible qualitative document analysis. It applied clear coding frameworks and incorporated reflexive validation to ensure interpretive rigor . Structured triangulation enhanced analytical robustness and responded to the limitations often associated with document-only research. These layers of interpretation enabled institutional behaviors to emerge from evidence patterns rather than from anecdotal narratives.
The final synthesis generated contextual inference to examine how remote verification aligned with institutional resilience. Findings were positioned within wider debates on adaptive MEL systems and procedural accountability in fragile settings. The research advances methodological development through validation of documentation-based approaches to assess MEL functionality and oversight logic under constrained conditions.
3.4. Ethical Considerations
This research complied fully with USAID’s data security and confidentiality standards. All program documents and records remained protected through encryption, secure storage, and restricted access. These measures safeguarded sensitive information. The research relied exclusively on retrospective analysis of existing documentation. It did not involve human subjects, direct interviews, or primary data collection. Consequently, no institutional ethics review board (IRB) approval was required. The methodology conformed to accepted standards for documentary and policy research and maintained a low ethical risk profile throughout the process.
3.5. Limitations
This research draws exclusively on document-based evidence and does not include direct stakeholder interviews or field-level interaction. While the analysis incorporates diverse data types, this approach limits the depth of contextual insight that first-hand accounts could provide. As Yin notes, triangulation across methods enhances credibility in case study research . The absence of interview data constrains the ability to explore perceptions, informal practices, and operational nuances that shaped the implementation of remote DQA.
Document-based analysis also entails risks of partial reporting and institutional bias. As Bowen and O’Leary caution, documents reflect specific institutional purposes and may omit dissenting views, undocumented practices, or operational irregularities . Official reports often highlight procedural compliance and understate the challenges encountered during implementation. These limitations could skew the analysis toward formalized success narratives and reduce visibility into practical constraints faced by field staff and partners.
Despite these constraints, the research applies structured triangulation and rigorous comparative analysis across document sets. Gregar and Matsiliza affirm that such approaches, when systematically applied, can yield credible insights . Although the absence of stakeholder voices narrows interpretive depth, the research design compensates with methodical rigor and relevance to oversight systems in fragile contexts.
4. Results
The analysis draws from verified performance dimensions and triangulated documentation to reveal patterns in partner compliance, institutional behavior, and the robustness of the remote verification logic (Table 4). The insights align with the broader research aim of evaluating institutional mechanisms that sustain data quality in fragile settings .
Table 4. Remote DQA Results Summary Matrix.

Indicator Alignment
Observed Performance: Most partners adhered to standardized templates and defined indicators. Some custom indicators lacked clarity and consistent interpretation.
Variation Across Partners: High alignment for economic indicators. Governance and custom indicators showed inconsistencies in definition and documentation.
Key Evidence: PIRS, AMELPs, DQA tools.

Source Documentation
Observed Performance: Traceability remained uneven. Several submissions lacked full metadata or used partially digitized formats.
Variation Across Partners: Partners with centralized digital systems ensured better documentation. Others relied on incomplete or non-digitized archives.
Key Evidence: Submission matrices, verification logs.

Data Consistency
Observed Performance: Moderate inconsistencies appeared across reporting cycles, largely due to poor version control or misaligned reporting tools.
Variation Across Partners: Stronger consistency emerged among partners with internal data audits. Others submitted conflicting or outdated values.
Key Evidence: IPTTs, quarterly reports, change logs.

Responsiveness to Queries
Observed Performance: Most partners responded within deadlines. Response quality ranged from comprehensive with audit trails to fragmented replies lacking supporting details.
Variation Across Partners: Well-prepared partners maintained clear response logs. Less prepared organizations gave vague or incomplete explanations.
Key Evidence: Clarification emails, response trackers, technical notes.

Process Adaptation
Observed Performance: Some institutions revised MEL workflows and adapted tools to remote verification protocols. Others continued pre-pandemic practices without adjustments.
Variation Across Partners: Adaptive partners updated AMELPs and MEL instruments. Others applied improvised fixes or lacked process revision.
Key Evidence: Revised AMELPs, internal communications.

Verification Confidence
Observed Performance: Confidence improved when structured documentation supported each indicator. Confidence declined when submissions lacked coherence or key attachments.
Variation Across Partners: Higher confidence in organizations with strong M&E culture. Lower where external technical support was required to clarify submissions.
Key Evidence: DQA ratings, partner feedback summaries, MEL dashboards.

‘Observed Performance’ describes aggregate trends across the portfolio, while ‘Variation Across Partners’ reflects differences in institutional responses and outcomes documented among IPs.
Most partners used USAID-approved templates and adhered to standard definitions for performance indicators, especially those related to economic growth and service delivery. However, custom indicators, particularly in governance-related interventions, exhibited inconsistent alignment. Partners demonstrated varying capacities to interpret definitions precisely. Gaps emerged in justification for reported numerators and denominators. According to Batini et al., such variability reflects weaknesses in metadata documentation and standardization practices, which are vital for remote assurance . The DQA teams observed that while templates helped enforce structure, alignment success relied heavily on the partner’s familiarity with PIRS and experience with the USAID MEL framework.
Verification of data sources revealed mixed results. Several partners did not establish traceable chains that linked reported values to original documentation. Stronger performers leveraged centralized, digitized archives and submitted metadata with timestamps and cross-references. Others submitted fragmented scanned files without proper naming conventions, which reduced traceability. This echoes Albrecht’s findings that data quality management in remote contexts depends on digital infrastructure, version control, and metadata clarity . Submission matrices and verification logs provided evidence that standardized source documentation remains a weak point under remote formats.
Moderate inconsistencies were detected across quarterly reports, particularly for multi-period indicators. While some partners employed internal review systems, others lacked mechanisms to reconcile changes between submissions. Discrepancies in values across reports, sometimes without clear rationale, indicate the absence of robust quality assurance protocols. Studies by Cai and Zhu and Stvilia et al. emphasize that consistency is an outcome of iterative review, version control, and structured feedback. Partners that embedded these practices presented fewer anomalies. Change logs and revised DQA checklists substantiated this variation in consistency.
Nearly all partners responded to technical clarification requests, though the substance and promptness varied. Some partners replied with clear, well-referenced documentation supported by audit trails. Others submitted incomplete responses or required multiple follow-ups. As noted by Woodall et al., the quality of response during verification affects the reliability of the entire data quality process . Clarification logs and internal notes revealed that structured response systems correlated with stronger verification outcomes. This responsiveness also affected USAID reviewers’ confidence in the integrity of submitted data.
Several partners demonstrated institutional learning and adaptation in response to the remote DQA format. Evidence included updated AMEL plans, use of version control tools, and integration of verification steps into internal reporting cycles. Others maintained pre-COVID procedures and used informal or ad hoc routines. Adaptive partners incorporated feedback loops that led to improved alignment with DQA criteria. This distinction supports the argument made by Hernandez et al. that adaptive MEL systems build resilience through learning . Adaptations documented in revised plans and internal memos confirmed the presence of organizational divergence across partner institutions.
Verification ratings assigned by USAID teams indicated greater confidence in partners with embedded routines, structured submissions, and consistent documentation chains. Those who demonstrated internal MEL protocols received higher assessments, whereas others required remediation and closer follow-up. This pattern aligns with Wilkin et al., who emphasized that accountability structures reinforce trust in remote performance assessments . Internal scoring sheets and feedback summaries showed that confidence levels closely tracked documentation quality and the partner’s institutional culture around verification.
The effectiveness of the remote DQA was closely tied to the degree of collaboration between lead IPs and their sub-implementing partners (Sub-IPs). Partners that fostered coordinated workflows and jointly developed verification responses demonstrated better performance across data quality dimensions. These arrangements enabled Sub-IPs, often constrained by weaker digital infrastructure and limited technical resources, to align documentation with USAID standards. IPs that facilitated shared templates, co-developed AMEL plans, and provided structured feedback mechanisms supported better partner performance. Their submissions showed higher consistency, improved traceability, and stronger alignment with PIRS definitions. In these cases, the remote DQA process functioned not as a unilateral compliance exercise, but as a collaborative verification cycle sustained by internal communication structures. This echoes observations by Quinn et al. and Hilty et al. , who underscored that coherent internal coordination enhances the credibility of remote monitoring systems.
In contrast, DQA processes proved less effective when Sub-IPs operated independently or without technical reinforcement from lead IPs. Several Sub-IPs submitted incomplete or fragmented documentation, often lacking metadata, version control, or structured explanations of indicator logic. Where IPs failed to institutionalize verification support, field-level constraints remained unaddressed. These included staff turnover, limited connectivity, and insufficient training. These weaknesses aligned with findings from Tran et al. and Larson et al. , who emphasized that effective remote engagement requires clear protocols and communicative scaffolding to overcome geographic and systemic barriers. Document reviews confirmed that institutional fragmentation undermined traceability and verification confidence. These challenges were most evident where Sub-IPs lacked familiarity with DQA standards or faced last-minute compliance pressures.
Overall, the research confirmed significant variation in partners’ ability to maintain validity and reliability of reported data. High performers included complete metadata, explanations for numerator-denominator logic, and disaggregation. Others showed weaknesses in traceability or provided insufficient justification for changes. These findings mirror concerns raised in previous literature on remote monitoring in fragile contexts . Document chains can sustain data integrity only when partners uphold disciplined internal practices under remote scrutiny.
Table 5. Synthesis of remote DQA key findings and implications.

Data Quality
Key Finding: Performance varied. Strong performers maintained metadata; others missed disaggregations and lacked justifications.
Implication: Structured chains can uphold quality if internal practices remain disciplined under remote conditions.

Institutional Adaptation
Key Finding: Adaptive organizations embedded DQA logic into workflows; others met only the minimum requirements.
Implication: The presence of standards holds greater importance than the mere availability of digital tools.

Stakeholder Engagement
Key Finding: Structured submissions enabled productive engagement; unstructured documentation weakened dialogue.
Implication: Clear submission protocols and early preparation improve feedback uptake.

Adaptive Management Integration
Key Finding: DQA feedback triggered AMELP revisions and system updates in several cases, reflecting internal learning loops.
Implication: Iterative remote DQAs can foster adaptive management in fragile settings.

Remote vs. In-Person Verification
Key Finding: Remote DQAs maintained continuity but could not replicate field-level insights or context-specific validation.
Implication: Remote verification is feasible but insufficient on its own; hybrid models offer more robust oversight.

Performance in institutional adaptation varied widely. Organizations that embedded DQA logic into existing workflows managed the transition more effectively. Others treated the exercise as procedural compliance and produced superficial alignment without altering internal systems. Evidence from this research supports the observation by Madon et al. that institutionalization, rather than digital access alone, determines success under constrained conditions . Adaptive institutions demonstrated resilience and integrated verification within their decision-making processes.
The quality of engagement during remote consultations depended on the clarity and structure of submitted documentation. Partners who prepared comprehensive evidence facilitated constructive dialogue with DQA reviewers. In contrast, vague or incomplete submissions hindered exchanges and required repeated clarifications. These findings align with Tran et al., who noted that effective remote engagement hinges on preparation and shared understanding of protocols .
In several cases, DQA results triggered revisions in performance frameworks, data flow systems, and AMEL plans. Partners that showed evidence of learning adjusted indicators or enhanced data documentation systems. This supports the argument of Scarlett and Dutra et al. that iterative monitoring mechanisms contribute to adaptive program delivery. Where applied consistently, remote DQAs served as feedback instruments that strengthened evidence-based planning.
The research found that while remote verification enabled continuity of oversight, it could not replicate field-level context and observational depth. Remote DQAs depended heavily on documentation structure, which varied across partners. The absence of direct observation created blind spots in operational fidelity. As noted by Bastola et al. and Basha et al., remote assessments are feasible, but their reliability hinges on standardized tools and institutional readiness . Hybrid models may offer a more comprehensive solution in fragile settings.
5. Discussion
This research examined the extent to which USAID Somalia’s remote DQA approach preserved oversight functions in a fragile operational context. The results highlighted institutional variation in adapting to remote verification modalities and provided empirical insights into verification standards, adaptive management, and stakeholder engagement. The findings align with broader research priorities in MEL systems, particularly the capacity of development institutions to ensure data integrity amid constraints. This discussion explores six interrelated themes derived from the results: data credibility under remote conditions, IP–Sub-IP dynamics, institutional readiness, stakeholder behavior, adaptive learning systems, and verification trade-offs between remote and in-person modalities.
5.1. Credibility of Data in Remote Conditions
The challenge of preserving data quality in fragile contexts without physical verification remains unresolved in development programs. The results confirmed that many partners followed indicator definitions and used USAID templates, yet traceability gaps and inconsistent metadata undermined full compliance. Such inconsistencies align with prior findings by Batini et al., who emphasized that data validity depends on both technical structure and organizational culture . High-performing partners ensured alignment with PIRS and reflected the value of standardized documentation. However, the presence of incomplete source documentation and version control failures mirrored challenges identified by Cai and Zhu. They warned that decentralized reporting often erodes data integrity when internal quality assurance systems remain weak .
The variability in partner performance suggests that even standardized templates cannot compensate for gaps in capacity or institutional discipline. Albrecht observed that traceability depends not only on technological tools but also on embedded data governance frameworks . USAID’s remote approach revealed that where partners institutionalized audit trails and systematic filing, remote verification achieved credibility. In contrast, weaker approaches required repeated clarification and external assistance, which echoes Herrera and Kapur’s findings that incentives and capabilities jointly shape data quality outcomes .
5.2. IP–Sub-IP Dynamics as a Strategic Determinant
The quality of coordination between IPs and Sub-IPs proved critical to ensuring coherence, traceability, and responsiveness within remote DQA processes. Organizations that fostered integrated workflows, standardized reporting, and joint review mechanisms across the IP–Sub-IP structure were better equipped to meet verification standards. Lead implementers who actively supported Sub-IPs with adapted tools, aligned protocols, and shared routines saw more complete and traceable submissions.
In contrast, weak performance often reflected breakdowns between lead implementers and their Sub-IPs. Centralized protocol enforcement without sufficient technical or procedural reinforcement at the Sub-IP level resulted in fragmented submissions. These included missing metadata and poor alignment with PIRS definitions. Several Sub-IPs lacked the digital readiness or institutional capacity to manage version control, generate audit trails, or respond to verification queries with precision. These deficiencies cannot be attributed to tool absence alone. They signal the absence of deliberate support structures to bridge infrastructure gaps, staff turnover, and internal skill asymmetries.
These findings reinforce Larson et al.’s argument that institutional distance, spatial or structural, requires deliberate frameworks for coherence and engagement . The Somalia experience affirmed that remote DQAs struggle in the absence of such connective infrastructure. Effective IP–Sub-IP integration is not only a technical requirement but a strategic condition for credible data oversight.
5.3. Institutional Coherence and Readiness as Determinants of Remote DQA Performance
The findings confirmed that institutional coherence across implementation tiers was critical to remote DQA effectiveness. High-performing partners treated the verification process as a shared institutional responsibility. Lead implementers provided active support to Sub-IPs through joint tool design, coordinated workflows, and consistent documentation protocols. These arrangements enabled systematic responses to DQA protocols and reinforced accountability across the chain of implementation.
Where partners embedded shared routines and internalized performance standards, documentation showed greater clarity, traceability, and responsiveness. This outcome supports the conclusions of Ba and Barclay, who emphasized that institutional accountability arises from embedded systems rather than isolated technical compliance . The Somalia case further validates Woodall et al. and Wilkin et al., who demonstrated that distributed oversight only succeeds when engagement is structured, transparent, and underpinned by mutual procedural literacy .
Institutional readiness further shaped whether partners treated DQA as a learning opportunity or a compliance exercise. Adaptive organizations revised AMEL plans, updated documentation workflows, and incorporated DQA feedback into their performance systems. As Crowe et al. and Guba and Lincoln highlighted, institutional learning arises when accountability norms inform operational decision-making . Conversely, organizations that relied on ad hoc fixes or viewed DQA as an external requirement displayed signs of procedural stagnation. Their documentation lacked coherence, and engagement often required multiple follow-ups. The Somalia case functioned as a real-time institutional stress test. The case revealed both resilience and fragility in MEL systems operating under constraint. It confirmed Ba’s view that MEL effectiveness depends on institutional routines that match verification procedures with internal decision-making .
5.4. Stakeholder Engagement and Communication Logic
The success of stakeholder engagement in the remote DQA model hinged on the clarity, completeness, and structure of documentation submissions. The findings showed that meaningful remote exchanges were possible only when partners prepared standardized evidence that facilitated constructive dialogue. Where partners submitted fragmented or unclear files, exchanges stalled or required repeated clarification. This aligns with the work of Wilkin et al., who showed that effective digital engagement depends on transparency, clarity in documentation, and shared understanding of expectations .
Remote environments do not allow for physical cues or spontaneous verification, which typically mitigate ambiguity in in-person settings. Consequently, stakeholders had to rely entirely on the explanatory power of submitted documents. As Barclay observed, digital accountability frameworks depend on proactive information structures that enable mutual scrutiny and assurance . The Somalia case confirms this principle. Only when partners adhered to structured chains of custody for data and metadata could reviewers trace reporting logic, provide accurate feedback, and conduct verification with confidence.
Additionally, this research reinforces the conclusions of Quinn et al., who noted that virtual stakeholder interactions succeed when supported by standardized protocols, advance notification, and well-curated communication tools . In the Somalia remote DQA, structured templates, predefined PIRS, and consistent guidance documents acted as mediators of understanding between reviewers and IPs. These instruments partially compensated for the absence of field presence and allowed a shared technical language to emerge. Larson et al. reinforced this view and emphasized that stakeholder engagement in remote contexts depends on structured frameworks that reduce spatial separation and overcome cognitive gaps .
Yet, the findings also revealed limitations. Several partners lacked the digital readiness and institutional literacy to construct submissions that anticipated review needs. In these instances, DQA teams had to invest additional time in clarification loops, which undermined the efficiency of the process. This reflects challenges identified by Tran et al., who showed that in fragile settings, disparities in digital fluency can fragment engagement and entrench information asymmetries . The remote model places disproportionate demands on less-prepared institutions and calls for deliberate capacity investments to ensure equitable participation.
Overall, remote verification models must prioritize not just the technical infrastructure for submission, but also the communicative logic embedded in interactions. A shared logic of engagement, sustained by standardization and transparency, emerged as essential for the remote DQA's effectiveness in constrained contexts.
5.5. Adaptive Management Integration and Institutional Learning
The Somalia remote DQA experience revealed critical insights into how performance verification processes can reinforce adaptive management, especially in fragile settings. Findings indicated that several partners responded to DQA feedback by refining AMEL Plans, strengthening documentation protocols, and adjusting indicator tracking systems. These adaptations suggest that the DQA process did not function as a one-time audit but as a mechanism of iterative organizational learning. This dynamic aligns with the framework of adaptive learning articulated by Prieto-Martin et al., which emphasizes the feedback loop between data systems and institutional response strategies .
Structured DQAs triggered reflection on internal monitoring procedures and encouraged actors to identify performance gaps. As partners received queries, responded to verification demands, and incorporated feedback into operational systems, they demonstrated the capacity to evolve monitoring strategies without direct field engagement. These learning behaviors mirror the “reflexivity” component of MEL system effectiveness described by Ba, wherein institutions not only collect data but use verification moments to reshape their internal logic and improve future performance .
Furthermore, the research observed that partners with prior experience in iterative monitoring frameworks were more likely to embed feedback into ongoing program adjustments. Dutra et al. emphasize that institutional readiness and structured learning cultures increase the likelihood that organizations will use evaluative processes as inputs for reform . In Somalia, those partners revised documentation formats, improved indicator metadata, and adjusted workflows. They used DQA not only to support accountability but also to advance strategic refinement. In contrast, other partners approached the process as a compliance task, offered minimal responses, and did not revise their MEL protocols. This dichotomy echoes the distinction proposed by Aceves-Bueno et al. between adaptive and static systems in fragile governance contexts .
Notably, the remote format appeared to heighten the need for such adaptations. Without site visits to identify and discuss operational weaknesses, partners had to anticipate scrutiny and institutionalize response mechanisms upstream. This proactive logic is central to the success of remote assurance processes. In such settings, verification relies less on corrective action during fieldwork and more on built-in responsiveness to structured oversight. Kagoya and Kibuule reached similar conclusions in their study of data assurance in Ugandan health systems [56]. They observed improved performance where institutions internalized verification procedures rather than treating them as external impositions.
Despite these findings, the research also identifies constraints. Adaptive management gains were uneven and depended on prior institutional strength. Organizations lacking a robust MEL culture struggled to translate DQA insights into meaningful revisions. This gap underscores the argument advanced by Stvilia et al. that learning from assurance processes depends not on the availability of tools alone but on leadership commitment and internal data literacy [13].
In sum, remote DQAs can catalyze adaptive management when institutions possess or build the internal capability to translate feedback into operational practice. This research supports the emerging consensus that performance verification and program learning are mutually reinforcing under the right enabling conditions.
5.6. Remote Versus In-person Verification: Potentials and Limits
The USAID Somalia remote DQA experience provided a rare opportunity to examine how remote mechanisms perform in the absence of traditional field-level oversight. The investigation did not merely evaluate technological feasibility. It interrogated the capacity of digital workflows to replicate the accountability functions typically enabled by direct field interaction. While the evidence shows that remote DQAs preserved structured verification logic and ensured continuity during a crisis, the findings reveal significant limitations when compared to in-person approaches.
Remote DQAs succeeded in applying core principles of the USAID ADS 201 data quality standards. Structured templates, pre-engagement communication, and clearly defined protocols established a procedural backbone. This framework facilitated systematic document submission and review. Similar successes are highlighted by Basha et al., who demonstrated that remote data workflows can yield credible outputs when managed with rigor and clarity [52]. The Somalia experience confirmed this conclusion: partners who followed USAID guidelines and upheld metadata discipline enabled reviewers to assess indicator alignment and data traceability with confidence.
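As a concrete illustration of this procedural backbone, the sketch below aggregates reviewer ratings across the five ADS 201 data quality dimensions (validity, reliability, timeliness, precision, and integrity). The numeric scale, the pass mark, and the summary logic are assumptions introduced for clarity, not a prescribed USAID scoring method.

```python
# Illustrative sketch only (assumed 0-3 scale and pass mark): summarizing
# reviewer ratings across the five ADS 201 data quality dimensions.
from typing import Dict

DIMENSIONS = ("validity", "reliability", "timeliness", "precision", "integrity")

def summarize_ratings(ratings: Dict[str, int], pass_mark: int = 2) -> Dict[str, object]:
    """Return the dimensions needing follow-up and a simple overall flag."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"Ratings missing for: {missing}")
    weak = [d for d in DIMENSIONS if ratings[d] < pass_mark]
    return {"weak_dimensions": weak, "meets_baseline": not weak}
```

Under these assumptions, a submission rated low on reliability and precision, for example, would be flagged for targeted follow-up rather than a blanket query.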
However, the absence of physical verification created blind spots. For example, DQA teams could not validate field-level operational realities such as data collection practices in remote areas, storage conditions for source documents, or the functionality of regional and local monitoring systems. These contextual dimensions are routinely observed during site visits; in remote DQAs, they were either inferred from secondary evidence or omitted. As observed by Bastola et al., digital methods can achieve reliability in specific areas [51]. However, they fall short of capturing the environmental and human dynamics that often inform qualitative assessments of data credibility.
Furthermore, the Somalia case illustrated that remote methods depend heavily on partner capacity. Some organizations lacked internal quality assurance systems; others misunderstood verification expectations. In these cases, the remote DQA process struggled to produce reliable assessments. Without physical proximity to guide or mentor field staff, the process relied on the ability of IPs to self-diagnose and improve practices. As Tran et al. argued, remote stakeholder engagement presumes a level of institutional maturity that cannot be taken for granted in fragile settings [45]. Consequently, remote verification may reproduce or even widen performance gaps if unaccompanied by tailored support mechanisms.
There is also the challenge of verifying context-specific assumptions embedded in indicator narratives. For example, reported achievements in policy advocacy or community engagement often rest on subjective interpretations of influence and reach. Partners submitted supporting documents, but assessors could not interview local actors or directly observe institutional dynamics, which reduced their ability to triangulate claims. Similar concerns are raised by Roberts et al., who found that qualitative dimensions of program performance are more difficult to assess remotely, especially where documentary evidence is sparse or strategically curated [57].
Nevertheless, the research confirms that remote verification can serve as a valuable interim solution in settings where access remains restricted. When combined with structured protocols, iterative feedback, and strong partner coordination, it can uphold a baseline of accountability. As noted by Barclay, remote systems can generate scrutability and assurance when embedded in transparent workflows and supported by adaptive institutions [47]. The Somalia remote DQA illustrates the effectiveness of disciplined remote review: it enabled the identification of data inconsistencies, prompted corrective action, and preserved essential monitoring functions under severe constraints.
Yet this format should not be viewed as a long-term substitute for in-person verification; rather, it complements it. Hybrid models that merge remote protocols with targeted field validation represent the most promising path forward. This layered approach aligns with the recommendations of Greenhalgh et al., who advocate complex adaptive systems that combine digital convenience with a grounded understanding of context [58].
In conclusion, the evidence shows that remote DQAs are both technically feasible and strategically useful in crisis settings. However, they cannot match the diagnostic depth of on-site verification. Institutional decision-makers should treat remote MEL methods as a complementary tool, integrated within a flexible and context-sensitive assurance strategy.
6. Conclusions
The remote DQA conducted in Somalia demonstrated its value as a viable approach to maintaining data integrity and oversight in fragile and non-permissive settings. It enabled USAID to sustain verification responsibilities despite travel restrictions and field access constraints during the COVID-19 pandemic. Structured document reviews, combined with virtual technical exchanges, helped identify gaps, assess indicator compliance, and uphold performance standards. As Basha et al. emphasize, remote methods can ensure continuity and credibility when field engagement is not feasible [52].
The DQA results underlined both the strengths and limits of remote processes. Institutional engagement improved where partners followed structured guidance and maintained metadata and source traceability. However, the absence of direct observation limited the depth of contextual validation. Remote formats alone could not detect field-level operational barriers or informal data-handling practices. As Woodall et al. and Puttkammer et al. argue, remote mechanisms require complementary systems to capture the complexity of field realities [21, 59].
To enhance future remote DQA effectiveness, development partners should invest in partner capacity to manage documentation and apply verification standards. Standard templates and digital protocols must be tailored to remote environments. Weiskopf et al. and Kim et al. confirm that structured digital tools and training improve consistency and reduce error rates in remote evaluations [60, 22].
Effective coordination between IPs and Sub-IPs constitutes a structural requirement for credible MEL system performance in fragile contexts. It secures data integrity, enables consistent application of verification standards, and reinforces oversight across implementation tiers. Institutions that embedded this alignment demonstrated stronger compliance and resilience. Donors and program leaders must prioritize formal mechanisms that define roles, establish accountability pathways, and ensure vertical coherence. Such integration allows remote verification models to function reliably amid institutional constraints and operational uncertainty.
The Somalia case demonstrates that successful remote DQAs depend not only on technical standardization but also on the institutional relationships that mediate implementation. Lead partners must structure meaningful collaboration with Sub-IPs to align workflows, co-develop verification logic, and embed mutual accountability. As Tran et al. and Quinn et al. highlighted, equitable participation under remote oversight models requires investments in communication infrastructure and stakeholder coordination [45, 43]. Development agencies should adopt hybrid MEL frameworks that integrate technical protocols with robust intra-organizational support systems. This dual emphasis ensures that local actors remain fully engaged in the transition to digital verification.
Hybrid DQA models provide a strategic opportunity to balance cost-effectiveness and contextual rigor. Targeted field visits, aligned with remote data reviews, can verify assumptions, identify systemic risks, and build institutional trust. Zuniga-Teran et al. note that combining remote and in-person tools improves both engagement and learning [61]. In contexts like Somalia, hybrid models can reinforce credibility without undermining safety or efficiency.
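One way to operationalize this pairing is a simple risk-based triage in which remote review signals determine which sites receive targeted field validation. The sketch below is hypothetical; the signals, weights, and selection rule are assumptions rather than an established donor procedure.

```python
# Illustrative sketch only (assumed signals and weights): ranking sites so that
# a limited field-visit budget goes where remote review left the most doubt.
from typing import Dict, List

def rank_sites_for_field_validation(sites: List[Dict], top_n: int = 3) -> List[Dict]:
    """Each site dict carries simple remote-review signals, e.g.
    {"name": "Site A", "doc_gaps": 2, "weak_dimensions": 1, "prior_findings": 0}."""
    def risk(site: Dict) -> int:
        return (2 * site.get("doc_gaps", 0)
                + 2 * site.get("weak_dimensions", 0)
                + site.get("prior_findings", 0))
    return sorted(sites, key=risk, reverse=True)[:top_n]
```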
Development organizations should establish remote and hybrid DQAs as standard modalities in fragile and high-risk contexts. This requires formalization of digital protocols, enhancement of partner-facing infrastructure, and creation of feedback loops that reinforce accountability and institutional learning. Agencies should position these models as core pillars of MEL system resilience. They should embed these models within routine oversight frameworks and treat them as permanent institutional mechanisms, not as temporary responses.
Future studies should conduct comparative analyses of hybrid verification models, participatory approaches, and cost-effectiveness dimensions of remote systems. These research avenues would deepen understanding of how MEL systems withstand operational stress while maintaining data integrity and stakeholder trust. Such evidence can guide the design of verification systems that balance flexibility, rigor, and contextual responsiveness. This balance is essential to ensure effectiveness in demanding operational environments.
Abbreviations

DQA: Data Quality Assessment
ADS: Automated Directives System
AMEL: Activity Monitoring, Evaluation, and Learning
IPs: Implementing Partners
IPTT: Indicator Performance Tracking Table
MEL: Monitoring, Evaluation, and Learning
PIRS: Performance Indicator Reference Sheets
PPR: Performance Plan Report
Sub-IPs: Sub-Implementing Partners
UNDP: United Nations Development Programme
USAID: United States Agency for International Development
Acknowledgments
We are grateful to USAID.
Author Contributions
Abdourahmane Ba: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Tom Muga: Conceptualization, Investigation, Methodology, Project administration, Supervision, Validation, Writing – review & editing.
Patrick Okwarah: Conceptualization, Data curation, Formal Analysis, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing.
Mohamed Ali: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing.
Data Availability Statement
The data is available from the authors upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] Hilhorst, D., & Mena, R. (2021). When Covid‐19 meets conflict: Politics of the pandemic response in fragile and conflict‐affected states. Disasters, 45(S1), S126–S147.
[2] Hassnain, H., Kelly, L., & Somma, S. (Eds.). (2021). Evaluation in Contexts of Fragility, Conflict and Violence: Guidance from Global Evaluation Practitioners. Exeter, UK: IDEAS.
[3] Rodo, M., Singh, L., Russell, N., & Singh, N. S. (2022). A mixed methods study to assess the impact of COVID-19 on maternal, newborn, child health and nutrition in fragile and conflict-affected settings. Conflict and health, 16(1), 30.
[4] Kelly, L. M., Goodall, J., & Lombardi, L. (2022). Developing a monitoring and evaluation framework in a humanitarian non-profit organisation using agile methodology. Disaster Prevention and Management: An International Journal, 31(5), 536-549.
[5] Ba, A. (2021). How to measure monitoring and evaluation system effectiveness? African Evaluation Journal, 9(1), a553.
[6] Okhmatovskiy, I., & David, R. J. (2012). Setting your own standards: Internal corporate governance codes as a response to institutional pressure. Organization Science, 23(1), 155-176.
[7] USAID. (2020). Automated Directives System (ADS) Chapter 201: Operational policy for program cycle. United States Agency for International Development.
[8] White, L., Lockett, A., Currie, G., & Hayton, J. (2021). Hybrid context, management practices and organizational performance: A configurational approach. Journal of Management Studies, 58(3), 718-748.
[9] Kabonga, I. (2018). Principles and practice of monitoring and evaluation: A paraphernalia for effective development. Africanus: Journal of Development Studies, 48(2), 21 pages.
[10] Silva, V., Akkar, S., Baker, J., Bazzurro, P., Castro, J. M., Crowley, H., & Vamvatsikos, D. (2019). Current challenges and future trends in analytical fragility and vulnerability modeling. Earthquake Spectra, 35(4), 1927-1952.
[11] Canter, L., & Atkinson, S. F. (2010). Adaptive management with integrated decision making: an emerging tool for cumulative effects management. Impact Assessment and Project Appraisal, 28(4), 287-297.
[12] Albrecht, R. (2021). A Framework for Data Quality Management in the Delivery & Consultancy of CRM Platforms (Master's thesis).
[13] Stvilia, B., Pang, Y., Lee, D. J., & Gunaydin, F. (2025). Data quality assurance practices in research data repositories—A systematic literature review. An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 76(1), 238-261.
[14] Biagi, V., & Russo, A. (2022). Data Model Design to Support Data-Driven IT Governance Implementation. Technologies, 10, 106.
[15] Al-Qadi, M. M. M. (2023). Advancing water resources management in arid regions through stakeholder engagement, digitalization, and policy integration: Jordan as a case study (Doctoral dissertation, Technische Universität München).
[16] Cai, L., & Zhu, Y. (2015). The challenges of data quality and data quality assessment in the big data era. Data Science Journal, 14, 2.
[17] Carment, D., Muñoz, K., & Samy, Y. (2020). Fragile and conflict-affected states in the age of COVID-19.
[18] Ibrahim, A. M., Gusau, A. L., & Uba, S. (2022). Proposing Internet-driven alternative pedagogical system for use in teaching and learning during and beyond the COVID-19 pandemic. International Journal of Media and Information Literacy, 7(1), 118-131.
[19] Batini, C., Cappiello, C., Francalanci, C., & Maurino, A. (2009). Methodologies for data quality assessment and improvement. ACM computing surveys (CSUR), 41(3), 1-52.
[20] Dutra, L. X., Ellis, N., Perez, P., Dichmont, C. M., De La Mare, W., & Boschetti, F. (2014). Drivers influencing adaptive management: a retrospective evaluation of water quality decisions in South East Queensland (Australia). Ambio, 43, 1069-1081.
[21] Woodall, P., Borek, A., & Parlikad, A. K. (2013). Data quality assessment: the hybrid approach. Information & management, 50(7), 369-382.
[22] Kim, S., Pérez-Castillo, R., Caballero, I., & Lee, D. (2022). Organizational process maturity model for IoT data quality management. Journal of Industrial Information Integration, 26, 100256.
[23] Saleh, F. I. M., & Karia, N. (2024). Value-driven Management for International Development and Aid Projects. Springer.
[24] Yin, R. K. (2018). Case study research and applications: Design and methods. SAGE Publications.
[25] Crowe, M., Inder, M., & Porter, R. (2015). Conducting qualitative research in mental health: Thematic and content analyses. Australian & New Zealand Journal of Psychiatry, 49(7), 616-623.
[26] Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. Handbook of qualitative research, 2(163-194), 105.
[27] Silverman, D. (2013). Doing qualitative research: A practical handbook. SAGE Publications.
[28] Armstrong, C. (2021). Key methods used in qualitative document analysis. OSF Preprints, 1(9).
[29] Morgan, H. (2022). Conducting a qualitative document analysis. The qualitative report, 27(1), 64-77.
[30] Wood, L. M., Sebar, B., & Vecchio, N. (2020). Application of rigour and credibility in qualitative document analysis: Lessons learnt from a case study. The qualitative report, 25(2), 456-470.
[31] Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative research journal, 9(2), 27-40.
[32] Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & health sciences, 15(3), 398-405.
[33] Pianese, T., Errichiello, L., & da Cunha, J. V. (2023). Organizational control in the context of remote working: A synthesis of empirical findings and a research agenda. European Management Review, 20(2), 326-345.
[34] Cho, J. Y., & Lee, E. H. (2014). Reducing confusion about grounded theory and qualitative content analysis: Similarities and differences. Qualitative report, 19(32).
[35] United Nations Development Programme (UNDP). (2021). Guidelines for remote monitoring in fragile contexts. Available at:
[36] Neuendorf, K. A. (2018). Content analysis and thematic analysis. In Advanced research methods for applied psychology (pp. 211-223). Routledge.
[37] O’Leary, Z. (2004). The essential guide to doing research. SAGE Publications.
[38] Gregar, J. (2023). Research design (qualitative, quantitative and mixed methods approaches). Research Design, 8.
[39] Matsiliza, N. S. (2019). Strategies to improve capacity for policy monitoring and evaluation in the public sector. Journal of Reviews on Global Economics, 8, 490-499.
[40] Riemenschneider, N., McConnell, J. & Shejavali, K. (2021). Assessing and enhancing government data quality, from theory to practice. Oxford Policy Management.
[41] Hernandez, K., Ramalingam, B., & Wild, L. (2019). Towards evidence-informed adaptive management. ODI Working Paper 565. London: ODI.
[42] Wilkin, C. L., Campbell, J., Moore, S., & Simpson, J. (2018). Creating value in online communities through governance and stakeholder engagement. International Journal of Accounting Information Systems, 30, 56-68.
[43] Quinn, N. W., Sridharan, V., Ramirez-Avila, J., Imen, S., Gao, H., Talchabhadel, R., & McDonald, W. (2022). Applications of GIS and remote sensing in public participation and stakeholder engagement for watershed management.
[44] Hilty, D. M., Armstrong, C. M., Luxton, D. D., Gentry, M. T., & Krupinski, E. A. (2021). A scoping review of sensors, wearables, and remote monitoring for behavioral health: uses, outcomes, clinical competencies, and research directions. Journal of Technology in Behavioral Science, 6(2), 278-313.
[45] Tran, N. Q., Carden, L. L., & Zhang, J. Z. (2022). Work from anywhere: remote stakeholder management and engagement. Personnel Review, 51(8), 2021-2038.
[46] Larson, S., Measham, T. G., & Williams, L. J. (2010). Remotely engaged? Towards a framework for monitoring the success of stakeholder engagement in remote regions. Journal of environmental planning and management, 53(7), 827-845.
[47] Barclay, I. (2022). Providing verifiable oversight for scrutability, assurance, and accountability in data-driven systems. Cardiff University. Available at:
[48] Price, R. (2017). Approaches to remote monitoring in fragile states. Governance and Social.
[49] Madon, S., Reinhard, N., Roode, D., & Walsham, G. (2009). Digital inclusion projects in developing countries: Processes of institutionalization. Information technology for development, 15(2), 95-107.
[50] Scarlett, L. (2013). Collaborative adaptive management: challenges and opportunities. Ecology and Society, 18(3).
[51] Bastola, M., Locatis, C., & Fontelo, P. (2021). Diagnostic reliability of in-person versus remote dermatology: a meta-analysis. Telemedicine and e-Health, 27(3), 247-250.
[52] Basha, S. A., Cai, Q., Lee, S., Tran, T., Majerle, A., Tiede, S., & Gewirtz, A. H. (2024). Does Being In-Person Matter? Demonstrating the Feasibility and Reliability of Fully Remote Observational Data Collection. Prevention Science, 1-12.
[53] Herrera, Y. M., & Kapur, D. (2007). Improving data quality: Actors, incentives, and capabilities. Political Analysis, 15(4), 365-386.
[54] Prieto-Martin, P., Apgar, M., & Hernandez, K. (2020). Adaptive management in SDC: Challenges and opportunities. Institute of Development Studies.
[55] Aceves-Bueno, E., Adeleye, A. S., Bradley, D., Tyler Brandt, W., Callery, P., Feraud, M., & Tague, C. (2015). Citizen science as an approach for overcoming insufficient monitoring and inadequate stakeholder buy-in in adaptive management: criteria and evidence. Ecosystems, 18, 493-506.
[56] Kagoya, H. R., & Kibuule, D. (2018). Quality assurance of health management information system in Kayunga district, Uganda. African Evaluation Journal, 6(2), 1-11.
[57] Roberts, J., Onuegbu, C., Harris, B., Clark, C., Griffiths, F., Seers, K., & Boardman, F. (2025). Comparing In-Person and Remote Qualitative Data Collection Methods for Data Quality and Inclusion: A Scoping Review. International Journal of Qualitative Methods, 24, 16094069251316745.
[58] Greenhalgh, T., Rosen, R., Shaw, S. E., Byng, R., Faulkner, S., Finlay, T., & Wood, G. W. (2021). Planning and evaluating remote consultation services: a new conceptual framework incorporating complexity and practical ethics. Frontiers in digital health, 3, 726095.
[59] Puttkammer, N., Baseman, J. G., Devine, E. B., Valles, J. S., Hyppolite, N., Garilus, F., & Barnhart, S. (2016). An assessment of data quality in a multi-site electronic medical record system in Haiti. International journal of medical informatics, 86, 104-116.
[60] Weiskopf, N. G., Bakken, S., Hripcsak, G., & Weng, C. (2017). A data quality assessment guideline for electronic health record data reuse. Egems, 5(1), 14.
[61] Zuniga-Teran, A. A., Fisher, L. A., Meixner, T., Le Tourneau, F. M., & Postillion, F. (2022). Stakeholder participation, indicators, assessment, and decision-making: applying adaptive management at the watershed scale. Environmental Monitoring and Assessment, 194(3), 156.

Author Information
  • Business Science Institute, Iaelyon School of Management, Lyon, France

    Biography: Abdourahmane Ba, Statistician Engineer (ESEA-Dakar) and Doctor of Business Administration (BSI–IAE Lyon 3 Jean Moulin), has over 20 years of experience in public policy, evaluation, MEL systems, and development program management. He has led major programs and studies across Africa. A published researcher, he has authored peer-reviewed articles and books on MEL effectiveness, data quality, and policy evaluation. His expertise combines advanced analytics with institutional insight to inform decision-making, support reform, and advance inclusive development. Dr. Ba is widely recognized for strengthening learning systems and driving evidence-based public policy. He lives in Dakar, Senegal, at Villa 789, Grand Mbao.

    Research Fields: Monitoring and evaluation system, Knowledge management and evidence-based decision making, Development program evaluations, Public policy evaluation, Third-Party monitoring in constrained settings, Data quality management, Economic growth, Education.

  • Independent Researcher, International Development Professional, Nairobi, Kenya

Biography: Tom Muga is an independent international development researcher and MEL specialist based in Nairobi, Kenya, who holds an MSc in Information Systems and a BSc in Information Sciences. He has over 20 years of experience leading MEL systems, digital transformation initiatives, and program oversight for donor-funded projects across Sub-Saharan Africa. His work focuses on data quality assurance, adaptive management, and institutional resilience in fragile and conflict-affected settings.

    Research Fields: Monitoring and evaluation system, Knowledge management and evidence-based decision making, Development program evaluations, Public policy evaluation, Finance and operations management, Third-Party monitoring in constrained settings.

  • Department of Community Health, Amref International University, Nairobi, Kenya

    Biography: Patrick Okwarah is a Nairobi-based public health researcher with over 13 years of experience in mixed-methods research across conflict-affected and underserved settings. He holds a Master’s in Public Health and a BSc in Biochemistry, and is pursuing a PhD in Epidemiology. His expertise spans substance use, forced displacement, gender-based violence, and health systems strengthening. He works with Amref International University and serves as a Scientific Editor at Primary Health Care Practice Journal. Patrick has contributed to several donor-funded programs across Africa and the Eastern Mediterranean, covering the full research cycle from grant writing to policy influence.

    Research Fields: Health program and health systems, Qualitative methods and techniques, Program evaluation, Monitoring and evaluation system, Knowledge management, Data quality management.

  • Monitoring Evaluation Research and Learning, Alma Consult, Mogadishu, Somalia

    Biography: Mohamed Ali, a specialist in monitoring, evaluation, research, and learning (MERL), holds an MA in Economic Policy Management and a BA in Economics and Finance. With over ten years of experience, he has led third-party monitoring, evaluations, and adaptive programming across Somalia, Kenya, and Ethiopia. Mohamed has worked with FCDO, USAID, the EU, UN agencies, GIZ, and the World Bank. He is also the founder of Alma Consult, a regional research and MERL firm. Airport Road, Wadajir district, Mogadishu, Somalia.

    Research Fields: Monitoring and evaluation system, Knowledge management and evidence-based decision making, Development program evaluations, Public policy evaluation, Third-Party monitoring in constrained settings, Data quality management.
