The document describes a Data Quality Review (DQR) framework for assessing the quality of health facility data. The DQR is a multi-pronged approach that builds on previous data quality assessment tools to evaluate data quality in a more systematic way. It includes a desk review of existing health facility data and indicators as well as facility surveys to verify data accuracy and assess health management information systems. The DQR framework provides a standardized method for stakeholders to evaluate routine health data quality and link these findings to health sector planning activities.
The document discusses data quality reviews (DQR), which have two components: data verification and system assessment. Data verification examines reporting accuracy by reviewing source documents and comparing recounted data to reported data. System assessment reviews the adequacy of data collection, reporting, analysis, and use. Facilities and districts are surveyed. Data verification calculates a verification factor to indicate over- or under-reporting. System assessment identifies areas of weakness such as lack of guidelines, training, or supervision. Findings are analyzed by facility type, ownership, and location to identify areas for improvement.
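To make the verification factor concrete: it is typically computed as the recounted value divided by the reported value for an indicator over the review period, so values below 1 suggest over-reporting and values above 1 suggest under-reporting. A minimal sketch in Python, with an illustrative 10% tolerance that is not an official DQR threshold:

```python
def verification_factor(recounted: int, reported: int) -> float:
    """Ratio of recounted to reported events: <1 suggests over-reporting, >1 under-reporting."""
    if reported == 0:
        raise ValueError("Reported count is zero; verification factor is undefined.")
    return recounted / reported

# Example: a facility reported 120 ANC first visits, but the register recount found 105.
vf = verification_factor(recounted=105, reported=120)
print(f"Verification factor: {vf:.2f}")   # 0.88 -> likely over-reporting
if abs(1 - vf) > 0.10:                    # 10% tolerance, purely illustrative
    print("Flag facility for follow-up")
```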
This document discusses using statistics, databases, and mobile phones to enhance election monitoring through parallel vote tabulation (PVT). PVT involves citizens independently collecting official results from individual polling stations, transmitting the data via SMS to a central database, and analyzing the aggregated data to verify official results or suggest the true outcome. The document outlines how statistics can focus monitoring, databases can manage monitor and observation data, and SMS can speed information sharing. It provides an example of CODEO/CDD-Ghana's successful 2008 presidential election PVT that sampled 1,070 polling stations and transmitted 90% of observer data via SMS.
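To illustrate the statistical side of a PVT, the sketch below aggregates hypothetical observer tallies from a random sample of polling stations into an estimated vote share with a simple margin of error. The station figures are invented, and a real PVT would use a design-based variance estimate rather than the plain binomial one shown here.

```python
import math

# Hypothetical observer reports: (votes_for_candidate, total_valid_votes) per sampled station
stations = [(412, 903), (388, 850), (295, 640), (501, 1020), (356, 770)]

candidate_votes = sum(v for v, _ in stations)
total_votes = sum(t for _, t in stations)
p = candidate_votes / total_votes                  # estimated national vote share

se = math.sqrt(p * (1 - p) / total_votes)          # simple binomial standard error
low, high = p - 1.96 * se, p + 1.96 * se
print(f"Estimated share: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```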
Building New Institutional Capacity in M&E: The Experience of National AIDS C... (MEASURE Evaluation)
The document discusses capacity building efforts of the National AIDS Coordinating Authority of Nigeria to strengthen monitoring and evaluation systems. It describes how various assessment tools were used to identify gaps and priorities for strengthening data quality, monitoring, and evaluation. Specific interventions included the M&E Strengthening and Sustainability Toolkit, data quality assessments, training, and quarterly mentoring to build capacity at national and regional levels. The efforts helped establish standardized data collection and reporting, improve data quality and use, and create institutional memory to support effective HIV/AIDS programs.
IQChart is a patient management database that collects clinical data from HIV/AIDS patients to generate accurate monthly and quarterly reports for monitoring and evaluation. It was developed by AIDS Relief and ICAP to computerize paper-based patient registers and improve data analysis and clinical decision making. The tool is freely available, open source software that is used in over 90 treatment facilities in Rwanda to track over 54,000 patients. Future plans include integrating geographic information system mapping capabilities to help identify underserved areas and monitor program outcomes.
GLI TB Diagnostics Connectivity Guide 2016 (SystemOne)
This document provides an overview of diagnostics connectivity solutions for tuberculosis (TB) programs. It discusses how connectivity solutions can enable remote monitoring of diagnostic devices, automatically send test results to clinicians and health information systems, facilitate inventory management, and enhance disease surveillance and program monitoring. The document also covers the necessary software, hardware, internet connectivity, data security, personnel needs, and budgeting considerations for implementing diagnostics connectivity solutions. Overall, the document presents connectivity solutions as a way for TB programs to improve patient care and management while strengthening laboratory systems.
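As a rough sketch of the connectivity pattern described (a result automatically forwarded from a diagnostic site to a health information system), the snippet below posts a test result as JSON over HTTPS. The endpoint, payload fields, and credential are hypothetical placeholders, not the API of any particular product.

```python
import json
import urllib.request

# Hypothetical payload; field names are illustrative, not a specific device's schema
result = {
    "device_id": "GX-0042",
    "test": "MTB/RIF",
    "result": "MTB detected, RIF resistance not detected",
    "tested_at": "2016-05-12T09:30:00Z",
}

req = urllib.request.Request(
    "https://his.example.org/api/tb-results",            # placeholder endpoint
    data=json.dumps(result).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},          # placeholder credential
    method="POST",
)
with urllib.request.urlopen(req) as resp:                 # fails unless a real endpoint exists
    print(resp.status)
```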
GxAlert Monitors and Reduces High Testing Error Rates in Nigeria's GeneXpert ... (SystemOne)
Presentation on the use of SystemOne's GxAlert tool in Nigeria for monitoring and reducing diagnostic errors and accelerating positive outcomes in TB.
Published courtesy of Kehinde Jimoh Agbaiyero, Senior Technical Advisor - TB, Abt Associates
Skyhook's Location Platform provides precision positioning at a global scale using a variety of data sources. It collects sensor data from over 1 billion points including WiFi access points and cell towers across more than 200 countries. This data covers over 850 million people and results in many billions of location transactions per month. Skyhook fuses data from proprietary scanning devices, consumer devices, and third-party sources. It applies techniques such as detecting pathological data, characterizing source quality, and building algorithms to handle variable quality data. The system also continuously reviews and refreshes its data and algorithms to account for changes over time.
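A common way to handle observations of variable quality in positioning is to weight each one by an estimate of its source quality, down-weighting suspect sources. The sketch below is a generic weighted average under that assumption, not Skyhook's actual algorithm.

```python
# Each observation: (latitude, longitude, quality weight from source characterization)
observations = [
    (40.7129, -74.0060, 0.9),   # well-characterized WiFi access point
    (40.7140, -74.0070, 0.5),   # cell tower, coarser accuracy
    (40.9000, -74.2000, 0.05),  # suspected pathological source, heavily down-weighted
]

total_w = sum(w for _, _, w in observations)
lat = sum(la * w for la, _, w in observations) / total_w
lon = sum(lo * w for _, lo, w in observations) / total_w
print(f"Fused position estimate: ({lat:.4f}, {lon:.4f})")
```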
This document summarizes the implementation of GxAlert, a system for real-time monitoring of GeneXpert machines and TB/HIV data, in Nigeria. It provides background on Nigeria's high TB and HIV burdens. It describes how over 280 of 377 GeneXpert sites have been connected to GxAlert. Expected benefits include real-time test results and resistance detection, early warnings on machine issues, and timely reporting. Challenges discussed include reluctance to share access, need for user guides and support. The next phase aims to sustain connectivity, integrate across systems, and connect additional diagnostics.
Presentation given at the USAID SQALE Symposium, Bridging the Quality Gap - Strengthening Quality Improvement in Community Health Services, by Prisca Muange on behalf of USAID Assist. http://usaidsqale.reachoutconsortium.org/
ASLM-Alere: Importance of Quality Systems (SystemOne)
This document discusses the importance of quality systems and connectivity for decentralized HIV testing. It notes that point-of-care testing presents challenges for quality control compared to centralized laboratories due to harsher environments, less skilled operators, and fewer resources. However, connected diagnostic platforms can help by providing continuous quality monitoring, internal quality control, external quality assessment, and key performance/quality indicators. The document highlights a pilot project in Zimbabwe using an integrated laboratory network for automated quality monitoring and external quality assessment reporting. It concludes that connectivity is essential to ensure successful scale-up of testing and maximize return on investment as testing expands to more facilities.
Healthcare organizations facing migration to a new EHR will want to stop to consider these issues to ensure patient safety, satisfaction and clinical adoption.
The document introduces GSK's approach to implementing a risk-based monitoring (RBM) strategy using a new RBM tool. The tool analyzes data from multiple sources to generate risk indicators and overall risk scores for clinical trial sites. It then produces monitoring activity plans to guide monitors to focus on high-risk sites and activities. The tool aims to make monitoring more data-driven and risk-focused compared to traditional schedule-based approaches. A demo of the tool shows how it can detect changes in sites' risk scores over time and identify sites for further central review and discussion. Some initial challenges in implementation were confusion around the tool's purpose and indicators. Feedback indicated the tool helps prioritize monitoring efforts and drives more data-driven conversations
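As a generic illustration of turning several site-level indicators into an overall risk score, the sketch below uses invented indicator names, weights, and thresholds; it is not GSK's tool or its scoring method.

```python
# Hypothetical per-site indicators, each pre-scaled to 0 (low risk) .. 1 (high risk)
sites = {
    "Site 101": {"query_rate": 0.2, "overdue_data_entry": 0.1, "protocol_deviations": 0.3},
    "Site 207": {"query_rate": 0.7, "overdue_data_entry": 0.8, "protocol_deviations": 0.6},
}
weights = {"query_rate": 0.4, "overdue_data_entry": 0.3, "protocol_deviations": 0.3}

for site, indicators in sites.items():
    score = sum(indicators[name] * weight for name, weight in weights.items())
    band = "high" if score > 0.5 else "low/medium"   # threshold is illustrative only
    print(f"{site}: overall risk score {score:.2f} ({band} risk)")
```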
DIA 2014 Risk Based Monitoring - Neill Barron
This document discusses GSK's implementation of a risk-based monitoring (RBM) strategy using risk assessment tools. It provides 3 key points:
1) GSK uses an RBM technology that provides a consolidated "helicopter view" of risk across study sites, allowing them to drill down into risk signals and drive targeted monitoring interventions.
2) The technology uses core risk-based indicators along with study-specific indicators to identify risks at sites related to performance, data quality, and clinical data. Real study examples show how the data is used to determine required actions.
3) Benefits of the RBM strategy include improved study and data quality from early risk detection and more efficient monitoring. Challenges include
Use of Visualisations to Optimise Clinical Trials - Neill Barron
Spotfire is a data visualization tool that allows clinical trial teams to link and visualize data from multiple sources in one place. It enables early identification of trends, patterns and outliers through graphical visualizations to accelerate decision making and enhance study quality with reduced costs. Key benefits of Spotfire include increasing study quality by better detecting issues, enhancing data quality through early identification of data quality trends, accelerating decision making by reducing the time to access and analyze data, and decreasing study costs by reducing time spent reproducing data listings and reports. Spotfire supports various clinical trial roles by providing easy exploration of data and rapid identification of outliers for data managers, intuitive graphics for medical oversight rather than data listings for study leads, and dynamic graphing capabilities for statistic
Evaluating National Malaria Programs’ Impact in Moderate- and Low-Transmissio... (MEASURE Evaluation)
The framework highlights the importance of routine surveillance data and confirmed malaria incidence for evaluating national malaria programs in low- and moderate-transmission settings. Process evaluations assess program performance and coverage to determine when impact evaluations are needed. Impact evaluations then measure reductions in malaria burden using methods like interrupted time series and constructed controls while accounting for other factors. Key challenges include defining intervention maturity and coverage thresholds needed to achieve measurable impact. The framework emphasizes continuous evaluation along the implementation and impact pathways to guide program decisions.
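The interrupted time series approach mentioned above is often implemented as a segmented regression with terms for the baseline trend, the level change at the interruption, and the post-interruption change in slope. A minimal sketch on simulated monthly case counts using statsmodels; this illustrates the general method, not the framework's prescribed analysis.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly confirmed malaria cases: 24 months before and 24 after scale-up
months = np.arange(48)
intervention = (months >= 24).astype(int)            # 1 once the program has scaled up
time_since = np.where(intervention == 1, months - 24, 0)
cases = 500 - 1.5 * months - 60 * intervention - 4 * time_since + np.random.normal(0, 20, 48)

# Segmented regression: intercept, pre-trend, level change, post-trend change
X = sm.add_constant(np.column_stack([months, intervention, time_since]))
model = sm.OLS(cases, X).fit()
print(model.params)
```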
The document proposes creating a searchable vulnerability assessment database to address several issues:
1) There is currently no way to know about vulnerability assessments being conducted across different regions, species, and ecological elements.
2) New assessments are being undertaken without knowledge of previous relevant assessments.
3) Data and findings from completed assessments are not being shared across organizations.
The database would compile vulnerability assessment efforts into a single location to reduce costs and increase the value of existing assessments by sharing information. It would include assessments from federal, state, and local agencies as well as universities and non-profits.
This document discusses E-TB, an electronic tuberculosis recording system used in Rwanda. E-TB allows for online notification, treatment monitoring, and follow-up of TB patients. It records clinical and laboratory results to track treatment adherence and outcomes. E-TB has been used in Rwanda since 2014, with data entry and management done at health facilities and analysis and reporting at the national level. The benefits of E-TB include easy online information sharing, data reliability, and automated reporting to support decision making. However, weaknesses include reliance on internet connectivity and hardware, as well as need for training of health workers. Improving these areas is recommended to strengthen the E-TB system in Rwanda.
Post Acute Care: Patient Assessment Instrument and Payment Reform Demonstration (nashp)
Presented at the National Academy for State Health Policy's 20th Annual State Health Policy Conference in Denver, Colorado. Authors: Judith Tobin and Barbara Gage.
Optimising Clinical Trials Monitoring Data review - Neill Barron
This document outlines a strategic approach to optimize trial monitoring and data review. It aims to develop a comprehensive strategy that delivers high quality data and improved study performance through three main workstreams: 1) Focusing on essential data, 2) In-stream remote monitoring, and 3) Targeted and adaptive site monitoring practices. The objectives are to ensure quality through subject protection, protocol execution, and data integrity while improving performance through high enrollment, protocol compliance, and timeline delivery. The strategy proposes shifting monitoring efforts from extensive source data verification to remote monitoring and targeted on-site activities based on site performance. It argues this comprehensive approach can double productivity through quality focus, detection efficiency, and site ownership.
Dr. Kurt Rossow - Disease Mapping for PRRS (John Blue)
Disease Mapping for PRRS - Dr. Kurt Rossow, DVM, Veterinary Diagnostic Laboratory, University of Minnesota, from the 2013 Minnesota Pork Congress, January 16-17, Minneapolis, MN, USA.
More presentations at http://www.swinecast.com/2013-minnesota-pork-congress
On the ground experiences & challenges of a connected diagnostics GxAlert in ... (SystemOne)
This document summarizes Nigeria's experience implementing the GxAlert connectivity tool for GeneXpert machines. Key points include:
- GxAlert allows GeneXpert machines to report test results in real-time via a secure online network, making results actionable within the health system.
- Over 200 GeneXpert machines in Nigeria are now connected via GxAlert. This enables real-time notification of test results, machine errors, and stock levels.
- Challenges include inconsistent internet availability, incomplete primary data, and a need to better integrate GxAlert with other electronic reporting systems like e-TB Manager and DHIS2. Support is also needed to sustain connectivity and uniquely identify patients.
Clinical data capture involves collecting clinically significant data from subjects in clinical trials. This can be done via paper-based methods like case report forms or via electronic data capture (EDC) methods. EDC involves collecting data electronically and has advantages over paper methods like real-time reporting and faster data processing. Common EDC tools include using the internet, interactive voice response, personal digital assistants, and electronic case report forms (eCRFs). eCRFs allow direct entry of data into an electronic form without paper sources, eliminating errors from transcription.
The document discusses Sri Lanka's methodology for estimating HIV prevalence and projections using the Spectrum software. Key data sources for the model include surveillance data, program statistics on PMTCT, ART, and child treatment. Subpopulations like FSW, MSM, and drug users are modeled separately. Calibration is done to fit curves to available data. Estimated numbers of people living with HIV, new infections, and deaths in Sri Lanka are higher than reported numbers, suggesting the need for improved data and assumptions. The Spectrum software methodology produces national HIV estimates but has limitations for low prevalence settings like Sri Lanka.
The document summarizes a mobile application initiative in Doiwala block, Dehradun, Uttarakhand aimed at ensuring quality family planning services. The application collects client-based information through SMS to create reports on contraceptive users, service centers, and more. It allows easy access to information at all levels from village to state. The mobile-based system improves monitoring, demand calculation, and clinic services.
This document discusses clinical data management (CDM) systems and processes. It defines key terms like source data, source documents, and raw data. It then describes the essential steps in CDM including initial planning, data collection, review and verification, coding, query resolution, data entry and validation, output and archiving. Finally, it outlines requirements for a good CDM system including system validation, security, change control, and archiving. The goal of CDM is to generate an accurate, high-quality clinical trial database while ensuring compliance with regulations.
RHINO Forum: How can RHIS improve the delivery of HIV/AIDS services? (MEASURE Evaluation)
This document discusses how routine health information systems (RHIS) can be improved to better monitor linkages between HIV/AIDS services and other health services. Integrating separate vertical program reporting systems into a single national RHIS could facilitate client referrals, continuity of care, and achievement of program goals. However, challenges include harmonizing different recording forms and integrating programs not designed to be combined. The discussion forum explores issues around monitoring individual clients versus aggregates, defining linkage indicators, and ensuring data quality when integrating systems.
This document discusses indicators, data sources, and data quality for tuberculosis (TB) monitoring and evaluation. It outlines criteria for good indicators, such as being valid, reliable, specific, and sensitive. Both qualitative and quantitative indicators are described. Factors for selecting indicators include what different levels need to know, available data and resources. Routine TB data collection methods discussed include registers, reports, process monitoring, and special surveys. Ensuring high quality data is important for effective decision making, and standards like validity, integrity, precision, reliability and timeliness are outlined. Common impediments to data quality and steps to improve it, like training and supervision, are also reviewed.
Session 1 Presentation for Volume 1 Service Providers Manual Introduction HMI... (CharanjitBasumatary)
HMIS is a tool that helps gather, analyze, and use health information to improve health systems performance. It ensures a continuous flow of quality disaggregated health and healthcare services data to assist in local planning, implementation, management, monitoring and evaluation.
Data should be recorded in primary registers during service delivery and aggregated monthly into reporting formats. Each data element should only be entered once to reduce burden and errors. Data flows from facilities to Block, District, State and National levels. Knowledge is created when information is analyzed, communicated and acted upon. Indicators are used to convert data into meaningful information and measure progress toward targets.
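As a simple illustration of converting aggregated service data into an indicator and comparing it to a target (the figures and the target below are invented):

```python
# Hypothetical monthly aggregate compiled from facility reporting formats
children_fully_immunized = 184
estimated_eligible_children = 230     # denominator, e.g. from catchment population estimates
target_pct = 90.0

coverage_pct = 100.0 * children_fully_immunized / estimated_eligible_children
gap = target_pct - coverage_pct
print(f"Full immunization coverage: {coverage_pct:.1f}% "
      f"(target {target_pct}%, gap {gap:.1f} percentage points)")
```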
The document discusses methods for assessing the quality of health surveillance data used to monitor disease trends and inform public health programs and policies. It describes key factors that can impact data quality, such as changes in case finding efforts, recording and reporting systems, and case definitions. The document outlines indicators and analytical approaches that can help identify issues with completeness, consistency, and reliability of notification data over time and across regions. This includes checks for unusual fluctuations, variations in notification rates, and consistency of case type proportions. The next steps proposed are to establish data quality review units, conduct in-depth analyses guided by quality checks, and develop online platforms to share best practices.
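The fluctuation and consistency checks described can be approximated with simple rules, for example flagging months that deviate sharply from the series average or periods whose case-type proportions fall outside an expected range. The data and thresholds below are invented and purely illustrative.

```python
# Hypothetical monthly TB notifications for one district
notifications = [112, 108, 115, 109, 180, 111, 107, 113, 60, 110, 112, 109]

mean = sum(notifications) / len(notifications)
for month, count in enumerate(notifications, start=1):
    if abs(count - mean) / mean > 0.30:        # >30% deviation from the mean, illustrative cut-off
        print(f"Month {month}: {count} notifications deviates sharply from mean ({mean:.0f})")

# Consistency of case-type proportions (e.g. share of bacteriologically confirmed cases)
confirmed_share = [0.62, 0.61, 0.64, 0.35, 0.63]
for period, share in enumerate(confirmed_share, start=1):
    if not 0.50 <= share <= 0.80:              # expected range, again illustrative
        print(f"Period {period}: confirmed share {share:.0%} outside expected range")
```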
The document discusses the importance of data quality for monitoring and evaluation systems. It describes seven key dimensions of data quality - accuracy, reliability, completeness, precision, timeliness, integrity and confidentiality. It also outlines the different levels of an M&E system from service sites to national reporting and the roles and responsibilities needed at each level to ensure quality data collection, reporting and use. Tools are presented for strengthening M&E systems and assessing data quality.
A series of modules on project cycle, planning and the logical framework, aimed at team leaders of international NGOs in developing countries.
Part 8 of 11
A monitoring and evaluation system is needed to assess both structural and health sector components of the response to HIV in key populations. It is critical that these systems are practical, not overly complicated, and that they collect information that is current, useful, and readily used.
Development of health indicators and their measurement with.pptx (Dr. Nishant Mishra)
This document discusses health indicators and their development and measurement. It defines health indicators as summary measures that capture relevant information on different dimensions of health status and performance. It then provides examples of the uses of health indicators, such as to assess health needs, improve decision making, and compare health services between communities. The document outlines criteria for indicators, such as being relevant, scientifically sound, and applicable to users. It also discusses the steps to develop an indicator and components of a well-defined indicator, providing an example. Key issues around comprehensibility are also covered, such as using clear language and providing context to aid understanding.
The document discusses monitoring and evaluation frameworks for EMTCT (Elimination of Mother to Child Transmission) programmes. It notes that M&E systems need to be practical and collect useful, current information. Validation indicators and targets are used to monitor achieving EMTCT goals over time. Data is collected through various tools at the individual, facility, local, national and global levels for reporting and decision making purposes. Accurate record keeping and data management are important for monitoring implementation and evaluation of EMTCT programmes.
Epide 7.ppt epidemiology assignment for year one (GetahunAlega)
Public health surveillance involves the systematic collection, analysis, and dissemination of health data to monitor disease occurrence and trends. The main types of surveillance are passive, active, and sentinel. Surveillance data is used to determine disease magnitude, set priorities, monitor health events, and evaluate programs. Key features of a good surveillance system include timely notification and comprehensive response. Integrated Disease Surveillance and Response aims to strengthen national surveillance by coordinating activities and ensuring timely data sharing between programs.
The document provides an introduction to indicators for monitoring and evaluating HIV/AIDS programs. It discusses the essential components of an indicator, including having a clearly defined title, definition, purpose, data collection methodology, and interpretation guidelines. Good indicators should be based on quantitative data that can be collected feasibly and provide strategic information on performance, achievement, and accountability. While indicators have limitations, they are useful for comparing programs over time and geography at a high level.
This document provides guidance on data presentation and interpretation for program monitoring and evaluation. It covers choosing appropriate tables and graphs to summarize different types of data, as well as best practices for labeling and interpreting data visually. The key lessons are: use the right type of graph or table to clearly display the data; interpret findings by considering their relevance, potential causes, other relevant data sources, and need for further research; and service data on their own do not show causality, but can help track progress and identify issues.
Me module-3-data-presentation-and-interpretation-may-2 (TsegayeTesfaye4)
This document provides guidance on data presentation and interpretation for program monitoring and evaluation. It covers choosing appropriate tables and graphs for different types of data, best practices for labeling and formatting data visualizations, and tips for interpreting findings to draw meaningful programmatic insights. The key lessons are: use the right data visualization based on the type of data; label all components clearly; and interpret data by making comparisons, considering other information, and identifying implications and areas for further exploration.
This document discusses developing and strengthening monitoring and evaluation (M&E) systems for national tuberculosis (TB) programs. It identifies key elements of an effective M&E system and outlines five steps to strengthen implementation: 1) assessing current M&E practices, 2) developing an M&E plan, 3) establishing an M&E unit, 4) implementing the M&E plan, and 5) managing quality control. The document provides guidance on conducting a situation analysis, developing indicators and data collection methods, building M&E capacity, and ensuring quality monitoring and use of data.
Malaria Data Quality and Use in Selected Centers of Excellence in Madagascar:... (MEASURE Evaluation)
This document summarizes the results of a cross-sectional baseline survey assessing malaria data quality and use in health centers in Madagascar that were selected as Centers of Excellence to improve data practices. The survey found that while reporting completeness and timeliness were high, data accuracy remained an issue. Baseline performance on data quality indicators was similar between the intervention sites that would implement Centers of Excellence and control sites. The implementation of Centers of Excellence aims to drive improvements in data quality, analysis, and use for decision-making in Madagascar.
Improved_Smartcare-ART System Presentation_V6.pptx (Betsegaw1)
The document provides an overview and outline of a presentation on training for an updated EMR-ART V6.0 software system. Key points include:
- The rationale for improving the existing EMR-ART software to address user complaints, incorporate new HIV/AIDS program features and indicators, and establish data exchange with other systems.
- An overview of the new features in EMR-ART V6.0 including dashboards, patient management, treatment and follow up, viral load tracking, reporting, and data quality assurance.
- A demonstration of the system's capabilities like differentiated service delivery models, appointment scheduling, tracing lost patients, and generating reports for indicators.
The presentation aims to describe the updated
This document provides an introduction to monitoring and evaluation for national tuberculosis (TB) programs. It defines monitoring as the routine reporting of program implementation data, while evaluation assesses program impact at the population level. Monitoring data comes from routine reporting and addresses how well the program is being implemented, while evaluation measures outcomes in the target population and the program's relationship to those outcomes. The document explains that monitoring and evaluation are important for program management by informing operational and funding decisions, ensuring efficient resource use, and meeting stakeholder reporting needs. It provides examples of how monitoring and evaluation data have been applied to track global TB treatment trends, identify funding gaps, and assess the cost-effectiveness of different care models.
2010: Time for Minimum Standards for Health Facilities (MEASURE Evaluation)
The document discusses establishing minimum standards for health facilities to assess quality of services. It proposes defining core indicators based on accepted standards in domains like infrastructure, staffing, equipment. These indicators would be measured periodically for national-level health facilities through representative surveys. Results would be presented as individual indicators and summary indices to track changes over time, strengthen accountability, and advocate for improved health services. Comparable standards-based data made publicly available can provide evidence for policymakers to commit to better quality healthcare.
Strengthening Information Systems for Community Based HIV Programs (MEASURE Evaluation)
This document discusses strengthening information systems for community-based HIV programs. It describes the components and challenges of community-based HIV information systems. It also summarizes a technical consultation on information systems that presented tools and experiences, and proposed recommendations to fill gaps in community-based HIV information systems. The goal is to provide high quality data that improves programs and facilitates reporting throughout health systems.
Similar to RHINO Forum Presentation on DQR Framework
As countries continue to invest and make strides toward achieving the SDGs and universal health coverage, strong routine health information systems (RHIS) are fundamental to the effort. Well-functioning RHIS provide a wealth of data on a country’s health system, including service delivery, availability of a trained workforce, and reach of interventions, that can be harnessed to identify gaps and support evidence-based decision making. Yet, while many low-to-middle income (LMIC) countries have established a national RHIS structure, there are existing challenges related to the availability, analysis, and use of the data that have yet to be addressed.
This document outlines steps for conducting a data quality review including: 1) exporting data from a CSPro data entry application and pasting it into an Excel chartbook, 2) inputting analysis disaggregations in the chartbook, and 3) assessing the timeliness, completeness, and verification factors of the data as well as documenting any discrepancies or subnational results.
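A hedged sketch of the completeness and timeliness checks mentioned, comparing expected, received, and on-time reports; the counts and the 90% benchmark are invented, not official DQR thresholds.

```python
# Hypothetical district reporting summary for one quarter
expected_reports = 36        # 12 facilities x 3 months
received_reports = 33
on_time_reports = 28

completeness = 100.0 * received_reports / expected_reports
timeliness = 100.0 * on_time_reports / expected_reports
print(f"Completeness: {completeness:.1f}%  Timeliness: {timeliness:.1f}%")
if completeness < 90.0:      # illustrative benchmark only
    print("Follow up with non-reporting facilities")
```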
Information design is both a technical skill and an art form. To design great visualizations requires a diverse range of skill sets and a keen ability to understand the decisions to be made, the data available, the tools and platforms available for visualization design, and how to apply design best practices to create effective visualizations that communicate clearly. Even the most robust routine health information systems face challenges around how to visualize data in a way that facilitates decision-making by key stakeholders.
The document discusses the importance of routine health information systems for monitoring health goals in the post-2015 development agenda. It notes that facility-level data will be the primary source for monitoring 8 of the 26 SDG health indicators. However, current health information systems face challenges like poor data quality, lack of private sector data, and fragmented systems. New opportunities exist with advances in ICT and emphasis on accountability. The Health Data Collaborative aims to enhance coordination and efficiency across partners to strengthen country health information systems. This will help to integrate disease surveillance, align investments, develop standards, and build national capacity in data analysis and use.
This document discusses strengthening routine health information systems (RHIS) through regional networks. It provides background on RHIS and their importance for facility-based and community-based health planning, management, and disease surveillance. However, RHIS in many low and middle-income countries are inadequate due to issues like irrelevant data collection, centralized management, and fragmented disease-specific systems. The Routine Health Information NetwOrk (RHINO) was created to advocate for and improve RHIS performance through activities like workshops, online resources, and capacity building. RHINO also aims to promote the establishment of regional RHIS networks for knowledge sharing and strengthening country investments in public health data systems.
RHINO is a global network that connects people who believe health can be promoted through high-quality and sustainable health information systems. The document provides information on RHINO, including its website and Twitter handle, and asks where RHINO members are located globally.
This document discusses strengthening routine health information systems in Africa through regional collaboration. It reviews where sub-Saharan Africa is currently in terms of health information system development and global trends. It explores existing networks like the African Centre for eHealth Excellence and the HISP network that can be leveraged. Finally, it proposes next steps like consolidating the efforts of these networks to implement a 5-point call to action and developing a monitoring and evaluation framework for peer review across countries.
The document discusses the rationale for strengthening routine health information systems (RHIS) through regional networking in Asia. It outlines the mission, vision, objectives, and governance structure of the Asia eHealth Information Network (AeHIN) RHIS Focus Group, which was established to facilitate sharing of RHIS best practices between countries. The focus group aims to support countries in monitoring health progress, advocate for using RHIS data for decision-making, build RHIS capacity through training, and establish partnerships within and between countries. Future plans include refining a regional RHIS database, setting up communication platforms, and knowledge sharing activities.
The RELACSIS network was established in 2009 to strengthen health information systems in Latin America and the Caribbean through knowledge sharing. It includes academics, public health professionals, and government agencies. RELACSIS coordinates virtual forums and annual meetings to disseminate best practices. It is currently coordinated by PAHO/WHO and has over 4,500 members across 12 areas of interest. The network receives funding from PAHO, USAID, and others to advance its work in areas like data quality and electronic health records.
The document discusses Ethiopia's community-based health information system. It describes how health extension workers play a key role in collecting patient data on family folders and aggregating the data to generate service statistics. The data is reported to health centers and shared with communities. It allows health managers to monitor activities at the community level and identify unusual data patterns. The system was recognized as one of the top 10 USAID health success stories in 2014. It faces ongoing challenges around staff turnover and ensuring continuous capacity building and supervision.
Kickoff webinar slides from the Spring 2016 RHINO forum on health worker information systems, presented by Carl Leitner and Amanda Puckett BenDor from Intrahealth
Enhanced Enterprise Intelligence with your personal AI Data Copilot.pdf (GetInData)
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by the AI market leaders, such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is a growth in interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach for LLMs context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how we built upon these three concepts a robust Data Copilot that can help to democratize access to company data assets and boost performance of everyone working with data platforms.
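As a rough, generic sketch of the RAG pattern described, retrieving the most relevant documentation snippets and prepending them to the model prompt; this is not GetInData's implementation, and embed() and generate() are placeholders for whichever embedding model and LLM are used.

```python
from typing import Callable, List, Tuple

def retrieve(question: str, corpus: List[str],
             embed: Callable[[str], List[float]], k: int = 3) -> List[str]:
    """Return the k corpus snippets whose embeddings score highest against the question."""
    def dot(a: List[float], b: List[float]) -> float:
        return sum(x * y for x, y in zip(a, b))
    q = embed(question)
    scored: List[Tuple[float, str]] = sorted(((dot(embed(s), q), s) for s in corpus), reverse=True)
    return [snippet for _, snippet in scored[:k]]

def answer(question: str, corpus: List[str], embed, generate) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(question, corpus, embed))
    prompt = f"Use only this context to answer:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```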
Why do we need yet another (open-source) Copilot?
How can we build one?
Architecture and evaluation
State of Artificial intelligence Report 2023 (kuntobimo2016)
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen, with the goal of triggering an informed conversation about the state of AI and its implications for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
Learn SQL from basic queries to Advance queries (manishkhaire30)
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
2. |
2
Why is health facility data important?
For many indicators, it is the only continuous/frequent source of data
It is most often the only data source available at the subnational level -- important for equity
For many key indicators, it is the sole source of data -- for example, PMTCT, ART, TB treatment outcomes, TB notification, confirmed malaria cases, causes of death, etc.
3. |
3
Quality of health facility data – why do we care?
High-quality data provide evidence to providers
and managers to optimize healthcare coverage,
quality, and services.
High-quality data help:
― Form an accurate picture of health needs, programs, and
services in specific areas
― Inform appropriate planning and decision making
― Inform effective and efficient allocation of resources
― Support ongoing monitoring, by identifying best practices
and areas where support and corrective measures are
needed
4. |
4
Most common problems affecting data quality
Lack of guidelines to fill out the main data sources
and reporting forms
Personnel not adequately trained
Misunderstanding about how to compile data,
use tally sheets, and prepare reports
Un-standardized source documents and reporting
forms
Arithmetic errors during data compilation
Lack of a review process before report submission to the next level
5. |
5
Many tools have been used to address data quality
GAVI DQA
WHO Immunization Data Quality Self-assessment
(DQS)
Global Fund/MEASURE Evaluation DQA
RDQA - Self-assessment version of DQA (with in-country adaptations)
Global Fund OSDV
PRISM
WHO Data Quality Report Card (DQRC)
6. |
6
DQR - harmonized approach to assessing and improving data quality
The DQR is a multi-pronged, multi-partner framework for country-led data quality assurance that proposes a harmonized approach to assessing data quality
It is a framework that builds on the earlier program-specific quality tools and methods while proposing the examination of data quality in a more systematic way that can meet the needs of multiple stakeholders
It also includes the examination of existing facility data (which does not require additional data collection), an element missing from earlier tools
It provides valuable information on fitness-for-purpose to support the Health Sector Strategic Planning Cycle (e.g. health sector or program reviews)
7. |
7
Why are we recommending a harmonized approach to data quality?
Data quality is a systems issue - multiple assessments for different diseases/programs are inefficient and burdensome for the health system
Can we satisfy the data quality assurance needs of all stakeholders with one holistic data quality assessment?
The application of a standard framework to evaluate data quality enables an understanding of the adequacy of routine data used for health sector planning - can we link data quality assessment to health planning efforts?
It permits stakeholders to know that the routine data have undergone a known minimum level of scrutiny, which lends credibility to and confidence in the data
9. |
9
Recommended Core Program Indicators
Program Area | Indicator Name | Full Indicator
Maternal Health | Antenatal care 1st visit (ANC1) | Number (%) of pregnant women who attended at least once during their pregnancy
Immunization | DTP3/Penta3 | Number (%) of children < 1 year receiving three doses of DTP/Penta vaccine
HIV/AIDS | ART coverage | Number and % of people living with HIV who are currently receiving ART
TB | Notified cases of all forms of TB | Number (%) of all forms of TB cases (i.e. bacteriologically confirmed plus clinically diagnosed) reported to the national health authority in the past year (new and relapse)
Malaria | Confirmed malaria cases | Number (%) of all suspected malaria cases that were confirmed by microscopy or RDT
13. 13
Metrics for Data Quality Performance
Completeness and timeliness
― Completeness of reports
― Completeness of data
― Timeliness of reports
Internal consistency
― Accuracy
― Outliers
― Trends
― Consistency between indicators
External consistency
― Data triangulation
― Comparison with survey data
― Consistency of population trends
External comparisons (population denominators)
14. 14
Completeness and Timeliness of Data
This examines the extent to which:
Data reported through the system are available and
adequate for the intended purpose
All entities that are supposed to report are actually reporting
Data elements in submitted reports are complete
Reports are submitted/received on time through the levels
of the information system data flow
15. 15
Completeness and Timeliness of Data
• Completeness of reports (%) = (# total reports available or received) / (# total reports expected)
• Completeness of indicator data (%) = (# indicator values entered (not missing) in the report) / (# total expected indicator values)
• Timeliness (%) = (# reports submitted or received on time) / (# total reports available or received)
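The three reporting-performance metrics above can be computed directly from report counts. Below is a minimal sketch in Python (an illustration, not the official DQR Excel or DHIS tools); the counts are hypothetical placeholders.

# Illustrative counts (hypothetical): 5 districts reporting monthly for one year.
reports_expected = 60            # 5 districts x 12 monthly reports
reports_received = 57
reports_on_time = 51
indicator_values_expected = 300  # e.g. 5 indicators x 60 reports
indicator_values_entered = 288   # non-missing values in the submitted reports

completeness_of_reports = reports_received / reports_expected * 100
completeness_of_indicator_data = indicator_values_entered / indicator_values_expected * 100
# Note: per slide 15, the timeliness denominator is reports received, not reports expected.
timeliness = reports_on_time / reports_received * 100

print(f"Completeness of reports:        {completeness_of_reports:.1f}%")         # 95.0%
print(f"Completeness of indicator data: {completeness_of_indicator_data:.1f}%")  # 96.0%
print(f"Timeliness:                     {timeliness:.1f}%")                      # 89.5%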
16. 16
Internal Consistency of Reported Data
This dimension examines:
The accuracy of reporting of selected indicators, by
reviewing source documents
Whether data are free of outliers (within bounds), by
assessing whether specific reported values within the
selected period (such as monthly) are extreme, relative to
the other values reported
Trends in reporting over time, to identify extreme or
implausible values year-to-year
Program indicators compared to other indicators with which they have a predictable relationship, to determine whether the expected relationship exists between the two indicators
17. 17
Internal Consistency: Outliers
Metric: Outliers (analyze each indicator separately)
Severity: Extreme (at least 3 standard deviations from the mean)
― National level: % of monthly subnational unit values that are extreme outliers
― Subnational level: # (%) of subnational units in which ≥1 of the monthly subnational unit values over the course of 1 year is an extreme outlier value
Severity: Moderate (between 2–3 standard deviations from the mean, or >3.5 on the modified Z-score method)
― National level: % of subnational unit values that are moderate outliers
― Subnational level: # (%) of subnational units in which ≥2 of the monthly subnational unit values over the course of 1 year are moderate outliers
18. 18
Example: Outliers in a Given Year
District | Monthly values (months 1–12) | Total Outliers | % Outliers
A | 2543 2482 2492 2574 3012 2709 3019 2750 3127 2841 2725 2103 | 1 | 8.3%
B | 1184 1118 1195 1228 1601 1324 1322 711 1160 1178 1084 1112 | 2 | 16.7%
C | 776 541 515 527 857 782 735 694 687 628 596 543 | 0 | 0%
D | 3114 2931 2956 4637 6288 4340 3788 3939 3708 4035 3738 3606 | 1 | 8.3%
E | 1382 1379 1134 1378 1417 1302 1415 1169 1369 1184 1207 1079 | 0 | 0%
Nat'l | 0 0 0 0 2 0 0 1 0 0 0 1 | 4 | 6.7%
Months with at least one moderate outlier on the district monthly reports
are shown in red.
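As a minimal sketch (not the DQR Excel tool), the following Python snippet applies the outlier definitions from slide 17 to one district's 12 monthly values, using District D from the table above; the 0.6745 constant is the standard factor used in the modified Z-score.

from statistics import mean, stdev, median

# District D monthly values from the example table above
monthly = [3114, 2931, 2956, 4637, 6288, 4340, 3788, 3939, 3708, 4035, 3738, 3606]

m, sd = mean(monthly), stdev(monthly)
med = median(monthly)
mad = median([abs(x - med) for x in monthly])   # median absolute deviation

for month, value in enumerate(monthly, start=1):
    z = abs(value - m) / sd
    mod_z = 0.6745 * abs(value - med) / mad if mad else 0.0
    if z >= 3:
        print(f"Month {month}: {value} is an extreme outlier (z = {z:.1f})")
    elif 2 <= z < 3 or mod_z > 3.5:
        print(f"Month {month}: {value} is a moderate outlier (z = {z:.1f}, modified z = {mod_z:.1f})")
# Only month 5 (6288) is flagged, matching the single moderate outlier shown for District D.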
19. 19
Internal Consistency: Trends Over Time
Metric: Trends / consistency over time (analyze each indicator separately)
National level: Conduct one of the following, based on the indicator's expected trend:
• Indicators or programs with expected growth: compare the current year to the value predicted from the trend in the 3 preceding years
• Indicators or programs expected to remain constant: compare the current year to the average of the 3 preceding years
Graphic depiction of the trend to determine plausibility based on programmatic knowledge
Subnational level: # (%) of districts whose ratio of current year to predicted value (or current year to average of the preceding 3 years) differs from the national ratio by at least ± 33%
20. 20
Example: Trends over Time
District | 2010 | 2011 | 2012 | 2013 | Mean of Preceding 3 Years (2010–2012) | Ratio of 2013 to Mean of 2010–2012 | % Difference between National and District Ratios
A | 30242 | 29543 | 26848 | 32377 | 28878 | 1.12 | 0.03
B | 19343 | 17322 | 16232 | 18819 | 17632 | 1.07 | 0.08
C | 7512 | 7701 | 7403 | 7881 | 7539 | 1.05 | 0.09
D | 15355 | 15047 | 14788 | 25123 | 15063 | 1.67 | 0.44
E | 25998 | 23965 | 24023 | 24259 | 24662 | 0.98 | 0.16
National | 98450 | 93578 | 89294 | 108459 | 93774 | 1.16 | -
Consistency trend: Comparison of district ratios to national ratios
Any difference between district and national ratio that is ≥33% is
highlighted in red.
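A minimal sketch of the slide 19 rule for indicators expected to remain constant, applied to the slide 20 figures: each district's ratio of the current year to the mean of the three preceding years is compared with the national ratio, and districts differing by 33% or more are flagged. (This is an illustration in Python, not the DQR tool itself.)

# (preceding 3 years [2010-2012], current year 2013) per district, from slide 20
districts = {
    "A": ([30242, 29543, 26848], 32377),
    "B": ([19343, 17322, 16232], 18819),
    "C": ([7512, 7701, 7403], 7881),
    "D": ([15355, 15047, 14788], 25123),
    "E": ([25998, 23965, 24023], 24259),
}
national_prev, national_curr = [98450, 93578, 89294], 108459
national_ratio = national_curr / (sum(national_prev) / 3)   # ~1.16

for name, (prev, curr) in districts.items():
    ratio = curr / (sum(prev) / 3)
    diff = abs(ratio - national_ratio) / national_ratio
    flag = "FLAG" if diff >= 0.33 else "ok"
    print(f"District {name}: ratio = {ratio:.2f}, difference vs national = {diff:.0%} ({flag})")
# Only District D (ratio 1.67, ~44% from the national ratio) is flagged.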
21. 21
Internal Consistency: Comparing Related Indicators
Metric: Consistency among related indicators
Maternal Health: ANC1 - IPT1 or TT1 (should be roughly equal)
― Subnational level: # (%) of subnational units where there is an extreme difference (≥ ± 10%)
Immunization: DTP3 dropout rate = (DTP1 - DTP3)/DTP1 (should not be negative)
― Subnational level: # (%) of subnational units with # of DTP3 immunizations > DTP1 immunizations (negative dropout)
HIV/AIDS: ART coverage - HIV coverage (should be <1)
― Subnational level: # (%) of subnational units where there is an extreme difference (≥ ± 10%)
TB: TB cases notified - TB cases on treatment (should be roughly equal)
― Subnational level: # (%) of subnational units where there is an extreme difference (≥ ± 10%)
Malaria: # confirmed malaria cases reported - cases testing positive (should be roughly equal)
― Subnational level: # (%) of subnational units where there is an extreme difference (≥ ± 10%)
22. 22
Example: Internal Consistency
District | ANC1 | IPT1 | Ratio of ANC1 to IPT1 | % Difference between National & District Ratios
A | 20995 | 18080 | 1.16 | 0.02
B | 18923 | 16422 | 1.15 | 0.02
C | 7682 | 6978 | 1.10 | 0.07
D | 12663 | 9577 | 1.32 | 0.12
E | 18214 | 15491 | 1.18 | 0
National | 78477 | 66548 | 1.18 | -
% difference between ANC1 and IPT1, by district
Districts with % difference ≥10% are flagged in red.
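As a minimal sketch of the ANC1 : IPT1 comparison above (an illustration, not the DQR tool), each district ratio is compared with the national ratio and districts differing by 10% or more are flagged; the figures reproduce the slide 22 example.

data = {  # district: (ANC1, IPT1) from slide 22
    "A": (20995, 18080),
    "B": (18923, 16422),
    "C": (7682, 6978),
    "D": (12663, 9577),
    "E": (18214, 15491),
}
national_anc1 = sum(anc for anc, _ in data.values())   # 78477
national_ipt1 = sum(ipt for _, ipt in data.values())   # 66548
national_ratio = national_anc1 / national_ipt1         # ~1.18

for district, (anc1, ipt1) in data.items():
    ratio = anc1 / ipt1
    pct_diff = abs(ratio - national_ratio) / national_ratio
    if pct_diff >= 0.10:
        print(f"District {district}: ANC1/IPT1 = {ratio:.2f}, "
              f"{pct_diff:.0%} from the national ratio of {national_ratio:.2f} - investigate")
# Only District D (ratio 1.32, ~12% from the national ratio) is flagged.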
23. 23
External Consistency with Other Data Sources
This dimension examines the level of agreement
between two sources of data measuring the same
health indicator.
The two most common sources of data are:
The routinely collected and reported data from the
health management information system (HMIS) or program-
specific information system
A periodic population-based survey
24. 24
External Consistency: Compare with Survey Results
Example Indicator | National Level | Subnational Level
ANC 1st visit | Ratio of facility ANC1 coverage rates to survey ANC1 coverage rates | # (%) of aggregation units used for the most recent population-based survey (such as province/state/region) whose ANC1 facility-based coverage rates and survey coverage rates differ by at least 33%
3rd dose DTP3 vaccine | Ratio of DTP3 coverage rates from routine data to survey DTP3 coverage rates | # (%) of aggregation units used for the most recent population-based survey (such as province/state/region) whose DTP3 facility-based coverage rates and survey coverage rates differ by at least 33%
25. 25
Example: External Consistency
District | Facility Coverage Rate | Survey Coverage Rate | Ratio of Facility to Survey Rates | % Difference between Facility and Survey Rates
A | 1.05 | 0.95 | 1.10 | 10%
B | 0.93 | 0.98 | 0.96 | 4%
C | 1.39 | 0.90 | 1.54 | 54%
D | 1.38 | 0.92 | 1.50 | 50%
E | 0.76 | 0.95 | 0.80 | 20%
National | 1.10 | 0.94 | 1.17 | 17%
Comparison of HMIS and survey coverage rates for ANC1
Differences ≥ 33% are highlighted in red.
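A minimal illustration (not the DQR tool) of the ratio check above: the facility-based coverage rate is divided by the survey coverage rate and units differing by at least 33% are flagged. Rates approximate the slide 25 example, expressed as proportions.

coverage = {  # district: (facility-based rate, survey rate)
    "A": (1.05, 0.95),
    "B": (0.93, 0.98),
    "C": (1.39, 0.90),
    "D": (1.38, 0.92),
    "E": (0.76, 0.95),
}

for district, (facility, survey) in coverage.items():
    ratio = facility / survey
    pct_diff = abs(ratio - 1)
    flag = "FLAG (>= 33%)" if pct_diff >= 0.33 else "ok"
    print(f"District {district}: facility/survey = {ratio:.2f} ({pct_diff:.0%} difference, {flag})")
# Districts C and D are flagged, as in the slide 25 example.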
26. 26
External Comparison of Population Data
This dimension examines two points:
The adequacy of the population data used in the
calculation of health indicators
The comparison of two different sources of population
estimates (for which the values are calculated differently) to
see the level of congruence between the two sources
27. 27
External Comparison of Population Data
Metric | National Level | Subnational Level
Consistency of population projections | Ratio of the population projection of live births from the country census bureau/bureau of statistics to a United Nations live births projection for the country | NA
Consistency of denominator between program data & official government population statistics | Ratio of the population projection for select indicator(s) from the census to the values used by programs | # (%) of subnational units where there is an extreme difference (e.g., ±10%) between the 2 denominators
28. 28
External Comparisons of Population Denominators
District | Official Government Estimate for Live Births | Health Program Estimate for Live Births | Ratio of Official Government to Health Program Estimates
A | 29855 | 29351 | 1.02
B | 25023 | 30141 | 0.83
C | 6893 | 7420 | 0.93
D | 14556 | 14960 | 0.97
E | 25233 | 25283 | 1.00
National | 101560 | 107155 | 0.95
Comparison of official government and health program live birth estimates for national and subnational administrative units
Administrative units with differences ≥ ±10% are highlighted in red.
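A minimal illustration (not the DQR tool) of the denominator comparison above, using the slide 28 figures: the official government live-birth estimate is divided by the health-programme estimate and units differing by more than ±10% are flagged.

live_births = {  # district: (official government estimate, health programme estimate)
    "A": (29855, 29351),
    "B": (25023, 30141),
    "C": (6893, 7420),
    "D": (14556, 14960),
    "E": (25233, 25283),
}

for district, (official, programme) in live_births.items():
    ratio = official / programme
    if abs(ratio - 1) > 0.10:
        print(f"District {district}: official/programme = {ratio:.2f} - review the denominators in use")
# Only District B (ratio 0.83) exceeds the +/-10% threshold in this example.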
29. 29
How do we do the desk review?
A data quality app has been created for DHIS and can be
downloaded by users and applied to their country DHIS
databases
For those that do not have DHIS, an Excel tool has been
developed to support data quality analysis
The principles of the desk review can be applied in any
software that a country has and are not limited to the
tools presented
31. 31
Facility Survey Component of DQR
There are 2 components:
― Data verification – Examines the accuracy of reporting of selected
indicators, by reviewing source documents
― System assessment -- Reviews the adequacy of the system to collect, compile, transmit, analyze, and use HMIS & program data
Survey at 2 levels
― Health facility
― District
32. 32
Accuracy: Data Verification
Quantitative: compares recounted data to reported data
Assess on a limited scale if sites are collecting and reporting data accurately and on time
Implement in 2 stages:
― In-depth verifications at the service delivery sites
― Follow-up verifications at the intermediate and central levels
34. 34
Accuracy: Verification Factor
Verification Factor = Recounted data / Reported data
Over-reporting: VF < 100%
Under-reporting: VF > 100%
Suggested range of acceptability: 100% +/- 10% (90%–110%)
35. 35
Verification factor
Weighted mean of verification ratios
Summarizes information on the reliability of the data reporting system
Indicates the degree of over-/under-reporting in the
system
― e.g. VF = 0.80 indicates that of the total reported number of
events, approximately 80% could be verified in source
documents -> over-reporting
36. 36
Verification Factor Example
District | Indicator 1 (Recounted | Reported | VF) | Indicator 2 (Recounted | Reported | VF)
A | 1212 | 1065 | 1.14 | 4009 | 4157 | 0.96
B | 1486 | 1276 | 1.16 | 3518 | 3686 | 0.95
C | 357 | 387 | 0.92 | 672 | 779 | 0.86
D | 2987 | 3849 | 0.78 | 1361 | 1088 | 1.25
E | 4356 | 4509 | 0.97 | 4254 | 3970 | 1.07
Data accuracy by district
Indicators flagged in red are those with verification factors that differ from 1 by more than ±10% (i.e. outside 0.90–1.10).
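As a minimal illustration (not the DQR analysis tool), the snippet below computes the verification factor for each district from the slide 36 figures for Indicator 1 and classifies the result against the suggested 90%-110% acceptability range.

counts = {  # district: (recounted, reported) for Indicator 1, slide 36
    "A": (1212, 1065),
    "B": (1486, 1276),
    "C": (357, 387),
    "D": (2987, 3849),
    "E": (4356, 4509),
}

for district, (recounted, reported) in counts.items():
    vf = recounted / reported
    if vf < 0.90:
        verdict = "over-reporting"       # fewer events found in source documents than were reported
    elif vf > 1.10:
        verdict = "under-reporting"      # more events found in source documents than were reported
    else:
        verdict = "within the acceptable range"
    print(f"District {district}: VF = {vf:.2f} ({verdict})")
# Districts A and B show under-reporting and District D over-reporting, as flagged on slide 36.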
38. 38
Data verification
Recommended maximum of 5 indicators for review
—ANC1, DTP3/Penta3, ART coverage, TB cases, malaria cases (confirmed)
Select a time period for the verification (3 months)
— e.g. July, August, September 2016
For each indicator:
—Review the source documents and reports
—Recount the number of events
—Compare the recount to the reported events
—Determine reasons for any discrepancies
39. 39
System Assessment Indicators
Indicator | Facility | District
Presence of trained staff | X | X
Presence of guidelines | X | X
No recent stock-out of data collection tools | X | X
Recently received supervision and written feedback | X | X
Evidence of analysis and use of data | X | X
40. 40
If the sampling permits it,
system assessment findings can be disaggregated by strata
41. 41
Are there tools to support data verification
and system assessment?
A paper-based questionnaire is available that can be adapted to a
country situation
A data collection program has been developed in CSPro for tablets
An analysis tool has been developed in Excel to support the analysis of the data collected during this exercise
47. 47
Input Quality Thresholds
Thresholds are shown in two columns: Recommended (Col 1) and User-defined (Col 2).
Domain 1: Completeness and Consistency of Reporting/Indicator Data
1a - Completeness and Timeliness of Reporting from Health Facilities and Aggregation Levels (District, Region, Province)
― 1a1a-1a4b: Completeness and timeliness of reporting at the health facility, district, region, and province levels - recommended threshold 75% for each
1b - Completeness of Indicator Reporting (% of data elements that are non-zero values; % of data elements that are non-missing values)
― 1b1 Program Area 1: Maternal_Health - Indicator 1: ANC 1st Visit - 90%
― 1b2 Program Area 2: Immunization - Indicator 1: 3rd dose DPT-containing vaccine - 67%
― 1b3 Program Area 3: HIV_AIDS - Indicator 1: Number of HIV+ persons currently on ART - 90%
― 1b4 Program Area 4: Malaria - Indicator 1: Number of confirmed malaria cases reported - 90%
― 1b5 Program Area 5: TB - Indicator 1: Number of notified TB cases (all forms of TB) - 75%
Quality thresholds: 'Quality thresholds' are the values that set the limits of acceptable error in data reporting. The analyses in the DQR compare results to these thresholds to judge the quality of the data. Recommended values are included for each metric in column 1. User-defined thresholds can be entered in column 2 and take precedence over the values in column 1.
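A minimal sketch, assuming the column 1 / column 2 behaviour described above: a user-defined threshold, when present, overrides the recommended one. The metric names and the override value are illustrative, not taken from the tool.

# Recommended thresholds (column 1) and user-defined overrides (column 2); names are hypothetical.
recommended = {
    "completeness_of_district_reporting": 0.75,
    "anc1_indicator_completeness": 0.90,
}
user_defined = {
    "anc1_indicator_completeness": 0.80,   # country-specific override
}

def threshold(metric: str) -> float:
    """Return the user-defined threshold if one was entered, otherwise the recommended value."""
    return user_defined.get(metric, recommended[metric])

print(threshold("completeness_of_district_reporting"))  # 0.75 (recommended)
print(threshold("anc1_indicator_completeness"))         # 0.80 (user-defined override)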
52. 52
Summary Dashboard
BURUNDI - ANNUAL DATA QUALITY REVIEW: RESULTS, 2016
No. | Indicator | Definition | National Score (%) | # of districts not attaining quality threshold | % of districts not attaining quality threshold
DOMAIN 1: COMPLETENESS OF REPORTING
Indicator 1: Completeness and timeliness of reporting
1a | Completeness of District Reporting | National district reporting completeness rate and districts with poor completeness of reporting | 99.1% | 1 | 2.1%
1b | Timeliness of District Reporting | National district reporting timeliness rate and districts with poor timeliness of reporting | 90.3% | 3 | 6.4%
1c | Completeness of Facility Reporting | National facility reporting completeness rate and districts with poor facility reporting completeness | 96.1% | 8 | 17.0%
1d | Timeliness of Facility Reporting | National facility reporting timeliness rate and districts with poor facility reporting timeliness | 89.6% | 11 | 23.4%
Indicator 1e: Completeness of indicator data - presence of missing and zero values
1e.1 Completeness of indicator data (missing values):
― Maternal_Health - ANC 1st Visit | 98.9% | 1 | 2.1%
― Immunization - 3rd dose DPT-containing vaccine | 99.3%
― HIV_AIDS - Number of HIV+ persons in palliative care | 99.8%
― Malaria - Number of confirmed malaria cases reported | 99.8%
― Immunization - OPV3 | 98.9% | 1 | 2.1%
― Multi-program - Penta 1st doses | 99.1%
1e.2 Completeness of indicator data (zero values):
― Maternal_Health - ANC 1st Visit | 98.9% | 2 | 4.3%
― Immunization - 3rd dose DPT-containing vaccine | 99.5%
― HIV_AIDS - Number of HIV+ persons in palliative care | 100.0%
― Malaria - Number of confirmed malaria cases reported | 99.5% | 1 | 2.1%
― Immunization - OPV3 | 100.0%
― Multi-program - Penta 1st doses | 100.0%
Indicator 1f: Consistency of reporting completeness over time
1f.1 | Consistency of Reporting Completeness - District Reporting | Consistency of district reporting completeness and districts deviating from the expected trend | 105.8%
1f.2 | Consistency of Reporting Completeness - Facility Reporting | Consistency of facility reporting completeness and districts deviating from the expected trend | 103.5%
DOMAIN 2: INTERNAL CONSISTENCY OF REPORTED DATA
54. 54
Domain 1: Completeness of Indicator Data
Indicator 1e: Completeness of Indicator Reporting - Presence of Missing and Zero Values, 2016
Quality threshold: <= 90%
Program Area and Indicator | Type | National score (%) | Districts with > user-defined % of zero or missing values (No. | % | Name)
Maternal_Health - ANC 1st Visit | Missing | 98.9% | 1 | 2.1% | District 44
Maternal_Health - ANC 1st Visit | Zero | 98.9% | 2 | 4.3% | District 17, District 29
Immunization - 3rd dose DPT-containing vaccine | Missing | 99.3% | 1 | 2.1% | District 19
Immunization - 3rd dose DPT-containing vaccine | Zero | 99.5% | 1 | 2.1% | District 17
HIV_AIDS - Number of HIV+ persons in palliative care | Missing | 99.8% | -
HIV_AIDS - Number of HIV+ persons in palliative care | Zero | 100.0% | -
Malaria - Number of confirmed malaria cases reported | Missing | 99.8% | -
Malaria - Number of confirmed malaria cases reported | Zero | 99.5% | 1 | 2.1%
Immunization - OPV3 | Missing | 98.9% | 1 | 2.1% | District 7
Immunization - OPV3 | Zero | 100.0% | -
Multi-program - Penta 1st doses | Missing | 99.1% | 1 | 2.1% | District 21
Multi-program - Penta 1st doses | Zero | 100.0% | -
Total (all indicators combined) | Missing | 99.3% | 4 | 8.5%
Total (all indicators combined) | Zero | 99.6% | 4 | 8.5%
Interpretation of results: Indicator 1e
•
•
•
•
55. 55
Domain 2: Internal Consistency - Outliers
DOMAIN 2: INTERNAL CONSISTENCY OF REPORTED DATA
Indicator 2a: Identification of Outliers
Indicator 2a.1: Extreme Outliers (>3 SD from the mean), 2016
Program Area and Indicator | National score (%) | Districts with extreme outliers relative to the mean (No. | % | Name)
Maternal_Health - ANC 1st Visit | 0.2% | 1 | 2.1% | District 39
Immunization - 3rd dose DPT-containing vaccine | 0.0% | -
HIV_AIDS - Number of HIV+ persons in palliative care | 0.5% | 3 | 6.4% | District 9, District 38, District 41
Malaria - Number of confirmed malaria cases reported | 0.0% | -
Immunization - OPV3 | 0.4% | 2 | 4.3% | District 31, District 39
Multi-program - Penta 1st doses | 0.0% | -
Total (all indicators combined) | 0.2%
Interpretation of results - Indicator 2a1:
•
•
•
•
•
•
56. 56
Domain 2: Consistency over time
2b3: Consistency of 'General_Service_Statistics - OPD Total Visits' over time
Year: 2014
Expected trend: Increasing
Compare districts to: expected result
Quality threshold: 20%
National score (%): 100%
Number of districts with divergent scores: 5
Percent of districts with divergent scores: 38.5%
Names of districts with divergent scores: District 6, District 7, District 8, District 9, District 11
[Charts: forecasted vs. actual 'General_Service_Statistics - OPD Total Visits' for the year of analysis (forecast based on up to 3 preceding years), and the trend over time of OPD Total Visits, 2011–2014]
Interpretation of results - Indicator 2b3:
• This indicator is increasing over time (outpatient visits are increasing - something we were expecting given social mobilization for public health services).
• Comparison with the expected result (that the forecasted value equals the actual value for 2014) yields 5 districts with ratios that exceed the quality threshold of 20%: 3 fall below the acceptable range while 2 are above it.
• Errors are not systematic (e.g. all in one direction). Review district outpatient registers in the affected districts to confirm reported values.
57. 57
Domain 2: Consistency between related indicators
Indicator 2c: Internal Consistency - Consistency Between Related Indicators
Consistency between related indicators - ratio of two related indicators and districts with ratios significantly different from the national ratio *
2c1: Maternal Health comparison: ANC 1st Visit : IPT 1st Dose
Year: 2014
Expected relationship: equal
Compare districts with: national rate
Quality threshold: 10%
National score (%): 114%
Number of districts with divergent scores: 2
Percent of districts with divergent scores: 15.4%
Names of districts with divergent scores: District 5, District 6
[Chart: scatter plot of ANC 1st Visit events against IPT 1st Dose events for the year of analysis, districts compared to the national rate]
Interpretation of results - Indicator 2c1:
• Data seem pretty good - only District 5 has a largely discrepant value.
• IPT seems consistently lower than ANC1 - more pregnant women should be receiving IPT.
• A stock-out of Fansidar in Region 2 could explain the low number of IPT doses in District 5. Call the DHIO in this district to investigate.
• The national rate is 114% - most districts are close to this value. District 6 is performing well relative to the other districts but is 'discrepant' relative to the national rate - no follow-up needed.
59. 59
Domain 4: Consistency of population data - comparison of denominators in use in-country
Indicator 4b: Consistency of denominator between program data and official government population statistics
4b1: Comparing the official Live Births denominator to a program denominator, if applicable
Year: 2014
Quality threshold: 10%
National score (%): 106%
Number of districts with divergent scores: 4
Percent of districts with divergent scores: 30.8%
Names of districts with divergent scores: District 1, District 5, District 7, District 12
[Chart: scatter plot of the program denominator for Live Births against the official government denominator for Live Births]
Interpretation of results - Indicator 4b1:
• The program denominators in Districts 1, 7, and 12 seem too large - and too small in District 5. Review the growth rates used by the program to estimate intercensal yearly values for live births.
•
Some tools are still in existence while others are no longer in use. And now, yet another tool/framework is being presented to the audience. What does the DQR offer that these other tools did not?
What is the DQR?
The DQR Framework includes 3 components:
The data quality review process should start with regular, routine, monthly
review of data quality with feedback by each level of the health system. This process can be integrated, reviewing the quality of data from multiple programs.
As shown in the previous slide, there should be an annual review of data quality at national level preceding the Annual Review. This annual review can also be integrated.
Periodically, perhaps every 3 to 5 years, there should be in-depth reviews of the quality of data for particular programs such as immunization, MCH, HIV, and malaria.
Four domains of data quality are defined:
Completeness and timeliness
Internal consistency -- do the routine data agree with each other?
External consistency – do the routine data agree with survey findings?
Consistency of population data -- are the denominator estimates consistent with one another?
Accuracy: Measured against a reference and found to be correct
Completeness: Present, available, and usable
Timeliness: Up-to-date and available on time
This slide shows how to measure reporting performance to determine the extent to which data reports are appropriately available, complete, and timely.
Outliers = Deviation from the mean
The table shows moderate outliers for a given indicator. There are four identified moderate outliers. They are highlighted in red. Three of the districts have at least one occurrence of a monthly value that is a moderate outlier.
Nationally, this indicator is a percentage of values that are moderate outliers for the indicator. The numerator for the equation is the number of outliers across all administrative units [in this case, 4]. The denominator is the total number of expected reported values for the indicator for all the administrative units. That value is calculated by multiplying the total number of units (in the selected administrative unit level) with the expected number of reported values for one indicator for one administrative unit. In this case, we have 5 districts and 12 expected monthly reported values per district for one indicator, so the denominator is 60 [5 × 12]. Thus, about 6.7% are moderate outliers [4/60 = 0.0666 × 100, or 6.7 %].
Subnationally, see if you can calculate the number of outliers for each district. Count the districts where there are two or more moderate outliers among the monthly values for the district [1]. Divide by the total number of administrative units [1/5 = 0.20 × 100 = 20%].
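A minimal sketch of the two summary calculations above (an illustration in Python, not the DQR tool), using the counts from the slide 18 example.

n_districts = 5
months_per_year = 12
moderate_outliers_per_district = {"A": 1, "B": 2, "C": 0, "D": 1, "E": 0}

# National: outliers as a share of all expected monthly values (4 / (5 x 12) = 6.7%)
total_outliers = sum(moderate_outliers_per_district.values())
national_pct = total_outliers / (n_districts * months_per_year) * 100

# Subnational: share of districts with two or more moderate outliers (1 / 5 = 20%)
districts_flagged = sum(1 for n in moderate_outliers_per_district.values() if n >= 2)
subnational_pct = districts_flagged / n_districts * 100

print(f"National: {national_pct:.1f}% of monthly values are moderate outliers")
print(f"Subnational: {subnational_pct:.0f}% of districts have two or more moderate outliers")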
Mean of preceding three years (2010, 2011, and 2012) is 93,774 [(98,450 + 93,578 + 89,294)/3]
Ratio of current year to the mean of the past three years is 1.16 [108,459/93,774 ≈ 1.16].
The average ratio of 1.16 shows that there is an overall 16% increase in the service outputs for 2013 when compared to the average service outputs for the preceding three years of the indicator.
Subnationally, try to evaluate each district by calculating the ratio of the current year (2013) to the average of the previous three years (2010, 2011, and 2012). For example, the ratio for District A is 1.12 [32,377/28,878].
Then calculate the % difference between the national and district ratios for each district. For example, for District A:
|(District A Ratio − National Ratio) / National Ratio| = |(1.12 − 1.16)/1.16| ≈ 0.03 = 3.0%
The difference between the district ratio and the national ratio for District A is less than 33%. However, for District D there is a difference of approximately 44% between the district ratio and the national ratio.
To calculate this indicator subnationally, all administrative units whose ratios differ from the country's ratio by ±33% or more are counted. In this example, only District D has a difference greater than ±33%. Therefore, 1 of 5 districts (20%) has a ratio that is more than 33% different from the national ratio.
The annual number of pregnant women started on antenatal care each year (ANC1) should be roughly equal to the number of pregnant women who receive intermittent preventive therapy for malaria (IPT1) in ANC, because all pregnant women should receive this prophylaxis. First, we will calculate the ratio of ANC1 to IPT1 for the national level, and then for each district. At the national level, the ratio of ANC1 to IPT1 is about 1.18 [78,477/66,548].
At the subnational level, we can calculate the ratio of ANC1 to IPT1 and the % difference between the national and district ratios.
We see that there is one district (D) whose ratio of ANC1 to IPT1 (1.32) is well above the others. We also see that the % difference between the national and district ratios for District D is more than 10%.
Population-based surveys: Demographic and Health Survey (DHS); MICS, etc.
Indicator values are based on recall, referring to the period before the survey (such as the preceding 5 years)
Sampling error: confidence intervals
If the HMIS is accurately detecting all ANC visits in the country (not just those limited to the public sector), and the denominators are accurate, the coverage rate for ANC1 derived from the HMIS should be very similar to the ANC1 coverage rate derived from population surveys. However, HMIS coverage rates are often different from survey coverage rates for the same indicator.
At the national level:
The coverage rate from HMIS is 110%.
The coverage rate from the most recent population-based survey is 94%.
The ratio of the two coverage rates is: 1.17 [110%/94%].
If the ratio is 1, it means that the two coverage rates are exactly the same.
If the ratio is >1, it means that the HMIS coverage rate is higher than the survey coverage rate.
If the ratio is <1, it means that the survey coverage rate is higher than the HMIS coverage rate.
The ratio of 1.17 shows that the two coverage rates are fairly different; there is about a 17% difference between the two values.
At the subnational level, the ratio of the two coverage rates is calculated for each administrative unit. Districts with at least a 33% difference between their two rates are flagged. Districts C and D have more than a 33% difference between their two ratios.
This slide shows the ratio of the number of live births from official government statistics nationally for the year of analysis to the value used by the selected health program.
Calculate the ratio of subnational administrative unit 2014 live births to the value used by the selected health program; district B has a difference of 0.17 or 17%.
This chart shows that the trace-and-verify protocol starts with data at the level of service delivery. Data are then “traced” to the “intermediate aggregation level” (in this case a district), and then to the central level.
At the heart of the data quality (DQ) process are two important components: data verification (DV) and report performance.
Data verification is derived through quantitative comparison of recounted data and reported data.
The verification factor (VF) is calculated by dividing the recounted number by the reported number, giving a percentage.
*Facilitators: Ask participants, “What would 85% mean? How about 125%?”
Refer to Module 9: RHIS Performance Assessment, concerning the PRISM tools. The diagnostic tool also measures data quality for district and health facility levels.
Recounted data/reported data = VF (verification factor)
When the VF differs from 100% by more than ±10% (i.e. falls outside 90%–110%), the data are considered inaccurate.
This graph shows the verification factors for four indicators
First, we can see that there is a wide variation in the accuracy of these indicators. The area marked with red horizontal lines shows a margin of acceptability: plus or minus 10% of 100%, the global standard. However, individual programs can select their own ranges of acceptability, as deemed appropriate.
We also can see that, of the four indicators, three are outside the acceptable margins. [Ask participants:] What would you say about indicators 1 and 4? [After someone answers, CLICK here to animate.]
What about indicator 2? [After someone answers, CLICK here to animate.]
Ideally, we would see no under-reporting or over-reporting of data, with indicators as close to 100% as possible. [CLICK here to animate.]
Objective
To assess the adequacy of the information system to collect, compile, transmit, analyze, and use HMIS & program data
On this tab you will enter the parameters for the analysis. Data flow model, periodicity of reporting and the specific levels of the health system selected for the analysis.
1. Select Country: The Country selected will automatically be included in dashboards of results, as well as being used to calculate the UN population projection for Live Births.
2. Select year: This is the year of analysis, the year for which data will be obtained and analyzed.
3. Complete the data flow model for the Country HMIS (or Programme, depending on the scope of the DQR). Include all levels of the reporting system where data are collected, aggregated, and forwarded to the next higher level. The last box should indicate the 'National' level.
4. Select the level of the reporting system for which you are conducting your analysis, that is, the level for which metrics are calculated and compared. This is usually the level for which data are input, such as the district level.
5. Select the periodicity (i.e. how often the data are reported) for the level of analysis selected. This selection will configure the indicator data entry pages for the periodicity selected. Remember also to select the first period of the reporting year (input 10). The selection of the periodicity of reporting for the level of analysis will populate the drop down list in input 10.
6. Input the periodicity of reporting from health facilities. This is used in the evaluation of reporting performance from facilities (domain 1 - completeness and timeliness of reporting).
7. Input the periodicity of reporting from the 1st level of aggregation (usually the district). This is used in the evaluation of reporting performance from the 1st level of aggregation (domain 1 - completeness and timeliness of reporting).
8. Input the level of the reporting system for which you are inputting service output data. These are the indicator values by month or quarter. These data can be facility level (only rarely in the event that facility level data are entered into the computer), or district, or regional level depending on what aggregate level data are available at national level.
9. Input the level of the most recent population-based survey. In domain 3 - External comparison, routinely reported data values will be compared with survey values. The routine data will need to be aggregated to the level of the survey (typically the regional level) so that the values are comparable.
10. Enter the first period of the year of analysis. Depending on the periodicity of reporting from the level selected for analysis (in #5) the drop down list will provide the range of options. Select the first period (e.g. 1st quarter, the month of January etc.) from the drop down.
11. Enter the nature of facility reporting, either integrated (e.g. on the monthly form HMIS) or program-specific. Integrated reporting means the results from different health programs are all reported on the same form, and only that form is forwarded to the next level to satisfy reporting requirements for all health programs. Program-specific reporting means that health programs report to the next level separately, each program with its own set of reporting forms. Since it may be the case that reporting from health facilities is only partially integrated, selection of the type of reporting on the Input_basic_info tab will only hide or reveal the program-specific reporting data entry and results areas. The integrated reporting tab and result areas will always be available to enter information on reporting for the HMIS in general.
Input the relevant administrative units in column 2 on the Input_admin_units tab. These are the administrative units for the level of analysis selected on the Basic Information tab.
The DQR has a standard set of indicators from across program areas that are intended to provide a cross-cutting assessment of data quality. These are:
▪ Maternal health - Antenatal care 1st visit (ANC1)
▪ Immunization - DTP3/Penta3
▪ HIV/AIDS - ART coverage
▪ TB - Notified cases of all forms of TB
▪ Malaria - Confirmed malaria cases
However, the DQR is designed to accommodate any program areas and indicators. On the 'Program Areas and Indicators' tab select program areas and their associated indicators using the drop down lists provided. One primary indicator should be selected for each program area. The primary indicator is listed as #1 in the two spaces provided for each program area. The secondary indicator (#2) is only used for the Internal Consistency metric 'Comparison between related indicators'.
Drop down lists for program areas and indicators include the standard indicators used for the recommended implementation of the DQR, as well as a supplemental list of alternative indicators for each program area. Information on the core and alternative indicators can be found in the DQR Technical Guide (Module 3: Review of data quality through a health facility survey; Annex 1 - Recommended Indicators).
It is also possible to include user-defined program areas and indicators by selecting 'Other_specify' from the drop down list. Another field will appear in which the user-defined program area and/or indicator can be entered. Once entered, the program area and indicator names auto-populate the dashboards of results in the DQR.
Finally, a section for selecting the indicator type, either cumulative or current, is included. A cumulative indicator is one for which monthly values are added to the previous month's value to derive a running total (e.g. number counseled and tested for HIV). A current indicator is one where the current month's value updates or replaces the previous month's value (e.g. current on ART, where lost, stopped, transferred out, or died are all subtracted from the total, new patients are added, and those counted this month were most likely also counted last month). The default value is cumulative since most indicators are cumulative.
To judge the quality of data using the metrics in the DQR it is necessary to define benchmarks of quality with which to compare the results. WHO has recommended thresholds for each metric which can be found on the 'Quality Thresholds' tab. Often, global standards are not relevant in a given country if the information system is immature, or is undergoing reform. In cases where the recommended thresholds are inappropriate, user-defined thresholds can be supplied by entering the values in column 2 on the 'Quality Thresholds' tab, which will override the recommended thresholds.
On the 'Input_reports_received' tab enter the information required on completeness and timeliness of reporting from subnational units. Depending on the data flow model input in the Basic Information tab, you will need to enter data on the number of reports received for each level, and historically (3 prior years). Also required is information on the number of reports received by the deadline of reporting for the year of analysis. Please ensure to select the appropriate periodicity of reporting on the Basic Information tab for facility reporting and from the next higher level (#7) so the DQR tool will know the number of expected reports in the calculation of completeness of reporting.
The DQR evaluates the adequacy of population data (i.e. denominators) used to calculate coverage rates for performance monitoring in 'Domain 4 - Consistency of population data'. Denominator data are also required to compute rates for comparisons of routine data to population-based survey data ('Domain 3 - External consistency'). There are two tabs in which input of population data is required, one for each domain. On the tab 'Input_Standard_Populations' (Figure 12) enter the populations from Official Government Statistics (e.g. from the National Statistics Bureau) by the level selected for analysis (e.g. district) for Live Births, Expected Pregnancies, and Children < 1 year of age (columns F-H). These denominators will be compared to the same populations used by health programs, if applicable. If health programs are using their own estimates of these populations, enter the values by the level selected for analysis into the appropriate cells (columns I-K).
To evaluate 'Internal consistency - Consistency of indicator data over time' (Domain 2) you will need to enter annual values for the level selected for analysis for the DQR primary indicators (selected on the 'Program Areas and Indicators' tab). Annual values for the indicators are required for the three years prior to the analysis year. The annual values for the prior years need to be pasted into the appropriate columns for each of the indicators, while the values for the year of analysis are aggregated automatically by the DQR tool once the monthly values have been input into the indicator data tabs.
Paste monthly (or quarterly) data by level selected for analysis into the indicator data tabs. The indicator names should appear automatically at the top of each of the indicator data tabs once the indicators are selected on the 'Program Areas and Indicators' tab. The indicator data tabs are named according to the following logic: PA1 is Program Area #1, while Ind1 is the primary indicator for the program area. Each Program Area selected on the 'Program Areas and Indicators' tab has two indicators, a primary and a secondary indicator. The primary indicator is the indicator for which DQR metrics are calculated. The secondary indicator is only used for the 'Domain 2 - Internal consistency' evaluation of the consistency between related indicators. Furthermore, PA2 is Program Area #2, which has Ind1 and Ind2, etc.
Please ensure that the periodicity of reporting for the level of analysis is indicated in #5 on the Basic Information tab. This selection will configure the Indicator Data tabs for 12 columns for monthly reporting, and 4 columns for quarterly reporting.
In Domain 2 - Internal Consistency of Reported Data, extreme and moderate outliers among sub-national unit values are identified for monthly (or quarterly) reporting. These values are highlighted on the Input Indicator Data tabs by color coding: moderate outliers are noted by a stippling pattern and shaded gray, and extreme outliers are shaded pink (Figure 16). These values are summarized, and the sub-national units where they occur are identified, in the summary tabs for Domain 2.
The tab 'Summary_dashboard' displays results for all DQR domains and metrics in summary form, without detail or graphics. The standard form for results is the value of the metric plus the number and percent of subnational units which do not attain the established benchmark for the metric. The subnational units which do not attain the standard are listed on the domain-specific dashboards.
Add interpretations in the text box to facilitate action planning using assessment results
Completeness of indicator data - measures the percentage of missing or zero values reported from subnational units
▪ Identification of extreme outliers - monthly (or quarterly) values entered for subnational units selected as the level of analysis are examined for the presence of extreme outliers, i.e. values that are ≥ 3 standard deviations from the mean of monthly (or quarterly) values entered for subnational units. For each primary indicator entered on the 'Program Areas and Indicators' tab, the number and percentage of values that are extreme outliers is calculated and the subnational units identified.
Consistency over time - The plausibility of reported results for selected programme indicators is examined in terms of the history of reporting of the indicators. Trends are evaluated to determine whether reported values are extreme in relation to other values reported during the year or over several years (Figure 22).
For this metric, the annual value of primary indicators for the year of analysis (aggregated from monthly or quarterly values entered for subnational units) is compared to the mean of annual values for the three years preceding the year of analysis. Subnational units whose ratio of the annual value for the year of analysis to the mean of the annual values from the 3 preceding years diverges from the expected ratio (or national ratio) by more than the recommended (or user-defined) quality threshold are identified, and the number and percent of such subnational units is calculated.
Consistency between related indicators - Programme indicators which have a predictable relationship are examined to determine whether, in fact, the expected relationship exists between those indicators. In other words, this process examines whether the observed relationship between the indicators, as depicted in the reported data, is that which is expected (Figure 23).
For this metric, annual aggregate values for primary indicators are compared to annual aggregate values for secondary indicators input into the program area specific Indicator Data tabs. A ratio of the primary indicator to the secondary indicator is calculated and compared to the national ratio of the same two indicators, or to the expected value of the ratio of the two indicators. The expected value is the value of the ratio when the two indicators are equal, or for a ratio, the value of 1.
Data for recent population-based surveys are entered in the 'External_Data_Sources' tab. Routine data entered for primary indicators are aggregated to the administrative units of the survey as indicated on the 'Survey_Mapping' tab. The routine data value for the appropriate survey administrative unit is then divided by the population value, also aggregated to the survey administrative unit, to derive a rate comparable to the survey value for the same administrative unit. The ratio of the routine value to the survey value is then calculated. Subnational units with a ratio greater than 1 + the recommended (or user-defined) quality threshold (or less than 1 - the quality threshold) are flagged as potential data quality problems.
In the graphics in the indicator-specific dashboards and the Domain 3 - External Consistency dashboard, the routine values are depicted as bars, while the survey values are depicted as points (a triangle) with error bars based on the standard error of the survey estimate (entered in the 'External_Data_Sources' tab) depicting the range of acceptable error between the survey and the routine values.
The level of congruence between the denominators from official government sources and those used by Health Programs is evaluated by calculating the ratio between the two values for subnational units. Subnational units with a ratio greater than 1 + the recommended (or user-defined) quality threshold, or less than 1 - the quality threshold, are flagged as potential data quality problems. The denominator-specific dashboards in the 'Domain 4 - External Consistency of Population Data' dashboard provide a scatter plot depicting the relationship between subnational unit values of the two denominators (Figure 26). Points falling outside the dashed gray lines indicate values that exceed the quality threshold.
Enter Country and Year in the Yellow boxes
By design, the data are collected in the standard CSPro Data Verification application, which can be used for data capture in the field on tablet computers, or for entering paper-based results on a desktop back in the office after the assessment.
Once the data are in the database and cleaned, a ‘batch file’ is available to compile the relevant indicators for the analysis. Run the batch file in CSPro to compile the indicators, then export the data as a text file. From there the data can be pasted into the Excel tool.
Paste the data from CSPro (open the text file in Excel) into the DV Chartbook on the Indicators tab.
NB - there are facility- and district-level versions of the CSPro data entry applications and batch files. There are also facility- and district-level versions of the Chartbook.
Enter the country specific information on stratifiers – subnational units, facility types, management authority and urban/rural.
Results are presented for all program level indicators and for each of the program indicators individually. General facility information page gives information on the availability of services in the facilities surveyed.
Results are provided for the national level (that is, all subnational units taken together), and are also broken down by subnational unit (e.g. region or district).
There is more detail on data quality metrics in the indicator-specific tabs (e.g. % of facilities over/under-reporting by >10%).
System assessment indicators (qualitative indicators) are color-coded for ease of interpretation. There are also 'national' and 'subnational' results tabs.