The document discusses epidemiologic measures used to quantify disease in populations. It introduces incidence and prevalence as the key measures. Incidence refers to the frequency of new cases occurring in a population over time. Prevalence refers to all existing cases in a population at a given point in time, whether recently or long-ago diagnosed. Quantifying disease through measures like incidence and prevalence allows epidemiologists to study how diseases affect populations.
Incidence refers to the rate of new cases of a disease occurring in a population over a specified period of time. It is calculated by dividing the number of new cases (the numerator) by the total person-time at risk (the denominator) and multiplying by a constant such as 1,000 or 100,000 to express the rate per standard population size. The denominator counts the time each person in the population is observed and disease-free. Incidence provides information about the risk of developing disease and is used to compare disease burden between populations or time periods.
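The person-time calculation described above can be sketched in a few lines; the follow-up times and case count below are purely illustrative, not data from any study mentioned here.

```python
# Hypothetical sketch: incidence rate = new cases / total person-time at risk.
# Five subjects followed until disease onset or end of follow-up (in years).
follow_up_years = [5.0, 5.0, 2.5, 4.0, 1.5]  # disease-free observation time per person
new_cases = 2                                # assumed: two subjects developed disease

person_time = sum(follow_up_years)           # 18.0 person-years
incidence_rate = new_cases / person_time     # cases per person-year

# Multiply by a constant to report per 1,000 person-years
per_1000 = incidence_rate * 1000
print(round(per_1000, 1))  # 111.1
```

Note that each person contributes only the time during which they are both observed and still at risk, which is why the denominator shrinks when subjects become cases or are lost to follow-up.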
The document defines key epidemiological measures used to describe disease occurrence and impact, including prevalence, incidence, rates, and ratios. It provides examples of how to calculate and interpret these measures. The document concludes that prevalence describes the current disease burden, while incidence provides information on the risk of developing disease over time and is thus better suited for etiological studies.
Introduction to epidemiology and its measurements, by wrigveda
Epidemiology is defined as the study of the distribution and determinants of health-related states or events in specified populations. It has three main components - distribution, determinants, and frequency. Measurement of disease frequency involves quantifying disease occurrence and is a prerequisite for epidemiological investigation. Rates, ratios, and proportions are key tools used to measure disease frequency and distribution. Incidence rates measure new cases over time while prevalence rates measure existing cases. These measurements are essential for describing disease patterns, formulating hypotheses, and evaluating prevention programs.
This lecture introduces epidemiology by discussing its importance through achievements like smallpox eradication. It defines epidemiology as the study of health-related states in populations to prevent and control health problems. The lecture describes John Snow's contribution by showing cholera is spread through contaminated water before germ theory. It outlines the epidemiological approach as asking questions to make comparisons, and aims of epidemiology as describing disease distribution, identifying causes, and providing data for prevention planning.
The document discusses approaches for studying disease etiology, including observational studies like ecological, cohort, and case-control studies as well as randomized trials. It also examines how evidence for a causal relationship between a factor and disease has been established through a sequence of studies, from initial clinical observations to randomized trials. Key figures in establishing causal relationships for various diseases are also mentioned, such as Alton Ochsner's work linking smoking to lung cancer and Barry Marshall and J. Robin Warren's discovery of H. pylori's role in peptic ulcers. Guidelines for determining causation, such as those from the Surgeon General and Bradford Hill, are also reviewed.
Epidemiology lecture 2: measuring disease frequency, by INAAMUL HAQ
This document discusses measuring disease frequency in epidemiology. It defines key terms like incidence, prevalence, population at risk, and rates. Incidence refers to new cases in a specified time period, while prevalence looks at total current cases. Prevalence can be point prevalence (at a point in time), period prevalence (over a specified time period), or lifetime prevalence. The document provides examples of calculating prevalence from population data and discusses how prevalence is used to understand disease burden and plan health services.
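A point-prevalence calculation of the kind the document describes can be sketched as follows; the population and case counts are assumed for illustration.

```python
# Hypothetical sketch: point prevalence = existing cases / population at one date.
population = 12_000     # assumed number of people in the town on the survey date
existing_cases = 300    # everyone with the disease on that date, new or long-standing

point_prevalence = existing_cases / population
print(f"{point_prevalence:.1%}")  # 2.5%
# Equivalently 25 per 1,000 population on the survey date.
```

Unlike incidence, the numerator here mixes new and old cases, which is why prevalence reflects total disease burden rather than risk.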
This document provides an overview of epidemiology, including its basic concepts, principles, scope, and measurement tools. Some key points:
- Epidemiology is the study of disease distribution and determinants in populations, and is used to prevent and control health problems. It describes disease patterns and identifies risk factors.
- Epidemiological principles are applied in various areas like clinical research, disease prevention, and health services evaluation. Measurement tools include rates, ratios, and proportions to quantify disease frequency and burden.
- The scope of epidemiology includes measuring mortality, morbidity, disability, births, risk factors, and assessing health needs in populations. Different study designs are used to investigate disease etiology and evaluate interventions.
An overview of standardization, a key statistical technique in epidemiology, is introduced. The process and application of both direct and indirect standardization for improving the validity of comparisons between populations are described.
Sensitivity & Specificity (Andy Ni), by ayi Furqon
This document discusses sensitivity and specificity in diagnostic testing. Sensitivity measures the proportion of true positives identified by a test, while specificity measures the proportion of true negatives. A test with high sensitivity and specificity is more accurate at detecting a disease. The document provides examples of calculating sensitivity, specificity, positive predictive value, and negative predictive value from 2x2 contingency tables. It also discusses how prevalence impacts predictive values and how sensitivity and specificity relate to type I error and statistical power.
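The 2x2-table calculations the document describes can be sketched directly; the cell counts below are hypothetical, chosen only to make the arithmetic visible.

```python
# Hypothetical 2x2 table for a diagnostic test (illustrative counts):
#                 Disease +   Disease -
# Test positive       90          40      (TP, FP)
# Test negative       10         860      (FN, TN)
tp, fp, fn, tn = 90, 40, 10, 860

sensitivity = tp / (tp + fn)  # true positives among the diseased
specificity = tn / (tn + fp)  # true negatives among the healthy
ppv = tp / (tp + fp)          # P(disease | test positive)
npv = tn / (tn + fn)          # P(no disease | test negative)

print(round(sensitivity, 3), round(specificity, 3))  # 0.9 0.956
print(round(ppv, 3), round(npv, 3))                  # 0.692 0.989
```

Sensitivity and specificity depend only on the columns (disease status), while the predictive values also depend on how common the disease is, which is the prevalence effect the document mentions.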
This document provides an overview of measuring the burden of disease. It discusses the evolution of summary measures of population health, including health expectancies like HALE and QALE, and health gaps like DALYs. The Global Burden of Disease study is introduced, which developed the DALY measure. DALYs combine years of life lost to premature mortality and years lived with disability. The document explains how DALYs are calculated, including incorporating social values through disability weights, age weights, and time discounting. Criticisms of the GBD methodology and DALY measure are also summarized.
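In the simplified formulation (ignoring age weights and time discounting, which the GBD methodology adds), DALY = YLL + YLD; a minimal sketch with assumed inputs:

```python
# Minimal DALY sketch with hypothetical inputs, using the simplified
# DALY = YLL + YLD form without age weighting or discounting.
deaths = 50
remaining_life_expectancy = 30.0              # assumed average years lost per death
yll = deaths * remaining_life_expectancy      # years of life lost = 1500.0

incident_cases = 400
disability_weight = 0.2                       # 0 = full health, 1 = death (illustrative)
avg_duration_years = 5.0
yld = incident_cases * disability_weight * avg_duration_years  # years lived with disability

daly = yll + yld
print(daly)  # 1900.0
```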
Epidemiology is the study of the distribution and determinants of health-related states or events in populations and the application of this study to control health problems. The basic measurements used in epidemiology include rates, ratios, and proportions to describe the occurrence of mortality, morbidity, disability, and other disease attributes in populations. Rates express the frequency of events over time, proportions express the relationship between parts and the whole, and ratios compare two rates or quantities. These measurements are essential tools for epidemiologists to investigate disease causation, describe population health status, and evaluate interventions.
A great presentation from a well-versed friend in research and EBM, Dr Yaser Faden.
This is a simple introduction to study design with an accompanying workshop to simplify the different types of research study designs.
Measures of mortality provide important information for epidemiological studies. They include crude death rate, specific death rates, case fatality rate, proportional mortality rate, and survival rate. Standardized rates allow for comparisons between populations with different age compositions. Some challenges include incomplete reporting, inaccurate information, and non-uniformity across locations. However, mortality measures are useful for explaining trends, prioritizing health issues, designing interventions, and assessing public health programs.
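Two of the mortality measures listed above can be sketched with assumed numbers; note the different denominators, which is what distinguishes them.

```python
# Illustrative sketch (hypothetical numbers) of two common mortality measures.

# Crude death rate: all deaths over the whole mid-year population.
deaths_all_causes = 800
mid_year_population = 100_000
crude_death_rate = deaths_all_causes / mid_year_population * 1000  # per 1,000

# Case fatality rate: deaths from a disease among diagnosed cases of that disease.
disease_deaths = 4
disease_cases = 100
case_fatality_rate = disease_deaths / disease_cases * 100  # percent

print(round(crude_death_rate, 1), round(case_fatality_rate, 1))  # 8.0 4.0
```

The crude rate describes the population's overall mortality experience, while the case fatality rate describes the severity of one disease among those who have it.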
This document discusses various measures used to quantify disease frequency in epidemiology. It describes measures of morbidity including incidence, prevalence, and disability rates. Incidence measures new cases over time while prevalence measures total current cases. Disability rates quantify limitations in activities. Measures of mortality are also presented, such as crude death rate, case fatality rate, and standardized mortality ratio. Standardization adjusts for differences in population characteristics to allow valid comparisons. Overall, the document provides an overview of key epidemiological metrics for quantifying disease burden and guiding public health efforts.
This document discusses various methods for measuring disease frequency and occurrence in populations, including rates, ratios, proportions, prevalence, and incidence. It provides examples of how to calculate rates of prevalence and incidence. Prevalence is a measure of existing cases at a point in time, while incidence describes new cases occurring over time. Both are important for epidemiological research, disease surveillance, and health planning.
History of Epidemiology for Graduate and Postgraduate Students, by Tauseef Jawaid
This document provides a summary of the history of epidemiology from ancient times to the present. It describes key figures and discoveries such as Hippocrates' association of disease with environment, Jenner's pioneering of vaccination, Snow's mapping of a cholera outbreak to a contaminated well. More recent developments discussed include the founding of the U.S. Public Health Service, landmark studies like the Tuskegee syphilis study, and future challenges of globalization and overcrowding facilitating disease spread.
This document discusses various measures used to quantify drug use and outcomes in pharmacoepidemiological studies. It describes prevalence as the proportion of people with a disease or exposed to a drug at a given time. Incidence is the number of new cases within a time period, while incidence rate is the number of new cases per unit of person-time at risk. Drug use is commonly measured by the number of prescriptions, units of drug dispensed, defined daily doses (which estimate the average maintenance dose), and prescribed daily doses (the average dose actually prescribed). Adherence is often measured through biological assays, pill counts, pharmacy records, and patient interviews.
At the end of this session, the students shall be able to:
- Define cause
- Define association
- Define correlation
- Describe the types of association
- Apply additional criteria for judging causality
- Differentiate between association and causation
This document contains 26 slides presented by Dr. Rizwan S A on cohort studies. It defines cohort studies as prospective longitudinal studies that follow healthy populations over time to determine the causes of diseases. Key aspects covered include classifying cohort studies as prospective, retrospective or combined; describing the elements of cohort studies such as selecting and following subjects, measuring exposure and outcomes, and analyzing results using measures like relative risk, risk difference and attributable risk. Examples of famous cohort studies on smoking, heart disease and oral contraceptives are also provided.
1. Epidemiology is the study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to control health problems. Descriptive epidemiology aims to describe patterns of disease, while analytical epidemiology aims to identify risk factors.
2. Key approaches in epidemiology include observational studies like cross-sectional and case-control studies, as well as experimental studies like randomized controlled trials. Important concepts include rates, ratios, and proportions used to describe disease frequency and distribution.
A cohort study follows groups of individuals (the cohorts) over time to examine how exposures affect outcomes. Key features include:
1. Cohorts are identified prior to the outcome and followed prospectively to determine disease frequency.
2. Cohort studies directly estimate relative risks by comparing disease incidence between exposed and unexposed groups.
3. They provide data on disease progression, risk factors, and natural history that can inform prevention strategies by identifying modifiable risk exposures.
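The effect measures named in point 2 can be sketched from hypothetical cohort counts; the numbers below are illustrative, not from any study cited here.

```python
# Hedged sketch of cohort-study effect measures (hypothetical counts).
# Exposed:   200 people, 30 develop the disease during follow-up.
# Unexposed: 400 people, 20 develop the disease during follow-up.
risk_exposed = 30 / 200    # cumulative incidence in the exposed:   0.15
risk_unexposed = 20 / 400  # cumulative incidence in the unexposed: 0.05

relative_risk = risk_exposed / risk_unexposed           # how many times riskier
risk_difference = risk_exposed - risk_unexposed         # attributable risk
attributable_fraction = risk_difference / risk_exposed  # share of exposed cases due to exposure

print(round(relative_risk, 2), round(risk_difference, 2))  # 3.0 0.1
```

Because the cohort design observes incidence in both groups directly, the relative risk is computed outright rather than approximated by an odds ratio, as it would be in a case-control study.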
The document discusses various measures used to quantify disease occurrence and mortality rates. It defines key terms like prevalence, incidence, rates, ratios and standardized rates. Prevalence is a snapshot of disease at a point in time while incidence describes new cases occurring over time. Crude rates are calculated for the entire population while specific rates are for subpopulations. Standardized rates allow comparison between populations by adjusting for differences in age or other distributions. Methods like direct and indirect standardization are used to derive adjusted rates. Mortality data from vital statistics provides important public health indicators but has issues like accuracy of documentation and changing disease classifications over time.
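Direct standardization, as described above, applies each population's age-specific rates to a shared standard population; a sketch with assumed age strata and rates:

```python
# Direct standardization sketch (hypothetical data): apply each population's
# age-specific death rates to one shared standard population, then compare.
standard_pop = {"0-39": 60_000, "40-64": 30_000, "65+": 10_000}

# Assumed age-specific death rates (deaths per person per year) for two towns.
rates_town_a = {"0-39": 0.001, "40-64": 0.005, "65+": 0.04}
rates_town_b = {"0-39": 0.002, "40-64": 0.006, "65+": 0.03}

def directly_standardized_rate(rates, standard):
    """Expected deaths in the standard population, divided by its size."""
    expected_deaths = sum(rates[age] * n for age, n in standard.items())
    return expected_deaths / sum(standard.values())

dsr_a = directly_standardized_rate(rates_town_a, standard_pop)
dsr_b = directly_standardized_rate(rates_town_b, standard_pop)
print(round(dsr_a * 1000, 2), round(dsr_b * 1000, 2))  # 6.1 6.0 per 1,000, age-adjusted
```

Because both adjusted rates refer to the same standard age structure, the comparison is no longer confounded by the towns' different age compositions, which is the point of the technique.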
"The study of the distribution and determinants of health-related states in specified populations, and the application of this study to the control of health problems."
Standardization of rates, by Dr. Basil Tumaini
Standardization of rates by Dr. Basil Tumaini, presented during the residency at Muhimbili University of Health and Allied Sciences, Epidemiology class
The document discusses different types of epidemiological studies, including descriptive studies like case reports and case series that focus on person, place and time to create hypotheses. Analytical studies like case-control and cohort studies are used to test hypotheses by being either observational or interventional. Randomized controlled trials are the gold standard for comparing new interventions. Observational analytical studies include cross-sectional, cohort and case-control designs, while interventional analytical studies are clinical trials. The appropriate study design depends on the research goals and objectives.
Epidemiology is a basic discipline essential to both clinical and community medicine. It also helps to develop a way of thinking about health and disease.
Carl Koppeschaar: Disease Radar: Measuring and Forecasting the Spread of Infe..., by Flávio Codeço Coelho
Communication & recruitment: Sander van Noort, Marijn de Bruin. Data analysis: Sander van Noort.
5th International Disaster and Risk Conference (IDRC 2014), Integrative Risk Management: The role of science, technology & practice, 24-28 August 2014 in Davos, Switzerland.
This document summarizes a research article about the Covid-19 pandemic and management strategies for businesses and the economy. It discusses how different countries adopted different strategies to reduce health and economic impacts, with some strategies being more effective than others. It also analyzes various management tools that could help avoid worse economic situations, such as scenario analysis, risk management, and data analysis. The conclusion is that observing best practices from countries with lower mortality rates can help institutions choose better strategies, and that a balance between health and economic measures must be guided by science. Management strategies and tools can help guide the response and recovery process.
The Trend and Prone Areas of Cholera Outbreaks: A Review of the Cholera Line L..., by ROBERTKOGI
This document summarizes a study on the trend and prone areas of cholera outbreaks in the Ho municipality of Ghana from 2011 to 2015. The study reviewed cholera surveillance data from the line list form reported at the Ho municipal health directorate during this period. Key findings include:
- A total of 100 confirmed cholera cases were reported, with males accounting for 62.6% of cases. The overall case fatality rate was 4.2%.
- The most affected age group was 31-40 years, accounting for 23% of cases. The mean age was 35.8 years.
- Most cases occurred in April, May, and September. The number of cases varied each year but was highest in 2011.
Principles and Methods of Epidemiologic Study, by DugoGadisa
This document provides an introduction to epidemiology and biostatistics. It defines epidemiology as the study of patterns of health and illness in populations, while biostatistics is the application of statistical methods to biological and health-related data. The document then discusses several key epidemiological concepts such as incidence, prevalence, mortality rates, and measures used to describe disease frequency in populations.
The document provides a brief overview of computational epidemiology:
It discusses the history and basic concepts of computational epidemiology, from early mathematical models of diseases like smallpox and cholera to modern networked and data-driven approaches. Computational epidemiology uses mathematical and computational methods to study disease transmission and inform public health responses to epidemics. The field aims to attract computing and data scientists to help address open problems through frameworks like graphical dynamical systems.
Innovation: managing risk, not avoiding it - reportbis_foresight
This report discusses innovation and risk from the perspective of the UK Government Chief Scientific Adviser. It argues that innovation is essential for economic growth but also carries risks. The report aims to help policymakers make better informed decisions about governing risks from innovation. It provides perspectives from different disciplines and case studies on topics like GM crops and financial services regulation to illustrate how risk decisions can significantly impact outcomes. The report invites public debate on applying principles of risk governance to local, national, European and global innovation decisions.
This document provides an introduction to epidemiology. It defines key epidemiological concepts like disease, health, and what epidemiology studies. Epidemiology examines the distribution and determinants of disease in populations. It describes who gets sick and why by studying both sick and healthy individuals. The document outlines John Snow's study of a cholera outbreak in London and how he used epidemiological methods to determine the water source was the cause. Descriptive epidemiology examines person, place and time factors to describe disease patterns, while analytical epidemiology tests hypotheses about causes using exposures and effects. The epidemiological triangle of host, agent, and environment is also introduced to frame the study of disease causation.
Epidemiology, as the applied instrument of public health interventions, can provide much needed information on which a rational, effective, and ?exible policy for the management of disasters can be based. In particular, epidemiology provides the tools for rapid and effective problem solving during public health emergencies, such as natural and technologic disasters and emergencies from terrorism.
22. TCI Climate of the Nation Flagship Report 2012Richard Plumpton
This document summarizes the findings of a report on Australian attitudes toward climate change in 2012. It was conducted through focus groups and surveys between April and May 2012, a time of highly politicized debate around climate change policies in Australia. The research found that Australians were uncertain about the science of climate change, unconvinced by carbon pricing solutions due to fears over rising costs of living, and had lost confidence in experts and governments on the issue. However, attitudes remained fluid and could still be influenced on both the reality and solutions regarding climate change.
The document provides background information and instructions for multiple assignments related to epidemiology principles and concepts. It includes descriptions of historical figures who advanced epidemiology, instructions for analyzing a 1940 outbreak investigation, explanations of key epidemiology terms and concepts, and directions for calculating disease rates and assessing diagnostic tests.
This document discusses mathematical models for infectious disease outbreaks. It begins by defining a mathematical model and explaining how models are used in fields like epidemiology. It then discusses the R0 value, which indicates how infectious a disease is, and shows how this can be used in simple models to predict disease spread. The document focuses on the 2001 UK foot and mouth disease epidemic, describing the outbreak and how different models were used to understand disease spread and inform control strategies like vaccination rings. It emphasizes that while models provided guidance, political and behavioral factors also influenced the real epidemic trajectory.
“The Experimental Child”: Mental and Social Consequences for Children and Fam...Université de Montréal
Abstract
Not only is the coronavirus crisis a natural laboratory of stress offering social psychiatrists a unique historical opportunity to observe its impact on entire populations around the world, but the responses to the crisis by international health authorities, such as the WHO, along with national and local educational institutions and health care and social services, are creating an unprecedented and unpredictable environment for children and youth. This hostile new environment for growth and development is marked by the sudden and unpredictable imposition of confinement and social isolation, cutting off or limiting opportunities for the development of cognitive abilities, peer relationships, and social skills, while exposing vulnerable children and youth to depriving, negligent, or even abusive home environments.
For this reason, this crisis has been renamed a syndemic, encompassing two different categories of disease—an infectious disease (SARS-CoV-2) and an array of non-communicable diseases (NCDs). Together, these conditions cluster within specific populations following deeply-embedded patterns of inequality and vulnerability (Horton, 2020). And children are the most vulnerable population around the world. The impact on children is part of a cascade of consequences affecting societies at large, smaller communities, and the multigenerational family, all of which impinge on children and youth as the lowest common denominator (Di Nicola & Daly, 2020).
This exceptional set of circumstances—in response not only to the biomedical and populational health aspects but also in constructing policies for entire societies—is creating an “experimental childhood” for billions of children and youth around the world. With its commitment to the social determinants of health and mental health, notably in light of the monumental Adverse Childhood Events (ACE) studies (Felitti & Anda, 2010), social psychiatry and global mental health in partner with child and family psychiatry and allied professions must now consider their roles for the future of these “experimental children” around the world. The parameters for observing the conditions of this coronavirus-induced syndemic in the family and in society, along with recommendations for social psychiatric interventions, and prospective paediatric, psychological, and social studies will be outlined.
Keywords: children & families, coronavirus syndemic, ACE Study, confinement, social isolation
Discussion 1 REPLYDescription The source I found w.docxduketjoy27252
This document discusses a source used to support the author's thesis about implementing safe needle disposal programs. The source is from the EPA and discusses the health risks of improper needle disposal. The author chose this source because it is from a credible government agency and supports their argument. Using a reliable government source helps provide credibility.
epidemiology with part 2 (complete) 2.pptAmosWafula3
This document provides an overview of epidemiology. It begins by defining epidemiology as the study of what falls upon populations in terms of health and disease. A modern definition is provided that describes epidemiology as studying the distribution and determinants of health states in populations.
The objectives and purposes of epidemiology are then outlined, which include describing disease distribution and magnitude, identifying risk factors, providing data for prevention/control programs, and recommending interventions. Key epidemiological terms like incidence, prevalence, endemic, epidemic, and pandemic are also defined. Descriptive and analytical study designs commonly used in epidemiology like cross-sectional and case-control studies are described. The document concludes by contrasting the approaches of epidemiology versus clinical medicine
This chapter introduces communicable diseases and their epidemiology in Ethiopia. It defines key epidemiological terms used to describe diseases. Communicable diseases pose a major health burden in Ethiopia. Many factors contribute to their transmission, including poverty, poor sanitation and lack of access to health care. The major communicable diseases affecting Ethiopia are described.
This document is a manual published by the World Health Organization in 1997 on vector control methods for use by individuals and communities. It contains 10 chapters that describe the biology, public health importance, and control measures for various disease vectors, including mosquitoes, tsetse flies, triatomine bugs, fleas, lice, ticks, mites, cockroaches, houseflies, freshwater snails, and cyclops. For each vector, the manual provides details on its life cycle, disease transmission, and recommends methods for personal protection as well as community-based control strategies.
The 10-step approach to outbreak investigations involves:
1) Identifying an investigation team and resources.
2) Establishing the existence of an outbreak.
3) Verifying the diagnosis, constructing a case definition, and finding cases systematically.
Descriptive epidemiology is then used to develop hypotheses, which are evaluated through additional studies if needed, before implementing control measures, communicating findings, and maintaining surveillance to confirm the outbreak has ended. Being systematic and following these steps is key to determining the source and controlling outbreaks.
According to a new assessment by the UN Food and Agriculture Organization and Famine Early Warning Systems Network, around 731,000 Somalis face acute food insecurity and 2.3 million more are at risk. This brings the total number of people in need of humanitarian assistance to 3 million. Malnutrition rates remain high, with nearly 203,000 children acutely malnourished. The humanitarian situation has improved in some areas due to above average rainfall and increased aid, but concerns remain for 2015. The humanitarian response plan requests $863 million to address ongoing needs and prevent a major crisis from undoing recent peace and state building progress in Somalia.
This document discusses communicable diseases. It defines communicable diseases as diseases that can spread from one person to another through various modes of transmission like air, water, food, or contact. Some common communicable diseases mentioned include influenza, polio, typhoid, measles, mumps, chickenpox, tuberculosis, and AIDS. It also discusses immunity and how the body develops immunity to diseases either naturally after suffering from an illness or artificially through vaccination. Preventing the spread of communicable diseases requires measures like maintaining hygiene, immunization, and promptly treating illnesses.
This document outlines the Canadian Nurses Association's position on primary health care. It believes primary health care is integral to improving health outcomes for Canadians and that its principles, such as accessibility, health promotion, and intersectoral collaboration, are the most effective way to provide equitable healthcare. The CNA also believes primary health care and nursing are closely connected, and nursing standards and education should be grounded in primary health care principles. Adopting a primary health care approach could help address rising healthcare costs and improve Canada's performance on health indicators relative to other countries.
This document provides an overview of general nutrition concepts. It defines key terms like food, nutrition, diet, and malnutrition. It outlines the six major nutrients - carbohydrates, proteins, fats, vitamins, minerals, and water. The document discusses dietary guidelines and food groups. It explains that human beings need food to provide energy for essential physiological functions like respiration, circulation, digestion, metabolism, maintaining body temperature, growth, and repair of tissues. The most vulnerable groups who require adequate nutrition are infants, young children, pregnant women, and lactating mothers.
The document outlines a road map to accelerate HIV prevention efforts to meet global targets of reducing new HIV infections by 75% by 2020. It finds that while progress has been made, declines in new infections have been too slow, with only 1.7 million new infections in 2016, an 11% decline since 2010. Of 25 focus countries, only 3 saw over 30% declines, while 8 had no decline or increases. No country met the 2015 target of 50% reduction. Faster progress is needed to avoid increased treatment costs and continued mother-to-child transmission programs. The road map proposes intensified prevention programs, especially for adolescent girls, young women and key populations.
This document discusses the key ethical issues that arise in public health surveillance programs. It begins with a brief history of public health surveillance and definitions of key terms. The main ethical problem discussed is the potential conflict between individual interests/rights and collective interests. While clinical ethics focuses on individual physician-patient relationships, public health ethics must consider the broader community. Some argue the ethics of public health and clinical practice are distinctly different given this shift from individual to collective interests. The document examines how tools and checklists can help evaluate the ethical acceptability of surveillance programs.
This document provides an overview of planning and management for health extension workers. It defines management as a process of reaching organizational goals through people and resources. The key functions of management are planning, organizing, staffing, directing, and controlling. Planning involves setting objectives and strategies, while evaluation assesses progress towards objectives. Communication and decision-making are also integral to the management process. Effective management applies principles like management by objectives and learning from experience. The roles of administration and management are also distinguished, with administration focusing more on policy and management on execution.
The document provides recommendations for surveillance of acute viral hepatitis. It defines clinical and laboratory criteria for diagnosing hepatitis A, B, and non-A/non-B. Surveillance is recommended to guide control measures like ensuring blood and injection safety and immunization programs. Countries should monitor cases of acute jaundice and increase in liver enzymes to detect hepatitis outbreaks and evaluate prevention programs. Standardized case definitions and laboratory tests are important for comparable surveillance data.
This document provides an introduction to a module on the Expanded Program on Immunization (EPI) in Ethiopia. The module aims to train health center teams and other health professionals to increase immunization coverage and reduce morbidity and mortality from six childhood diseases. Despite initiatives over the years, immunization coverage remains low in Ethiopia due to factors like lack of transportation, ineffective cold chains, shortage of trained staff, poor collaboration, and inadequate community involvement. The module seeks to address this through training and bringing about significant changes in EPI coverage.
This document provides a handbook on water programming published by UNICEF in 1999. It aims to guide field professionals in implementing UNICEF's water, environment and sanitation strategies. The handbook covers topics such as water and sustainable development, community participation and management, cost effectiveness, appropriate water technologies, and maintenance of water supply systems. It emphasizes the importance of community-based management of water resources, cost-effective solutions, and involvement from all levels of government and communities in water sector issues.
This document provides an introduction to the Somali PHAST Step-by-Step Guide, which uses participatory methods to help communities improve hygiene behaviors, prevent diarrheal diseases, and encourage community management of water and sanitation facilities. The guide contains 7 steps to take communities through developing a plan for preventing diarrheal diseases. Section 2 provides background concepts, defining hygiene, sanitation, the link between the two, and that hygiene and sanitation promotion requires more than just asking people to change - it requires understanding disease transmission and being motivated to promote positive behaviors.
The development of this lecture note for training Health Extension workers is an arduous assignment for Dr. Meseret Yazachew and Dr. Yihenew Alem at Jimma University.
This document was developed with inputs from many institutions and experts. Several individuals deserve special mention. Mary Arimond, Kathryn Dewey and Marie Ruel developed the analytical framework and provided technical oversight throughout the project. Eunyong Chung and Anne Swindale provided technical support. Nita Bhandari, Roberta Cohen, Hilary Creed de Kanashiro, Christine Hotz, Mourad Moursi, Helena Pachon and Cecilia C. Santos-Acuin conducted analysis of data sets. Chessa Lutter coordinated a working group to update the breastfeeding indicators. Mary Arimond and Megan Deitchler coordinated the working group that developed the Operational Guide on measurement issues which is a companion to this document. Bernadette Daelmans and José Martines coordinated the project throughout its phases. Participants in the consensus meetings held in Geneva 3–4 October 2006 and in Washington, DC 6–8 November 2007 provided invaluable inputs to formulate the recommendations put forward in this document.
POLICY MAKING PROCESS
Policy
• a statement of intent for achieving an objective.
• Deliberate statement aimed at achieving specific objective
• policies are formulated by the Government in order to provide
a guideline in attaining certain objectives for the benefit of the
people.
• Importance and objective of any policy
• to solve existing challenges/problems in any society
• used as a tool to safeguard and ensure better services to
members of the society.
• Reasons for formulating a Policy
• Reforms (socio-economic, technological advancements, etc)
within and outside the country.
This document describes a case-control study conducted to determine the reason for many students failing an exam. The study found that students who did not attend lectures had an 80 times higher chance of failing compared to students who did attend, and that this result was statistically significant with a p-value less than 0.05, suggesting not attending lectures was the likely cause of failure.
Aim of nutritional assessment
To identify nutritional problems of the community
To find the underlying cause for malnutrition
To plan and implement control of malnutrition
Maintain good nutrition of community
Ancylostomiasis, or hookworm infection, is an important global public health problem caused by parasitic hookworms that infect humans. It is transmitted when larvae penetrate the skin and enter the body, usually through walking barefoot on contaminated soil. In Libya, hookworm infection is very rare, with most cases found in farmers who come into contact with infected feces in soil. The hookworms live in the intestine and feed on blood, potentially causing iron deficiency anemia and related health issues if left untreated. Prevention relies on sanitary disposal of human waste and health education to avoid transmission.
Here is the updated list of Top Best Ayurvedic medicine for Gas and Indigestion and those are Gas-O-Go Syp for Dyspepsia | Lavizyme Syrup for Acidity | Yumzyme Hepatoprotective Capsules etc
Cell Therapy Expansion and Challenges in Autoimmune DiseaseHealth Advances
There is increasing confidence that cell therapies will soon play a role in the treatment of autoimmune disorders, but the extent of this impact remains to be seen. Early readouts on autologous CAR-Ts in lupus are encouraging, but manufacturing and cost limitations are likely to restrict access to highly refractory patients. Allogeneic CAR-Ts have the potential to broaden access to earlier lines of treatment due to their inherent cost benefits, however they will need to demonstrate comparable or improved efficacy to established modalities.
In addition to infrastructure and capacity constraints, CAR-Ts face a very different risk-benefit dynamic in autoimmune compared to oncology, highlighting the need for tolerable therapies with low adverse event risk. CAR-NK and Treg-based therapies are also being developed in certain autoimmune disorders and may demonstrate favorable safety profiles. Several novel non-cell therapies such as bispecific antibodies, nanobodies, and RNAi drugs, may also offer future alternative competitive solutions with variable value propositions.
Widespread adoption of cell therapies will not only require strong efficacy and safety data, but also adapted pricing and access strategies. At oncology-based price points, CAR-Ts are unlikely to achieve broad market access in autoimmune disorders, with eligible patient populations that are potentially orders of magnitude greater than the number of currently addressable cancer patients. Developers have made strides towards reducing cell therapy COGS while improving manufacturing efficiency, but payors will inevitably restrict access until more sustainable pricing is achieved.
Despite these headwinds, industry leaders and investors remain confident that cell therapies are poised to address significant unmet need in patients suffering from autoimmune disorders. However, the extent of this impact on the treatment landscape remains to be seen, as the industry rapidly approaches an inflection point.
TEST BANK For Community Health Nursing A Canadian Perspective, 5th Edition by...Donc Test
TEST BANK For Community Health Nursing A Canadian Perspective, 5th Edition by Stamler, Verified Chapters 1 - 33, Complete Newest Version Community Health Nursing A Canadian Perspective, 5th Edition by Stamler, Verified Chapters 1 - 33, Complete Newest Version Community Health Nursing A Canadian Perspective, 5th Edition by Stamler Community Health Nursing A Canadian Perspective, 5th Edition TEST BANK by Stamler Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Pdf Chapters Download Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Pdf Download Stuvia Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Study Guide Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Ebook Download Stuvia Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Questions and Answers Quizlet Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Studocu Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Quizlet Test Bank For Community Health Nursing A Canadian Perspective, 5th Edition Stuvia Community Health Nursing A Canadian Perspective, 5th Edition Pdf Chapters Download Community Health Nursing A Canadian Perspective, 5th Edition Pdf Download Course Hero Community Health Nursing A Canadian Perspective, 5th Edition Answers Quizlet Community Health Nursing A Canadian Perspective, 5th Edition Ebook Download Course hero Community Health Nursing A Canadian Perspective, 5th Edition Questions and Answers Community Health Nursing A Canadian Perspective, 5th Edition Studocu Community Health Nursing A Canadian Perspective, 5th Edition Quizlet Community Health Nursing A Canadian Perspective, 5th Edition Stuvia Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Pdf Chapters Download Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Pdf Download Stuvia Community Health Nursing A Canadian Perspective, 
5th Edition Test Bank Study Guide Questions and Answers Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Ebook Download Stuvia Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Questions Quizlet Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Studocu Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Quizlet Community Health Nursing A Canadian Perspective, 5th Edition Test Bank Stuvia
These lecture slides, by Dr Sidra Arshad, offer a quick overview of the physiological basis of a normal electrocardiogram.
Learning objectives:
1. Define an electrocardiogram (ECG) and electrocardiography
2. Describe how dipoles generated by the heart produce the waveforms of the ECG
3. Describe the components of a normal electrocardiogram of a typical bipolar lead (limb II)
4. Differentiate between intervals and segments
5. Enlist some common indications for obtaining an ECG
6. Describe the flow of current around the heart during the cardiac cycle
7. Discuss the placement and polarity of the leads of electrocardiograph
8. Describe the normal electrocardiograms recorded from the limb leads and explain the physiological basis of the different records that are obtained
9. Define mean electrical vector (axis) of the heart and give the normal range
10. Define the mean QRS vector
11. Describe the axes of leads (hexagonal reference system)
12. Comprehend the vectorial analysis of the normal ECG
13. Determine the mean electrical axis of the ventricular QRS and appreciate the mean axis deviation
14. Explain the concepts of current of injury, J point, and their significance
Study Resources:
1. Chapter 11, Guyton and Hall Textbook of Medical Physiology, 14th edition
2. Chapter 9, Human Physiology - From Cells to Systems, Lauralee Sherwood, 9th edition
3. Chapter 29, Ganong’s Review of Medical Physiology, 26th edition
4. Electrocardiogram, StatPearls - https://www.ncbi.nlm.nih.gov/books/NBK549803/
5. ECG in Medical Practice by ABM Abdullah, 4th edition
6. Chapter 3, Cardiology Explained, https://www.ncbi.nlm.nih.gov/books/NBK2214/
7. ECG Basics, http://www.nataliescasebook.com/tag/e-c-g-basics
- Video recording of this lecture in English language: https://youtu.be/kqbnxVAZs-0
- Video recording of this lecture in Arabic language: https://youtu.be/SINlygW1Mpc
- Link to download the book free: https://nephrotube.blogspot.com/p/nephrotube-nephrology-books.html
- Link to NephroTube website: www.NephroTube.com
- Link to NephroTube social media accounts: https://nephrotube.blogspot.com/p/join-nephrotube-on-social-media.html
Local Advanced Lung Cancer: Artificial Intelligence, Synergetics, Complex Sys...Oleg Kshivets
Overall life span (LS) was 1671.7±1721.6 days and cumulative 5YS reached 62.4%, 10 years – 50.4%, 20 years – 44.6%. 94 LCP lived more than 5 years without cancer (LS=2958.6±1723.6 days), 22 – more than 10 years (LS=5571±1841.8 days). 67 LCP died because of LC (LS=471.9±344 days). AT significantly improved 5YS (68% vs. 53.7%) (P=0.028 by log-rank test). Cox modeling displayed that 5YS of LCP significantly depended on: N0-N12, T3-4, blood cell circuit, cell ratio factors (ratio between cancer cells-CC and blood cells subpopulations), LC cell dynamics, recalcification time, heparin tolerance, prothrombin index, protein, AT, procedure type (P=0.000-0.031). Neural networks, genetic algorithm selection and bootstrap simulation revealed relationships between 5YS and N0-12 (rank=1), thrombocytes/CC (rank=2), segmented neutrophils/CC (3), eosinophils/CC (4), erythrocytes/CC (5), healthy cells/CC (6), lymphocytes/CC (7), stick neutrophils/CC (8), leucocytes/CC (9), monocytes/CC (10). Correct prediction of 5YS was 100% by neural networks computing (error=0.000; area under ROC curve=1.0).
1. 1/25/2011 Incidence and prevalence 1
Epidemiologic measures: Incidence &
prevalence
Principles of Epidemiology for Public Health (EPID600)
Victor J. Schoenbach, PhD
Department of Epidemiology
Gillings School of Global Public Health
University of North Carolina at Chapel Hill
www.unc.edu/epid600/
2. Quotations that demonstrate the value of humility about predicting the future
(authenticity not established)
Courtesy of Suzanne Cloutier, 11/17/1998
Famous last words
3. "Louis Pasteur's theory of germs is ridiculous fiction."
- Pierre Pachet, Professor of Physiology at Toulouse, 1872
FAMOUS LAST WORDS: quotations that demonstrate the value of humility in predicting the future
4. "This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us."
- Western Union internal memo, 1876
(Source: 2000 National Ernst & Young Entrepreneur of the Year Awards special insert in USA Today, 2/11/2000, p. 9B)
5. "Everything that can be invented has been invented."
- Charles H. Duell, Commissioner, US Patent Office, 1899
6. "The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?"
- David Sarnoff's associates, in response to his urgings for investment in radio in the 1920s
7. "My thesis in this lecture is that macroeconomics . . . has succeeded: Its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades."
- Robert E. Lucas, Jr., American Economics Association Presidential Address, January 10, 2003
http://home.uchicago.edu/~sogrodow/homepage/paddress03.pdf
8. The population perspective requires measuring disease in populations
• Science is built on classification and measurement.
• Reality is infinitely detailed, infinitely complex.
• Classification and measurement seek to capture the essential attributes.
9. Deriving meaning from stimuli
[Figures: "Vase or faces?" and "Which line is longer?" optical illusions]
10. Measurement "captures" the phenomenon
Classification and measurement are based on:
1. Objective of the classification
2. Conceptual model (understanding of the phenomenon)
3. Availability of data (technology)
11. An example population (N=200)
[Figure: dot diagram of the population; each case is marked with an "O" as it occurs. Slides 12-16 repeat the figure with 1, 2, 3, 5, and 6 accumulated cases under the caption "How can we quantify disease in populations?"; slide 17 asks "How can we quantify the frequency?"]
18. Rate of occurrence of new cases per unit time (e.g., 1 per month)
19. 1 new case in month 1
20. 1 new case in month 2
21. 1 new case in month 3, for a total of 3 cases
22. 2 new cases in month 4
23. 1 new case in month 5 (total=6)
24. 1 case in month 6
25. 1 new case in month 7
26. 2 new cases in month 8
27. 2 cases in month 9
28. Rate of occurrence of new cases during 9 months: 1 case/month to 2 cases/month
29. 1/9/2007 Incidence and prevalence 29
Number of cases depends on length of interval
Divide by the length of the time interval, so we can
compare across intervals
Number of new cases
Rate of new cases = –––––––––––––––––
Time interval
= 12 cases / 9 months ≈ 1.33 cases/month
30. 1/25/2011 Incidence and prevalence 30
Number of cases depends on population size
So, divide by population and time:
Number of new cases
Incidence rate = ––––––––––––––––––
Population-time
31. 1/25/2011 Incidence and prevalence 31
How to estimate population-time?
Population at risk: the people eligible to
become a case and to be counted as one.
In this example that population declines as
each case occurs.
So estimate population-time as . . .
32. 1/25/2011 Incidence and prevalence 32
Population-time =
Method 1: Add up the time that each person is
at risk
Method 2: Add up the population at risk during
each time segment
Method 3: Multiply the average size of the
population at risk by the length of the time
interval
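The three methods can be sketched in a few lines of Python, using the month-by-month case counts read off the earlier slides (a sketch, not part of the original deck):

```python
# Population-time for the example: N = 200, 9 months, 12 cases.
# Monthly new-case counts as shown on the slides (months 1-9).
new_cases = [1, 1, 1, 2, 1, 1, 1, 2, 2]  # totals 12

# Method 2: add up the population at risk during each month,
# assuming each case occurs, on average, mid-month (half a month at risk).
at_risk = 200
person_months = 0.0
for c in new_cases:
    person_months += (at_risk - c) + 0.5 * c
    at_risk -= c
print(person_months)        # 1752.0 person-months
print(person_months / 12)   # 146.0 person-years

# Method 3: average population at risk x length of the interval
# (starting population minus half the cases, as on slide 36).
avg_at_risk = 200 - sum(new_cases) / 2   # 194.0
print(avg_at_risk * 9)      # 1746.0 person-months
```

Method 1 (summing each individual's time at risk) gives the same answer when individual follow-up times are known; the loop above is just Method 2 stated explicitly.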
33. 1/9/2007 Incidence and prevalence 33
Estimating population-time - method 2
Total population-time over 9 months =
200 + 199 + 198 + 197 + 195 + 194 + 193 +
192 + 190
= 1,758 person-months
= 146.5 person-years
However, cases are not at risk for a full
month.
34. 1/9/2007 Incidence and prevalence 34
Estimating population-time - method 2
- better
Total population-time over 9 months =
199.5 + 198.5 + 197.5 + 196 + 194.5 + 193.5
+ 192.5 + 191 + 189
= 1,752 person-months
= 146 person-years
assuming that cases develop, on average, in
the middle of the month
35. 1/9/2007 Incidence and prevalence 35
Estimating population-time - method 3
Average size of the population at risk during the
9 months = 195.3 (1,758 / 9) or approximately:
(200 + 188) /2 = 194
Population-time = 195.3 x 9 months or
(approximately) 194 x 9 months
= 1,746 person-months
= 145.5 person-years
36. 1/9/2007 Incidence and prevalence 36
Equivalent to - method 3
Take initial size of population at risk and reduce
it for time the people were not at risk due to
acquiring the disease:
200 - 12/2 = 194 (approximately)
Population-time = 194 x 9 months
= 1,746 person-months
= 145.5 person-years
37. 1/25/2011 Incidence and prevalence 37
Incidence rate (“incidence density”)
Number of new cases
–––––––––––––––––––––––––––––––
Avg population at risk × Time interval
Number of new cases
= ––––––––––––––––––––
Population-time
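The incidence-rate arithmetic for the running example can be checked in a couple of lines (a sketch; the 1,752 person-months figure is the mid-month-adjusted estimate from the earlier slides):

```python
# Incidence rate = new cases / population-time
new_cases = 12
person_months = 1752  # mid-month-adjusted estimate (Method 2)
ir = new_cases / person_months
print(round(ir, 5))         # 0.00685 cases per person-month
print(round(ir * 1000, 2))  # 6.85 per 1,000 person-months
```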
38. 1/25/2011 Incidence and prevalence 38
What proportion of the population
at risk is affected after 5 months?
39. 1/30/2004 Incidence and prevalence 39
What proportion of the population
is affected after 1 month? (1/200)
40. 5/20/2002 Incidence and prevalence 40
What proportion of the population
is affected after 2 months? (2/200)
41. 5/20/2002 Incidence and prevalence 41
What proportion of the population
is affected after 3 months? (3/200)
42. 5/20/2002 Incidence and prevalence 42
What proportion of the population is
affected after 4 months? (5/200)
43. 1/9/2007 Incidence and prevalence 43
6 / 200 = 0.03 = 3% = 30 / 1,000
in 5 months
44. 1/25/2011 Incidence and prevalence 44
Incidence proportion (“cumulative incidence”)
Number of new cases
5-month CI = –––––––––––––––––––
Population at risk
Incidence proportion estimates risk.
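The 5-month incidence proportion from the example can be computed, and re-expressed in the equivalent scalings the slides mention, as follows (a sketch):

```python
# 5-month incidence proportion (cumulative incidence)
cases_by_month_5 = 6
population_at_risk = 200
ci = cases_by_month_5 / population_at_risk
print(ci)                  # 0.03
print(f"{ci:.0%}")         # 3%
print(round(ci * 1000))    # 30 per 1,000 -- same value, different scaling
```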
45. 1/25/2011 Incidence and prevalence 45
Incidence rate versus incidence proportion
• Incidence rate measures how rapidly cases are
occurring.
• Incidence proportion is cumulative.
• When we care only about the “bottom line” (i.e.,
what has happened by the end of a given period):
incidence proportion (CI).
46. 1/25/2011 Incidence and prevalence 46
Incidence rate versus incidence proportion
• If risk period is long (e.g., cancer), we usually
observe only a portion.
• To compare results from studies with different
lengths of follow-up, use incidence rate (IR)
• If risk period is short, we usually observe all of it
and can use incidence proportion.
47. Incidence rate versus incidence proportion
(rare disease, IR = 0.005 / month)
1/25/2011 Incidence and prevalence 47
(see spreadsheet at epidemiolog.net/studymat/)
48. Incidence rate versus incidence proportion
(common disease, IR = 0.1 / month)
2/7/2012 Incidence and prevalence 48
49. 5/20/2002 Incidence and prevalence 49
Case fatality rate
“Case fatality rate” (but it’s really a proportion)
= proportion of cases who die
(in a specified time interval)
• Like a “cumulative incidence of death” in cases
[ “incidence rate of death” in cases =
“termination rate” = 1/(average survival time)]
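The slide gives no worked numbers, so here is a small illustrative sketch with made-up counts, showing the distinction between the case-fatality proportion and the termination rate:

```python
# Illustrative only: these counts are invented, not from the slides.
cases = 50
deaths_among_cases = 4

# "Case fatality rate" -- really a proportion (cumulative incidence
# of death among cases, over a specified interval)
cfr = deaths_among_cases / cases
print(cfr)  # 0.08

# "Termination rate" -- an incidence rate of death among cases,
# equal to 1 / (average survival time)
avg_survival_months = 10.0
termination_rate = 1 / avg_survival_months
print(termination_rate)  # 0.1 per month
```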
50. 1/30/2007 Incidence and prevalence 50
Mortality rate
Number of deaths
Mortality rate = ––––––––––––––––––––––––––––
Population at risk × Time interval
Number of deaths
Annual mortality rate = ––––––––––––––––––––––
Mid-year population (x 1 yr)
52. 5/20/2002 Incidence and prevalence 52
Mortality rates versus incidence rates
• Mortality data are more generally available
• Fatality reflects many factors, so mortality
rates may not be a good surrogate of incidence
rates
• Death certificate cause of death not always
accurate or useful
53. 1/9/2007 Incidence and prevalence 53
Prevalence – another important proportion
Number of existing (and new) cases
Prevalence = –––––––––––––––––––––––––––––––
Total population
54. 5/20/2002 Incidence and prevalence 54
1 new case, 1 death
55. 5/20/2002 Incidence and prevalence 55
1 new case, 1 new death
56. 5/20/2002 Incidence and prevalence 56
2 new cases, no deaths
57. 5/20/2002 Incidence and prevalence 57
2 new cases, 1 new death
58. 5/20/2002 Incidence and prevalence 58
What is the prevalence? (9 / 197)
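A minimal check of the prevalence arithmetic (the 197 is the original 200 minus the 3 deaths shown in the preceding slides):

```python
# Point prevalence at the end of slide 58
current_cases = 9   # existing + new cases still in the population
population = 197    # 200 original people minus 3 deaths
prevalence = current_cases / population
print(round(prevalence, 4))          # 0.0457
print(round(prevalence * 1000, 1))   # 45.7 per 1,000
```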
59. 5/20/2002 Incidence and prevalence 59
Fine points . . .
•Who is “at risk”?
• Endometrial cancer? Prostate cancer?
Breast cancer?
• Only women who have not had a
hysterectomy?
“Could” develop the condition + “would” be
counted.
60. 5/20/2002 Incidence and prevalence 60
More fine points
• Age?
• Immunity?
• Genetically susceptible?
61. 5/20/2002 Incidence and prevalence 61
More fine points . . .
• How do we measure time?
• Are 10 people followed for 10 years the
same as 100 people followed for 1 year?
• Aging of the cohort? Secular changes?
62. 9/22/2005, 9/8/2008 Incidence and prevalence 62
Fine points . . .
• Importance of stating units and scaling
unless they are clear from the context
– e.g., 120 per 100,000 person-years =
10 per 100,000 person-months
– Hazards from lack of clarity
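Re-scaling a rate between time units is simple division, e.g.:

```python
# Same quantity, different units: 12 months per year
rate_per_100k_person_years = 120
rate_per_100k_person_months = rate_per_100k_person_years / 12
print(rate_per_100k_person_months)  # 10.0 per 100,000 person-months
```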
63. 1/30/2004 Incidence and prevalence 63
“You can never, never take anything for granted.”
Noel Hinners, vice president for flight systems at
Lockheed Martin Astronautics in Denver, concerning
the loss of the Mars Climate Orbiter due to the
Lockheed Martin spacecraft team's having reported
measurements in English units while the orbiter's
navigation team at the Jet Propulsion Laboratory
(JPL) in Pasadena, California assumed the
measurements were in metric units.
64. 5/20/2002 Incidence and prevalence 64
Relation of incidence and prevalence
• Prevalence depends on incidence
• Higher incidence leads to higher prevalence if
duration of cases does not change.
• Limitation of the bathtub analogy – flow rate needs to
be expressed relative to the size of the source
• Introducing a new analogy . . .
67. 1/25/2011 Incidence and prevalence 67
Incidence, prevalence, duration of hospitalization
Remote community of 101,000 people
One hospital, patient census = 1,000
Steady state
500 admissions per week
Prevalence = 1,000/101,000 = 9.9/1,000
IR = 500/100,000 = 5/1,000/week
Duration = Prevalence / IR ≈ 2 weeks
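The hospital example can be verified directly (a sketch using the slide's figures; the population at risk of admission is taken as the 100,000 residents not currently hospitalized):

```python
# Remote community in steady state
population = 101_000
census = 1_000              # prevalent cases (current inpatients)
admissions_per_week = 500   # incident cases per week

prevalence = census / population              # ~9.9 per 1,000
print(round(prevalence * 1000, 1))            # 9.9

at_risk = population - census                 # 100,000 not hospitalized
ir_per_week = admissions_per_week / at_risk   # 5 per 1,000 per week

prevalence_odds = census / at_risk            # 0.01
duration_weeks = prevalence_odds / ir_per_week
print(round(duration_weeks, 1))               # 2.0 weeks average stay
```

Using the prevalence odds rather than the prevalence makes the relation exact; with the prevalence itself the answer is 1.98 weeks, which is why the slide rounds to 2.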
68. 1/25/2011 Incidence and prevalence 68
Relation of incidence and prevalence
Under somewhat special conditions,
Prevalence odds = incidence × duration
Prevalence ≈ incidence × duration
(see spreadsheet at www.epidemiolog.net/studymat/)
69. 5/20/2002 Incidence and prevalence 69
Standardization
• When objective is comparability, need to
adjust for different distributions of other
determinants
• Strategy:
• Analyze within each subgroup (stratum)
• Take a weighted average across strata
• Use same weights for all populations
(See the Evolving Text on www.epidemiolog.net)
70. 8/17/2009 Incidence and prevalence 70
Familiar example of weighted averages
• Liters of petrol per kilometer - differs for Interstate
(0.050 LpK) and non-Interstate (0.100 LpK) driving.
• To compare different cars, can:
• Compare them for each type of driving separately
(stratified analysis)
• Average for each car, using one set of weights
(e.g., 80% Interstate, 20% non-Interstate)
• E.g. = 0.80 x 0.050 LpK + 0.20 x 0.100 LpK = 0.060 LpK
71. 8/17/2009 Incidence and prevalence 71
Comparing a Subaru and a Mazda
Juan drives a Subaru 800 km on Interstate highways
and 200 km on other roads. His car uses 0.050 LpK
on Interstates and 0.100 LpK on other roads, for a
total of 60 liters of petrol, an average of 0.060 LpK
(60 L / 1000 km). His overall LpK can be expressed
as a weighted average:
(800/1000) x 0.050 LpK + (200/1000) x 0.100 LpK
= 0.80 x 0.050 LpK + 0.20 x 0.100 LpK = 0.060 LpK
72. 8/17/2009 Incidence and prevalence 72
Comparing a Subaru and a Mazda
Shizu drives her Mazda on a different route, with
only 200 km on Interstate and 800 km on other
roads. She uses 0.045 LpK on Interstate highways
and 0.080 LpK on non-Interstate. She uses a total
of 73 liters, or 0.073 LpK. Her overall LpK can be
expressed as a weighted average:
(200/1,000) x 0.045 LpK + (800/1,000) x 0.080 LpK
= 0.20 x 0.045 LpK + 0.80 x 0.080 LpK =0.073 LpK
73. 8/17/2009 Incidence and prevalence 73
How can we compare their fuel efficiency?
             Juan            Shizu
             Km      LpK     Km      LpK
Interstate   800     0.050   200     0.045
Other        200     0.100   800     0.080
Total        1,000   0.060   1,000   0.073
74. 8/17/2009 Incidence and prevalence 74
Total fuel efficiency is not comparable
because weights are different
             Juan            Shizu
             %       LpK     %       LpK
Interstate   80      0.050   20      0.045
Other        20      0.100   80      0.080
Total        100%    0.060   100%    0.073
75. 8/17/2009 Incidence and prevalence 75
By adopting a “standard” set of weights we
can compare fairly
              Juan            Shizu
              %      LpK      %      LpK
Interstate    60     0.050    60     0.045
Other         40     0.100    40     0.080
Total         100    0.060    100    0.073
Standardized         0.070           0.059
76. 8/17/2009 Incidence and prevalence 76
Comparing a Subaru and a Mazda
• Juan's Subaru:
= 0.60 x 0.050 LpK + 0.40 x 0.100 LpK = 0.070 LpK
• Shizu's Mazda:
= 0.60 x 0.045 LpK + 0.40 x 0.080 LpK = 0.059 LpK
The choice of weights may often affect the results of
the comparison.
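The whole standardization exercise amounts to re-weighting stratum-specific values, which can be sketched as:

```python
# Crude vs standardized petrol use (LpK) for the two drivers
def weighted_lpk(weights, lpk):
    # weights and stratum-specific LpK values, in the same order
    return sum(w * r for w, r in zip(weights, lpk))

juan_lpk = [0.050, 0.100]    # Interstate, other roads
shizu_lpk = [0.045, 0.080]

# Crude averages use each driver's own km mix as weights
print(round(weighted_lpk([0.80, 0.20], juan_lpk), 3))   # 0.06
print(round(weighted_lpk([0.20, 0.80], shizu_lpk), 3))  # 0.073

# Standardized: one common set of weights (60% Interstate, 40% other)
std_weights = [0.60, 0.40]
print(round(weighted_lpk(std_weights, juan_lpk), 3))    # 0.07
print(round(weighted_lpk(std_weights, shizu_lpk), 3))   # 0.059
```

With the common weights the comparison reverses: Shizu's Mazda is the more efficient car, even though her crude average was worse.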
77. 5/20/2002 Incidence and prevalence 77
“I'm just glad it'll be Clark Gable who's
falling on his face and not Gary Cooper.”
- Gary Cooper on his decision not to take the
leading role in “Gone With The Wind”
FAMOUS LAST WORDS: quotations that demonstrate the value of humility in predicting the future
78. 5/20/2002 Incidence and prevalence 78
“A cookie store is a bad idea.
Besides, the market research reports
say America likes crispy cookies,
not soft and chewy cookies like you
make.”
- Response to Debbi Fields' idea of starting
Mrs. Fields' Cookies.
79. 5/20/2002 Incidence and prevalence 79
“Computers in the future may weigh
no more than 1.5 tons.”
- Popular Mechanics, forecasting the
relentless march of science, 1949
80. 5/20/2002 Incidence and prevalence 80
“I think there is a world market
for maybe five computers.”
-Thomas Watson, chairman of IBM, 1943
Editor's Notes
This lecture will cover some of the key measures of health and disease in populations, and the relations among these indicators.
Famous last words
Learning should be fun, so to put us in the right frame of mind, here are some allegedly real quotations that demonstrate the value of humility in predicting the future. Most of them were provided to me by a former EPID 168 student, but I don’t know where she found them.
“Louis Pasteur’s theory of germs is ridiculous fiction.”
This quote is attributed to Pierre Pachet, Professor of Physiology at Toulouse, in 1872.
“This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
According to USA Today, that assessment was in a Western Union internal memorandum, in 1876.
“Everything that can be invented has been invented.”
This declaration is attributed to Charles Duell, US Patent Office Commissioner in 1899.
“The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?”
This analysis is attributed to associates of David Sarnoff, in response to his urgings for investment in radio in the 1920s. David Sarnoff eventually became president and later chair of the Radio Corporation of America (RCA) and was a giant figure in the development of radio and television. (see http://www.museum.tv/archives/etv/S/htmlS/sarnoffdavi/sarnoffdavi.htm)
And now, back to epidemiology . . . . The next slide is titled “The population perspective requires measuring disease in populations”.
And here’s one I came across on my own, from Robert E. Lucas, Jr.’s Presidential Address to the American Economic Association on January 10, 2003:
“My thesis in this lecture is that macroeconomics . . . has succeeded: Its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.”
http://home.uchicago.edu/~sogrodow/homepage/paddress03.pdf
I suppose that macroeconomic theory did prevent a depression in 2008-2009, but it didn’t always feel that way!
Public health is distinguished by its population perspective, in contrast to the individual perspective of clinical medicine. Epidemiology is a basic science of population health, and in order to study population health we need to construct measures for that purpose.
Science is built on classification and measurement. Reality is infinitely detailed, infinitely complex, and it is simply not possible to work with all of that detail. Thus, every step in the process of understanding reality involves extraction and abstraction of elements we regard as meaningful. For example, although you are reading words on paper or a computer screen, your eyes are receiving a great variety of visual stimuli. Your retina is passing some but not all of these on to your visual cortex, the part of your brain responsible for vision. The visual cortex in turn passes a representation of what it has received on to the areas of your brain where cognition takes place. The visual cortex has “interpreted” the image that it received from the retina. This representation is then “interpreted” by the frontal cortex, so that you recognize and attach meanings to the letters, words, punctuation, etc. There may be imperfections on the paper or dust on the screen. Some of the letters may be slightly distorted. The intensity and sharpness of the print or image will vary. But the brain can ignore all of these “irrelevant” attributes since it has learned which attributes convey meaning and which do not.
Our ability to perceive visual and auditory objects in the presence of incomplete data, noise, and distortion illustrates how extraction and abstraction work, whereas optical illusions illustrate how these processes can mislead us.*
But there is no other way, so from the infinity of stimuli in the world we observe, we create categories and perceive relations among them which attach meaning to some differences and not to others. In doing so we try to “capture” the essential attributes of what we observe so that we can derive meaning.
Source for optical illusions: Eric H. Chudler, University of Washington. http://faculty.washington.edu/chudler/chvision.html See Science 25 June 2010;328:648
When they work well, measurement and classification “capture” the phenomenon by identifying and/or quantifying the essential attributes or characteristics within the totality.
There is no single classification that is the “right one” for all purposes. Rather, the appropriate classification and measurement for a particular phenomenon are based on three factors: 1) our objective in creating the classification or measurement, 2) the conceptual model within which we are operating, and 3) the availability of data, which in turn is determined by the technology we can apply. These three considerations – objective, conceptual model, and availability of data – will arise each time we need to create, select, or evaluate a measure or classification scheme.
We’re all familiar with the mean, or average, as a summary measure of a characteristic in a population. The focus of this lecture is the measurement of disease frequency, or more generally, of the frequency of an event, condition (e.g., fitness, immunity, longevity), or characteristic (e.g., right-handedness, usual diet, or exposure to environmental pollution). So the objective is to measure frequency of something in a population. Our conceptual model and the availability of data will depend on the specific characteristic or condition of interest.
Speaking of “capturing the phenomenon”, let’s see if we can do that with this schematic representation of a population, that includes the “essential” aspects for our purpose. Here is a population of 200 women, men, and children.
Suppose that during the first month of observation, one of the people becomes ill with H1N1 influenza (“swine flu”). The new case is indicated by a bold red circle.
During the next month, another person develops the disease (another bold red circle). (The previous case is indicated by a faint gray circle.)
Next month, another new case.
And now two more new cases.
And another.
And another. Got the picture? OK. Now, how might we quantify the frequency of disease in this population? What are the essential attributes of what we have been watching? One attribute is the number of cases. Another is the number of months. So our measure will have the number of cases and the number of months.
Let’s replay those slides quickly and add a few more.
The first case occurred during month 1.
There was another (new) case in month 2.
1 new case in month 3, for a total of 3 cases.
2 new cases in month 4. Does this indicate an increase in the rate?
1 new case in month 5
1 new case in month 6, for a total of 7 cases. Notice the empty circle in the lower right – the person at that location developed the disease in month 2 but is no longer in the population, due to death or out-migration. Does that make a difference for our count?
That depends on our objective. The fact that the person is no longer in the population does not really change the occurrence of the disease, though it does change the number of people in the population who have the disease. We will be interested in the latter concept later in the lecture. For now, though, our objective is disease occurrence, so we ignore what happens to people after they develop the disease.
Another new case, another exit – 8 cases in 7 months so far.
Two more new cases, making 10 in 8 months.
And two new more cases (and one exit), for a total of 12 new cases over 9 months.
So we have observed the population for 9 months. The number of new cases we observed (12) does provide a measure of frequency. In some circumstances, such as the SARS outbreak, the count would be sufficient for some purposes. But if the disease continues to occur, that measure has a limitation. The value will change according to the length of time that we observe the population. What else can we do? How about if we estimate the rate of occurrence of disease?
In some months we saw two new cases; in most months, though, there was one new case. So the rate of occurrence of new cases during the 9 months must be somewhere between 1 case/month and 2 cases/month. What would be a good measure of disease occurrence?
One familiar way is to divide the number of cases by the length of the time interval. That gives us an average rate of new cases. With 12 cases in 9 months, that rate is 1.33 cases/month.
We measure the speed of a vehicle we are driving not by the odometer alone (distance traveled) but by the distance divided by the time it took us to cover that distance. In more time we expect to cover more distance, so the distance alone is not a good measure of the speed of the car (it would be an adequate measure if we always drove for the same length of time, but that is not how driving works). So we measure a car’s average speed as the ratio of the distance traveled to the length of time we have been driving. Similarly, we can measure the average rate of occurrence of new cases as the ratio of the number of cases that have occurred to the length of time during which they occurred. In this case, that rate is 12 cases / 9 months, or 1.33 cases/month.
However, does this rate meet our needs for a measure of disease occurrence in populations?
Not really. The population we observed had 200 people in it. What if we had been observing a larger population under exactly the same conditions. If the “force of morbidity” were the same in the larger population, would we not expect to have observed more cases per unit time? If so, then this rate is not quite what we need.
Since the number of cases generally depends on the size of the population, it makes sense to divide not only by the length of the time interval but also by the size of the population at risk – the number of people who were available to develop a disease and the time period for which they were at risk. We refer to the ratio of the number of cases to the combination of population and time as the incidence rate. The incidence rate serves to quantify the frequency of disease occurrence in a population per person and per unit time.
As we have derived it here, the incidence rate is an average rate, in the sense that we have ignored the possibility that the rate may have been changing from month to month. For the present we are assuming that the fluctuations we observed are not meaningful for our purposes. We are assuming that there is some underlying constant rate of disease occurrence, and our average rate provides an estimate of that constant rate. But the rate could change over time, for example, by season.
So the incidence rate is constructed from a measure of population-time, a combination of population size and time at risk. Since the size of the population at risk often changes over time, we estimate population-time as the summation of how many people were at risk for how long. Those of you who know calculus can think of population-time as the area under the graph of number of people at risk as a function of time.
In practice, we usually estimate the total population time at risk in one of three ways:
Method 1: Add up each person’s time. If we know the amount of time that each member of the population was at risk, we can take the sum of those times, e.g., 3 months for person A + 2 months for person B + 2.5 months for person C, etc..
Method 2: Add each time’s people. If we know the size of the population at risk during each small segment of time, then we can add up these sizes, e.g., 500 people in year 1 + 700 people in year 2 + 600 people in year 3.
Method 3: Multiply average time by average number of people. If we do not have such detailed information but we know the average size of the population, we can estimate population time as the average size of the population at risk multiplied by the number of time units in the risk interval, e.g., 600 people x 3 months.
This slide illustrates Method 2 with our little population example. As a first approximation to the total population-time over 9 months, we could simply add up the number of people at risk (200+199+198+197+195+194+193+192+190), which is 1,758 person-months, or 146.5 person-years. This procedure assumes that once a person becomes a case, s/he is no longer at risk of becoming another case, since we have removed that person from the population at risk for subsequent months. That is a reasonable assumption for diseases like H1N1 influenza or coronary heart disease, for example. The disease is regarded as having occurred one time, even if different clinical manifestations and even recurrences occur at various times thereafter. One shortcoming of the above calculation, though, is that it ignores the fact that a person who becomes a case is no longer at risk for the rest of that month, either.
We can improve our approximation by adjusting for the fact that once a person becomes a case, s/he is no longer at risk for the rest of the month. If we suppose that cases develop randomly during the months, it is approximately equivalent to treat them as though each case occurred in the middle of the month. In that event, each person who becomes a case is at risk for a half-month, on average. So the population time at risk during the first month is 200 people less 0.5 for the one person who became a case, which equals 199.5. The result is the same as counting the 199 people who did not become cases and adding 0.5 for the person who did become a case. Making this adjustment to the person-time for each month gives us an estimate that is smaller by the number of cases times 0.5 months.
If we do not know the number of people at risk in each small interval during the follow-up period, or if it is too tedious or not worth the effort to take the sum, we can use Method 3. To do this we can estimate the size of the population at risk by taking the average of the size at the beginning and the size at the end, and then multiplying this average size by the length of the time period. This method assumes that the number of cases is about the same in each month.
An equivalent way to estimate the average population size is to take the starting population and subtract half of the number of cases, since if we assume that cases occurred evenly during the period, each case was at risk for, on average, half of the period. So we take the 200 people we started with and subtract a half-person for each of the 12 cases. The result (194) is the approximate population size, which we then multiply by the length of the time period.
So we can proceed as if only the non-cases were followed up and then add in some person-time for the people who became cases or we can proceed as if everyone in the population were followed for the whole period but then reduce this for the people who became cases. If you want to go to the mezzanine in a hotel where the elevator stops only at the ground and first floors, you can take the elevator to the ground floor and walk up to the mezzanine or take the elevator to the 1st floor and walk down. So there are two equivalent ways in our numerical example.
All of these methods of estimating population-time assume that people are no longer at risk once they become a case. For some events we may want to regard people as still at risk of experiencing another event (e.g., experiencing a minor fall). In that case people could be regarded as being at risk during the entire period, so we would not need to reduce the size of the population at risk for the number of cases. But for various reasons we might want to define the event as “a first fall”, in which case we would proceed as before.
So our (average) incidence rate, sometimes called “incidence density” is defined as the number of new cases divided by population-time (the average size of the population at risk multiplied by the length of the time interval). The incidence rate tells us how rapidly cases have been occurring in the population. Note that when we are estimating incidence in an open population rather than a fixed cohort, we generally use the midpoint population as the average population at risk. So if there were 172,570 cases of lung cancer during 2005, and the population estimate for July 1st of that year is 280,000,000 people, then the (unadjusted) incidence of lung cancer is estimated as 0.000616 per year, or 61.6 per 100,000 person-years. Although “per year” is frequently omitted, the number is ambiguous without units (e.g., a rate can be expressed “per month during 2005”).
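The lung-cancer arithmetic in this note can be reproduced directly (a sketch):

```python
# Open-population incidence: 2005 lung cancer example from the notes
cases = 172_570
midyear_population = 280_000_000  # July 1 estimate = avg population at risk

ir = cases / midyear_population   # per person-year (interval is 1 year)
print(round(ir, 6))               # 0.000616
print(round(ir * 100_000, 1))     # 61.6 per 100,000 person-years
```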
Why have two terms (incidence rate, incidence density) for the same measure? Terminology used in epidemiology varies by person, place, and time. Different authors use different terms, different schools use different terms, and the terms in use change over time. We need to know the synonyms so we can understand what we hear and read. “Incidence rate” and “incidence density” both refer to the same measure.
What if we change our objective slightly and ask a different question? Instead of asking for the rate of occurrence, let us ask what proportion (of the population at risk) is affected after a certain time interval, say 5 months. For example, we might ask what proportion of women who become pregnant obtain prenatal care by the end of their fifth month of pregnancy.
In month 1, there was one new case.
In month 2, a second new case occurred.
By the end of month 3, three new cases have occurred.
Two new cases occurred in month 4, bringing the count to 5.
And in month 5 one more new case occurred, so that at the end of 5 months the proportion of the population that was affected was 6 cases in 200 people, or 0.03. If we prefer not to use decimals, or at least would rather have fewer decimal digits, we can express this proportion as a percentage (3%), which is the same as “per 100”, or we can express the proportion as a number per 1,000, as 30 per 1,000. These expressions are simply alternate ways of expressing the same value, and it is largely a matter of personal preference (or sometimes convention) which variant to use.
It is important, though, that we indicate the location of the decimal point, so that if we wish to write 0.03 as 30, we had better write it as 30 per 1,000 or 30/1,000. It is also important that we indicate the time interval. After all, this proportion was different after each month passed, so if we don’t specify “5 months” then the 0.03 is quite ambiguous.
So when we report incidence, “time is of the essence.” For an incidence rate, it is essential to state the unit of time. For an incidence proportion, it is essential to give the length of the time interval.
The measure we have just derived is called incidence proportion or cumulative incidence. Its formula is simply the number of new cases (regardless of what happened to them after they developed the disease) divided by the size of the population at risk. Here, we do not have to divide by the time interval, since we are not defining a rate of occurrence per unit time, but rather a summation of what has happened over a time interval. As noted, we do need to specify the length of the time interval.
The term “risk” is typically used to refer to the probability that an (adverse) event will occur. The average risk for a member of a cohort during a specified period of time is conveniently estimated by the incidence proportion (IP) for that cohort, assuming that the outcome is known for all cohort members at the end of the period. Because the average risk and IP have the same numerical value, the terms are often used interchangeably.
Since the IP measures the accumulation of cases over time, it is directly related to the rate at which cases are occurring. If the incidence rate (IR) is constant and each cohort member can become a case only once (or we define a “case” as the first event for a person), then the IP is approximately equal to the IR multiplied by the length of the time period. The approximation is very close when the IP is small (e.g., less than 0.10). When IP is not small, then the proportion of the cohort still at risk declines noticeably as cases occur. If IR is constant, then ln(1 – IP) = –IR*t, or IP = 1 – exp(-IR*t).
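That constant-rate relation is easy to check numerically (a sketch; the two IR values are borrowed from slides 47 and 48):

```python
import math

# Constant-rate relation from the notes: IP = 1 - exp(-IR * t)
def incidence_proportion(ir, t):
    return 1 - math.exp(-ir * t)

# Rare disease (IR = 0.005/month, 9 months): IP is close to IR * t
print(round(incidence_proportion(0.005, 9), 4))  # 0.044  vs 0.005*9 = 0.045
# Common disease (IR = 0.1/month, 9 months): the approximation breaks down
print(round(incidence_proportion(0.1, 9), 4))    # 0.5934 vs 0.1*9 = 0.9
```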
The availability of two kinds of incidence measures – incidence rate and incidence proportion (also called incidence density and cumulative incidence, respectively) – is sometimes a source of confusion and puzzlement about which one should be used in a given situation. Perhaps the simplest way to make the distinction is to keep in mind the word “cumulative” in cumulative incidence. The cumulative incidence is cumulative in the sense that it quantifies the situation at the end of the follow-up period. The incidence rate quantifies how rapidly cases are occurring during the follow-up period. The accumulation of all those occurrences is expressed in cumulative incidence. Incidence rate measures the process; cumulative incidence measures the end result.
This distinction suggests some ways for choosing between ID and CI. If the “risk period” – the period during which the population under study remains at risk – is long (e.g., adult cancers), then most studies will cover only a part of the risk period, and the length of the follow-up period will vary from study to study. It will therefore be much easier to compare incidence rates across different studies than it will be to compare incidence proportions, because the longer the follow-up period, the larger the incidence proportion (cumulative incidence) will tend to be.
In contrast, if the risk period is short (e.g., food poisoning after a contaminated meal), we are often more interested in comparing incidence proportions for groups of people according to the food they ate than in comparing their rates of disease per hour. Among other things, the rate will change during the time since exposure, so that an average will be a very inadequate summary. In fact, if we lengthen the follow-up time to include a longer period even though the number of new cases has become very small, the incidence rate may become tiny. The cumulative incidence, however, will change little.
When the incidence rate is low, the population at risk changes only slightly. So incidence proportion is approximately equal to the incidence rate multiplied by the period of observation, IP ~= IR x t.
When the incidence rate is high, then the population at risk diminishes noticeably, so that a constant incidence rate yields fewer cases and incidence proportion cannot rise as rapidly.
Incidence rate and incidence proportion are the two principal concepts that epidemiologists use to describe and analyze the frequency of occurrence of a disease or other health-related condition or characteristic in a population. Much of epidemiology deals with deaths, though, since counts of deaths are often more accurate than counts of cases of a disease that no one is paying special attention to. The relation between incidence (events) and mortality (deaths) depends on how often and/or rapidly the disease is fatal. One measure of this relation is the proportion of cases who die in a given time, the case fatality rate (it’s really a case fatality proportion, but proportions are often called “rates” in epidemiology, undoubtedly for sociological reasons).
Mathematically, the case fatality rate is completely analogous to a “cumulative incidence of death” where the population at risk consists of existing cases. There is also an analog to an “incidence rate of death” among cases, which is called the termination rate. Conveniently, the reciprocal of the termination rate is average survival time of an existing case.
Death is obviously important from a public health perspective, and because it is also important from a legal perspective, counts of deaths in themselves (i.e., deaths from any cause) are often more readily available and more accurate than other population statistics. So demography and descriptive epidemiology make extensive use of death rates (also called mortality rates). Do note the important distinction between fatality rate and mortality rate: a “fatality rate” is a proportion of people who have a condition, whereas a mortality rate is a rate (i.e., per unit time) for a population of people who may or may not have various conditions.
A mortality rate is essentially an “incidence density” of death in some population. Thus, the numerator consists of deaths (i.e., events) and the denominator consists of population-time. Since the most common type of mortality rate is an annual mortality rate, expressed per 1,000 person-years, the unit of time is often omitted. However, this is ambiguous, since any annual rate can also be expressed per month, per week, etc.
By convention, an estimate of the mid-year population is used as an estimate of the average population during the year. If the size of the population is not changing during the year, it makes no difference whether the population size from the beginning, middle, or end of the year is used – they will all be the same. If the population is steadily increasing or steadily decreasing, then the mid-year population will provide a better approximation to the average number of people at risk during the year than will other convenient choices.
Since multiplying the mid-year population by the interval length of one-year does not change its numerical value, the formula is written with just the mid-year population in the denominator. However, if the number of deaths is small one may want to compute an annual average over several years to obtain a more precise estimate. In that case the numerator will contain the total number of deaths occurring during the several-year period, and the denominator will contain an estimate of the entire population time, such as the mid-period population estimate multiplied by the number of years.
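Both calculations can be sketched with made-up counts (the populations and death counts below are hypothetical, chosen only to illustrate the formulas):

```python
# Single-year mortality rate: deaths divided by the mid-year population
# (multiplying by the 1-year interval does not change the value),
# expressed per 1,000 person-years.
deaths = 400
midyear_pop = 50_000
annual_rate = deaths / (midyear_pop * 1) * 1_000     # 8.0 per 1,000 person-years

# Several-year average for a small population: total deaths over 5 years
# divided by the mid-period population multiplied by 5 years.
deaths_5yr = 60
midperiod_pop = 1_500
avg_rate = deaths_5yr / (midperiod_pop * 5) * 1_000  # 8.0 per 1,000 person-years
```

The second form trades timeliness for precision: pooling five years of deaths smooths out the year-to-year noise in a small population.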
As noted, mortality data are often more available and are therefore often used when we might really prefer to analyze data on incidence. The problem, of course, is that unless a disease kills everyone who gets it and in about the same length of time, then differences in death rates may reflect differences in fatality rather than in the occurrence of the disease itself. So a group may appear to have a greater risk of a disease when the real reason their mortality rate is higher is that they have less access to effective medical care.
An additional problem is that the cause of death listed on the death certificate is often inaccurate. Many death certificates are filled out by doctors who have not been treating the patient and who may have fairly little information on which to base their decision on the cause of death. Also, many people when they die have multiple potentially fatal conditions, so that it can be difficult to decide which one was really responsible for the fatal outcome. So whereas the fact that someone has died is generally measured with high accuracy, the cause of death is somewhat error prone, especially for some causes of death.
Another proportion often used in epidemiology to measure disease frequency is called prevalence (and, mercifully, only that). Prevalence is the proportion of a population at risk that is affected at a given time, which could be at a given moment (which is called “point prevalence”) or over a period of time (“period prevalence”). The distinction between the two is often unclear, however, and usually one just says “prevalence”.
A critical distinction between cumulative incidence and prevalence is that incidence counts only newly occurring cases, whereas prevalence counts all cases that exist at the moment (or for “period prevalence”, at any time during the period).
Let’s estimate the prevalence in our example population. Suppose at a point in time, the situation is as shown in the slide – one person has died, one new case has just occurred, and five members of the population have developed the disease and continue to live with it.
Now, a new case occurs, and an existing case dies.
In the next month, two new cases occur, and no one dies.
Two more new cases, one more death.
So looking at the population at the present month, what is the prevalence?
First, how many people in the population have the disease?
Second, how large is the eligible population (the people eligible to have the disease, which for the moment we are assuming could have happened to anyone – women, men, children)?
Nine people have the disease at present, and since the cases who died are no longer in the population, the population at risk has 197 people. So the prevalence is 9/197, which of course we can write as 0.046, 4.6%, 46/1,000, or any numerically equivalent expression. Notice that, in contrast to incidence, the population at risk for prevalence does include the people living with the disease, since they are eligible to have the disease.
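The slide’s arithmetic can be checked directly (9 living cases out of a current population of 197, i.e., the original 200 minus the 3 who died):

```python
living_cases = 9
current_population = 197   # 200 initial members minus 3 deaths

prevalence = living_cases / current_population
print(f"{prevalence:.3f} = {prevalence * 1000:.0f} per 1,000")
```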
Let’s consider several fine points.
First, who comprises the population at risk? What about a disease that occurs only in women or only in men? What is the population at risk for developing or having prostate cancer? Clearly, men. What about the population at risk for developing breast cancer? Women – and men, though the occurrence of breast cancer is so much less frequent in men that we would generally want to compute separate measures of breast cancer for women and men, rather than an average for both sexes combined.
What about endometrial cancer? Clearly women, but if they have had a hysterectomy, then they are not at risk of this disease. So ideally we would use in our denominator for incidence or prevalence only women with a uterus. Of course, we often don’t know how many women in a population have had a hysterectomy. Here, the third consideration we listed in the beginning – availability of data – may force us to include women in the denominator even if they have had a hysterectomy and are no longer biologically “at risk”. How we treat women who have had a hysterectomy – including those whose hysterectomy was itself part of treatment for endometrial cancer – depends on the study question and whether we can know this information.
What about differences in genetic susceptibility? After all, the differences we just discussed for women and men were certainly genetically based. What do we do about people who, due to other aspects of their genetic make-up, are not in fact at risk of developing the disease? In principle we would want to omit them from the denominator. That is what we do, after all, when we omit women from the denominator for prostate cancer or when we calculate separate breast cancer incidences for women and men. But most of the time we do not have information on genetic susceptibility, either in terms of being “at risk” or “not at risk” or in terms of different levels of risk. So the “availability of data” consideration generally forces us to include all people whom we believe might be susceptible in the denominator.
Here is another fine point. How do we measure time when we construct an incidence rate? Are 10 people followed for 10 years always equivalent to 100 people followed for 1 year? Not necessarily.
It is possible that the level of risk changes during the time a person is being followed, for example, because the person is getting older or becoming increasingly susceptible. If we know that people with different lengths of follow-up are not equivalent, then we would not want to simply average their disease experience together. Instead, we would analyze the incidence separately for each duration of follow-up.
Also, if the incidence is changing over calendar time, then rather than estimate an overall incidence rate (essentially, an average), we might prefer to estimate an incidence rate for each part of the time period.
Another fine point – or again, maybe not such a fine point – is that we must remember to state the units in which we are measuring time and the location of the decimal point, unless they are clear from the context. After all, if we just say “120 per 100,000”, that could mean 120 per 100,000 person-years (as it often would). But we could write the same incidence rate as “10 per 100,000 person-months”, and few people would realize that by 10 per 100,000 (months) we meant the same rate as 120 per 100,000 (years). There are serious hazards from a failure to specify units, as the following slide illustrates.
“You can never, never take anything for granted.”
That was the lesson drawn by Noel Hinners, vice president for flight systems at Lockheed Martin Astronautics in Denver, Colorado, concerning the loss of the Mars Climate Orbiter, which occurred because the Lockheed Martin spacecraft team reported measurements in English units while the orbiter’s navigation team at the Jet Propulsion Laboratory (JPL) in Pasadena, California, assumed the measurements were in metric units.
Here, a failure to specify the units of some measurements led to the loss of a spacecraft costing hundreds of millions of dollars. So please, always specify the units in which you are expressing a measurement!
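In that spirit, the unit conversion mentioned earlier is a one-line check: 120 per 100,000 person-years is the very same rate as 10 per 100,000 person-months.

```python
rate_per_100k_py = 120                   # per 100,000 person-years
rate_per_100k_pm = rate_per_100k_py / 12 # per 100,000 person-months

print(rate_per_100k_pm)  # same rate, different (and essential!) time units
```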
In order for a case to exist, it must first occur. So in any population there must be some relation between incidence and prevalence. In general, the greater the incidence, the greater the prevalence. However, the prevalence also depends on the duration of the disease, since cures, deaths, and out-migration all remove existing cases from the population. In addition, existing cases may migrate into the population, so it is possible for prevalence to increase even without an increase in incidence.
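One standard way to express this relation (a steady-state result found in epidemiology texts, not derived in the passage above) is that the prevalence odds equal the incidence rate times the average disease duration: P/(1 – P) = IR × D, so for a rare disease P ≈ IR × D. A sketch with assumed numbers:

```python
ir = 0.002       # assumed: 2 new cases per 1,000 person-years
duration = 5.0   # assumed: average years lived with the disease

prev_odds = ir * duration                 # P / (1 - P) in steady state
prevalence = prev_odds / (1 + prev_odds)  # about 0.0099
rare_approx = ir * duration               # 0.010 -- close, since the disease is rare
```

The sketch makes the text’s point concrete: holding incidence fixed, anything that lengthens duration (better survival without cure) raises prevalence, and anything that shortens it (cure, death, out-migration) lowers it.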
A traditional diagram (I’ve used one for two decades, and Leon Gordis has one in his textbook) for illustrating the relation among incidence, prevalence, mortality, cure, and migration is a bathtub. However, the bathtub analogy is deficient in that the rate of flow at the faucet (representing incidence) is not proportional to the (unseen) size of the reservoir (representing the population at risk).
My current physical analogy for the relation of incidence and prevalence is the popcorn maker. The unpopped kernels placed in the cooker are the “population at risk”. An event (case) is the popping of a kernel. A popped kernel still in the cooker is a prevalent (existing) case. And kernels removed from the cooker represent deaths, cures, and outmigration (inmigration of cases would be represented by adding popped kernels to the cooker – which you might do if it were the end of the night and you wanted to empty and clean one of two cookers, keeping the other one available for late-comers).
[I’m grateful to Linda Robertson for correcting my spelling of “kernel”!]
Turning up the heat increases the “incidence”. Opening the plastic door that holds the popcorn inside decreases “prevalence”. Kernels that do not have enough moisture in them to pop are “not susceptible”. Kernels in which steam is building up but which have not yet popped are at very high risk (or, if you define the event as the vaporization of the moisture in the kernel, such kernels are in a presymptomatic stage).
As mentioned last week, populations are diverse (different from each other) and heterogeneous (different within themselves). Yet we typically want to make comparisons across populations. Some of the differences between populations are of interest to us, but others are actually a distraction. For example, if we want to compare the death rates across several populations, differences in their age structure (which strongly influences death rates) could well interfere with the objective of our comparison. Developing countries, for example, tend to have much younger age distributions than countries that have been industrialized for many years. Because of their different age distributions, the death rate for a developing country may well be lower than that for an industrialized country, even if people of the same age have a lower death rate in the industrialized country.
Therefore, when there are differences between populations that may distort or cloud our comparison of interest, we often adjust that comparison to take account of the population differences that are not of interest at the moment. One method of doing this, widely used for presenting death rates, is called standardization. The strategy for standardization is basically straightforward: divide each population into subgroups defined by the factor(s) to be adjusted for. Estimate the death rate separately for each subgroup. Then take a weighted average across all of the subgroups. The key is to use the same set of weights for each population.
For the standardization to be straightforward, however, one does need to remember how weighted averages work. Here is a familiar example. Suppose you want to compare the fuel efficiency of two cars. Of course, any car will get better fuel efficiency in Interstate driving (e.g., 0.050 LpK) than in other driving (e.g., 0.100 LpK). So if we want to compare fuel efficiency from information on kilometers driven and fuel consumed, we would make separate comparisons for Interstate and non-Interstate driving.
If we wanted to have a single number for each car, we could construct a weighted average of liters of petrol per kilometer driven (LpK) for each type of driving. [I used to use an example based on miles per gallon, but an EPID600 student pointed out to me that my algebra was incorrect, so I had to change to gallons per mile. Since we’re now a school of global public health, I thought I should switch to metric units. That is the reason the example uses liters (L) of petrol per kilometer (km).]
So to summarize fuel efficiency by a single number for each vehicle, we construct a weighted average that combines fuel consumption in highway driving with fuel consumption in city driving.
Juan drives a Subaru 800 km on Interstate highways and 200 km on other roads. His car uses 0.050 LpK on Interstates and 0.100 LpK on other roads, using a total of 60 liters of petrol on the 1000 km trip, an average of 0.060 LpK (60 L / 1000 km). His overall LpK of 0.060 can be expressed as a weighted average:
(800/1000) x 0.050 LpK + (200/1000) x 0.100 LpK
= 0.80 x 0.050 LpK + 0.20 x 0.100 LpK = 0.060 LpK
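Juan’s figures can be verified in Python, both as total fuel over total distance and as the weighted average just shown:

```python
# (kilometers driven, liters per kilometer) for each type of driving
segments = [(800, 0.050),   # Interstate
            (200, 0.100)]   # other roads

total_km = sum(km for km, _ in segments)              # 1000
total_liters = sum(km * lpk for km, lpk in segments)  # 60.0
overall_lpk = total_liters / total_km                 # 0.060

# The same number as a weighted average, weighting each segment's LpK
# by its share of the total distance driven:
weighted = sum((km / total_km) * lpk for km, lpk in segments)
```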
In fact, any overall average can be regarded as a weighted average. In this case we weight each type of fuel efficiency by the proportion of that type of driving.
Shizu’s overall fuel efficiency is also a weighted average, but in this case she drove a larger proportion of her trip on non-Interstate roads. Shizu drives her Mazda for 200 km on Interstate and 800 km on other roads. She uses 0.045 LpK in Interstate driving and 0.080 LpK on non-Interstate. So her total fuel consumption is 73 liters, or 0.073 LpK. In spite of the fact that her car was more efficient in Interstate driving and also in non-Interstate driving, Shizu used more fuel for the same trip.
To understand how this can be, let’s express Shizu’s overall LpK as a weighted average:
(200/1,000) x 0.045 LpK + (800/1,000) x 0.080 LpK
= 0.20 x 0.045 LpK + 0.80 x 0.080 LpK = 0.073 LpK
This expression makes clear that although the numbers for LpK are smaller for Shizu’s Mazda, she is driving more of her distance at her lower fuel efficiency.
When we go to compare their fuel efficiency, we have a problem. The overall LpK figure for Juan reflects both the fuel efficiency of his car and the fact that he drove a larger proportion of his trip on Interstates than did Shizu.
In this table, the distances driven have been replaced by the percentage distributions of the total kilometers driven by each driver. Juan’s overall LpK (0.060) reflects the fact that 80% of his trip was on Interstates. Shizu’s 0.073 LpK, in turn, reflects the fact that she drove only 20% of her trip on Interstates. So compared to Juan, her car looks less favorable than it should, because although her fuel efficiency is better both on Interstates and on other roads, she drove most of her trip at her relatively worse fuel efficiency, whereas Juan drove most of his trip at his relatively better fuel efficiency.
If we want to make a “fair” comparison, or a comparison that “standardizes” for the type of driving, we would recompute the weighted averages by using the same set of weights for both cars. In the above table I chose 60% Interstate, 40% non-Interstate. These “standardized” summaries result in a different comparison. Other weights could have been chosen.
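A sketch of the standardized comparison, using the 60% Interstate / 40% non-Interstate weights from the table:

```python
# Segment-specific fuel use (liters per kilometer) for each driver's car
lpk = {
    "Juan":  {"interstate": 0.050, "other": 0.100},
    "Shizu": {"interstate": 0.045, "other": 0.080},
}
weights = {"interstate": 0.60, "other": 0.40}  # the same weights for both cars

standardized = {
    driver: sum(weights[seg] * rates[seg] for seg in weights)
    for driver, rates in lpk.items()
}
# Juan:  0.60 x 0.050 + 0.40 x 0.100 = 0.070 LpK
# Shizu: 0.60 x 0.045 + 0.40 x 0.080 = 0.059 LpK
```

Because the same weights are applied to both cars, the standardized figures reflect only the cars’ fuel efficiency, not the composition of each trip.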
Which comparison is correct? Both are correct. The first one (“Total”) shows what really happened, but that was influenced by trip composition. The second comparison (“Standardized”) is hypothetical, but it removes the influence of trip composition. Each answers a different question.
This slide illustrates the computation of the standardized average liters per kilometer. It is clear that the choice of weights will affect the comparison, even if we use the same weights for both cars. In this example Shizu’s Mazda uses less fuel in both types of driving, but her advantage is larger on non-Interstate roads (0.080 vs. 0.100 LpK) than on Interstates (0.045 vs. 0.050 LpK). So the higher the weight for non-Interstate driving, the more the standardized comparison will favor Shizu’s Mazda; the higher the weight for Interstate driving, the smaller her advantage over Juan’s Subaru will appear.
Which weights should one use? As with other decisions about measures, the choice depends on our objective, our conceptual understanding of the phenomenon, and the availability of data. Convention also has a role. If we want to compare our results to those that others have published, using the same set of weights (often referred to as the “standard population”) as did the other studies improves the comparability across studies. If the data are sparse, we may want to avoid assigning large weights to imprecise estimates. This issue and related topics are covered in standard textbooks as well as in the Evolving Text.
And finally, to remind us that learning was fun, here are a few more “famous last words”.
“I'm just glad it'll be Clark Gable who's falling on his face and not Gary Cooper.” (attributed to Gary Cooper on his decision not to take the leading role in the classic epic of the U.S. Civil War, “Gone with the Wind”).