Program on Mathematical and Statistical Methods for Climate and the Earth System Opening Workshop, Projecting Health Impacts of Climate Change: Embracing an Uncertain Future - Howard Chang, Aug 23, 2017
Global climate change affects human health most notably by increasing the frequency and intensity of dangerous heat waves, wildfires and hurricanes. In addition to extreme weather events, climate change can also lead to a myriad of persistent environmental changes that impact public health. Health impact assessment refers to the analytic framework for evaluating how a policy or program affects population health. It is frequently applied in climate and public health research to quantify future health and economic burdens attributable to various consequences of climate change.
Performing health impact assessment entails the integration of various data. Projecting future climate-related health impacts requires three sources of information: (1) health effects of environmental exposures, (2) projections of future exposures, and (3) distributions of exposures and effects in the future population. Each information source is subject to uncertainty because of data availability and assumptions made about the future. Climate research is highly interdisciplinary, bringing together a tremendous amount of data, theory, and modeling effort to provide timely knowledge for one of the most pressing issues of our time. Statistical modeling techniques and probabilistic reasoning can play an important role in ensuring these findings are informative, accurate, and reproducible.
This presentation will discuss recent developments in statistical methods for quantifying health impacts of climate change, as well as related open problems in environmental epidemiology and exposure assessment.
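As a concrete (and entirely hypothetical) illustration of how the three information sources combine, the sketch below propagates uncertainty from each source through a simple Monte Carlo health impact calculation. All numbers are invented for illustration and are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 10_000

# (1) Health effect: log relative risk per 10-unit exposure increase,
#     with sampling uncertainty from an epidemiological study (hypothetical values)
log_rr = rng.normal(loc=np.log(1.02), scale=0.005, size=n_sim)

# (2) Exposure projection: future increase in exposure across climate scenarios
delta_x = rng.normal(loc=5.0, scale=2.0, size=n_sim)

# (3) Future population: baseline annual deaths in the exposed population
baseline = rng.normal(loc=50_000, scale=5_000, size=n_sim)

# Attributable deaths under a log-linear concentration-response function
attrib = baseline * (1.0 - np.exp(-log_rr * delta_x / 10.0))

lo, med, hi = np.percentile(attrib, [2.5, 50.0, 97.5])
print(f"attributable deaths: median {med:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```

Because the three sources enter the calculation jointly, dropping the uncertainty from any one of them visibly narrows the resulting interval, which is the sense in which such assessments must "embrace an uncertain future."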
Climate change affects human health in several different ways, and one important pathway is through changes in air pollution. Here I will discuss the current state of the science on how climate change affects air pollution, and the resulting effects on human health, drawing from the broad literature and highlighting studies from my lab. In particular, I will discuss: 1) the effects of air pollution on health globally, 2) how climate change affects air pollution, 3) the effects of climate change on air pollution and health globally, and 4) the co-benefits of greenhouse gas mitigation for air pollution and health, globally and in the US.
Evidence suggests that exposure to elevated concentrations of air pollution during pregnancy may increase risks of birth defects and other adverse birth outcomes. While current regulations put limits on total PM2.5 concentrations, there are many speciated pollutants within this size class that likely have distinct effects on perinatal health. However, because these speciated pollutants are correlated with one another, it can be difficult to decipher their individual effects in a model for birth outcomes. To address this difficulty, we develop a new multivariate spatio-temporal Bayesian model for speciated particulate matter using dynamic spatial factors. These spatial factors can then be interpolated to the pregnant women’s homes for use in a birth outcomes model. The model for birth outcomes allows the impact of pollutants to vary across different weeks of the pregnancy in order to identify susceptible periods. The proposed methodology is implemented using pollutant monitoring data from the Environmental Protection Agency and birth records from the National Birth Defect Prevention Study.
Work in collaboration with Kimberly Kaufeld, Brian Reich, Amy Herring, Gary Shaw and Maria Terres.
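The idea of window-specific ("susceptible period") effects can be illustrated with a minimal simulation. The sketch below uses a plain logistic regression on invented trimester-average exposures, not the Bayesian spatio-temporal factor model the abstract describes; when all windows are entered jointly, the window carrying the true effect stands out.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical trimester-average PM2.5 exposures (invented values, not NBDPS data)
expo = rng.normal(10.0, 3.0, size=(n, 3))      # columns: trimesters 1, 2, 3

# True model: only first-trimester exposure raises the log-odds of the adverse outcome
true_beta = np.array([0.05, 0.0, 0.0])
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + expo @ true_beta)))
y = (rng.random(n) < p_true).astype(float)

# Logistic regression with all three windows entered jointly, fit by Newton-Raphson
X = np.column_stack([np.ones(n), expo])
beta = np.zeros(4)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    w = mu * (1.0 - mu)
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))

ors = np.exp(beta[1:])
print("odds ratio per unit exposure, by trimester:", np.round(ors, 3))
```

In the real problem the exposures are also spatially misaligned and mutually correlated, which is what motivates the factor-model and interpolation machinery in the abstract.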
A presentation made at the 2015 NC BREATHE Conference by Jason West, PhD of University of North Carolina - Chapel Hill. Sponsored by Clean Air Carolina and partners, the 2015 NC BREATHE Conference was held on March 27, 2015 in Raleigh, NC to bring together air quality researchers, medical and public health professionals, and policymakers to share the latest research on the health impacts of air pollution, the positive health outcomes related to clean air policy-making, and the resulting economic benefits.
Why is it so hard to reduce household air pollution among the very poor? - Leith Greenslade
What cooking technologies can deliver lasting reductions in exposure to household air pollution among the very poor? This is THE question. Learn more from four experts, including Neil Schluger and Darby Jack (Columbia University), Alison Lee (Icahn School of Medicine Mt Sinai) and Joshua Rosenthal (NIH), on the latest research and the most promising technologies, especially the new efforts to reroute government fuel subsidies from the middle class to the very poor (e.g. India Give it Up Campaign for LPG).
Estimating the Probability of Earthquake Occurrence and Return Period Using G... - sajjalp
In this paper, the relationship between earthquake frequency and magnitude has been modeled with generalized linear models for a set of earthquake data from Nepal. Goodness-of-fit tests were applied to the candidate generalized linear models, and, based on the model selection information criteria (the Akaike information criterion and the Bayesian information criterion), a generalized Poisson regression model was selected as the most suitable model for the study. The objective of this study is to determine the parameters (a and b values), to estimate the probability of an earthquake occurrence and its return period using a Poisson regression model, and to compare the results with the Gutenberg-Richter model. The study suggests that the probabilities of earthquake occurrence and the return periods estimated by the two models are relatively close to each other, although the return periods from the generalized Poisson regression model are somewhat smaller than those from the Gutenberg-Richter model.
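The two approaches can be sketched on an invented frequency-magnitude table (not the Nepal catalog used in the paper): a least-squares Gutenberg-Richter fit on cumulative counts, and a Poisson generalized linear model fit by Newton-Raphson, each yielding a return period for magnitude 6+ events.

```python
import numpy as np

# Illustrative earthquake counts by magnitude bin (invented, not the Nepal catalog)
mags = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])
counts = np.array([120.0, 62.0, 30.0, 14.0, 7.0, 3.0])   # events over a 50-year window
years = 50.0

# Gutenberg-Richter: log10 N(M' >= M) = a - b*M, least squares on cumulative counts
cum = counts[::-1].cumsum()[::-1]
A = np.column_stack([np.ones_like(mags), -mags])
(a, b), *_ = np.linalg.lstsq(A, np.log10(cum), rcond=None)

# GLM alternative: counts ~ Poisson(exp(c0 + c1*M)), fit by Newton-Raphson
X = np.column_stack([np.ones_like(mags), mags])
beta = np.linalg.lstsq(X, np.log(counts), rcond=None)[0]   # log-linear start values
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (counts - mu))

# Annual exceedance rate and return period for M >= 6.0 under each model
rate_gr = 10.0 ** (a - b * 6.0) / years
rate_pois = np.exp(X[mags >= 6.0] @ beta).sum() / years
print(f"return period for M>=6: GR {1/rate_gr:.1f} yr, Poisson {1/rate_pois:.1f} yr")
```

On well-behaved data the two return-period estimates land close together, mirroring the paper's finding; the real analysis additionally compares the models formally via AIC and BIC.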
Air Pollution and Multiple Sclerosis: An Ecological Study - DataNB
The objective of this research is to explore the link between air pollution and multiple sclerosis (MS). MS is a chronic progressive neurological disease of young adulthood that results in significant physical and cognitive disability. Rates of MS in NB are among the highest in the world. Greater exposure to air pollution has been previously implicated as a risk factor, and basic science studies demonstrate that pollutants can cross the blood-brain barrier. We previously conducted a prevalence study and identified regional variation in MS prevalence in NB. To explore this geographic variability, we compared air pollution levels and MS prevalence using data housed at NB-IRDT. We stratified MS cases by geography into one of the thirty-three Health Council Communities (HCCs) and assigned long-term air pollution levels (particulate matter <2.5 µm (PM2.5), nitrogen dioxide (NO2), sulphur dioxide (SO2), and ozone (O3)) from the Canadian Urban Environmental Health Research Consortium (CANUE) to each HCC. Average pollutant levels were all below established Canadian air quality standards. We found PM2.5 was positively associated with MS prevalence. Our results offer additional evidence for a link between ambient PM2.5 and MS, even in areas with low air pollution levels such as NB.
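A minimal version of such an ecological analysis might look like the following sketch, with invented region-level data standing in for the actual NB-IRDT and CANUE values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions = 33   # mirroring the 33 Health Council Communities

# Invented region-level data (not NB-IRDT or CANUE values)
pm25 = rng.uniform(3.0, 8.0, n_regions)            # long-term mean PM2.5, ug/m3
cases = rng.poisson(np.exp(1.0 + 0.08 * pm25))     # MS cases per 10,000 population

# Ecological Poisson regression: log prevalence linear in PM2.5, Newton-Raphson fit
X = np.column_stack([np.ones(n_regions), pm25])
beta = np.linalg.lstsq(X, np.log(cases + 0.5), rcond=None)[0]   # starting values
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (cases - mu))

pr = np.exp(beta[1])
print(f"prevalence ratio per 1 ug/m3 PM2.5: {pr:.3f}")
```

The printed prevalence ratio reflects only the simulated effect; as the abstract notes, an ecological design like this associates region-level averages and cannot by itself establish individual-level causation.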
Particulate matter from open lot dairies and cattle feeding: recent developments - LPE Learning Center
The full proceedings paper is at: http://www.extension.org/72731
The research community is making good progress in understanding the mechanical, biochemical, and atmospheric processes that are responsible for airborne emissions of particulate matter (PM, or dust) from open-lot livestock production, especially dairies and cattle feedyards. Recent studies in Texas, Kansas, Nebraska, Colorado, California, and Australia have expanded the available data on both emission rates and abatement measures. Although the uncertainties associated with our estimates of fugitive emissions are still unacceptably high, we have learned from our recent experience with ammonia that using a wide variety of credible measurement techniques, rather than focusing on one so-called “standard” technique, may be the better way to improve confidence in our estimates. Whereas the most promising control measures for gaseous emissions continue to be dietary strategies, for particulate matter corral-surface management and moisture management play comparable roles, depending on the mechanical strength of soils and the availability of water, respectively. The cost per unit reduction of emitted mass attributable to these abatement measures varies as widely as the emissions estimates themselves, so we need to intensify our emphasis on process-based emissions research to (a) reduce the variances in our emissions estimates and (b) mitigate the contingency of prior, empirically based estimates. As a general rule, although cattle feedyard emission factors may be thought a reasonable starting point for estimating emissions from open-lot dairies, such estimates should be viewed with suspicion.
A typical problem in environmental epidemiological studies concerns environmental exposure assessment. In this talk, we will discuss challenges to environmental exposure assessment and showcase statistical methods that have been developed to obtain estimates of environmental exposures (e.g., air pollution, temperature). Further, we will discuss whether and how uncertainty in the environmental exposure has been, and can be, incorporated into health analyses.
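One of the simplest such exposure-assignment methods is inverse-distance weighting of monitor observations to a residence. The sketch below uses an invented monitoring network; in practice, methods such as kriging or land-use regression are often preferred because they also quantify prediction uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical monitoring network: 25 sites in a 100 km x 100 km region
sites = rng.uniform(0.0, 100.0, size=(25, 2))
# Long-term PM2.5 means with a smooth west-to-east gradient plus noise (invented)
pm = 8.0 + 0.05 * sites[:, 0] + rng.normal(0.0, 0.5, 25)

def idw(target, sites, values, power=2.0):
    """Inverse-distance-weighted prediction at an unmonitored location."""
    d = np.linalg.norm(sites - target, axis=1)
    if np.any(d < 1e-9):                 # target coincides with a monitor
        return float(values[np.argmin(d)])
    w = d ** -power
    return float(w @ values / w.sum())

home = np.array([50.0, 50.0])            # a study participant's residence
pred = idw(home, sites, pm)
print(f"predicted long-term exposure at residence: {pred:.2f} ug/m3")
```

Because the prediction is a convex combination of monitor values, it always lies within the observed range; the harder question the talk raises is how to carry the prediction error forward into the health model.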
Perspectives of predictive epidemiology and early warning systems for Rift Va... - ILRI
Presentation by MO Nanyingi, GM Muchemi, SG Kiama, SM Thumbi and B Bett at the 47th annual scientific conference of the Kenya Veterinary Association held at Mombasa, Kenya, 24-27 April 2013.
Presentation by D Prabhakaran DM during the panel on 'Health Effects of Exposure to Air Pollution', as part of the CPR Initiative on Climate, Energy and Environment Clearing the Air Seminar Series. This event was organised in partnership with the Public Health Foundation of India (PHFI).
Pairwise Comparison of Daily Ozone Concentration in Tampa-St.Petersburg Regio... - Kalaivanan Murthy
A statistical examination of ozone variation between days.
(more description coming soon)
Please visit this link for abstract, and R code: https://www.slideshare.net/KalaivananMurthy/pairwise-comparison-of-daily-ozone-concentration-in-tampastpetersburg-region-abstract-r-program
Managing Health and Disease Using Omics and Big Data - Laura Berry
Presented at the NGS Tech and Applications Congress: USA. To find out more, visit:
www.global-engage.com
Michael Snyder is a Professor, Chair of Genetics and Director of the Stanford Center for Genomics and Personalized Medicine at Stanford University. In this presentation Michael discusses using omics and big data to predict disease risk and catch early disease onset.
Fluwitter: An Ontology-Based Framework for Formulating Spatio-Temporal Infl... - Udaya Jayawardhana
Fluwitter is a fully functional, Twitter-based, real-time influenza monitoring system that produces interpolated web maps showing the current influenza situation within the respective geographical region.
https://etd.ohiolink.edu/pg_10?0::NO:10:P10_ETD_SUBID:115789
264 volume 123 number 3 March 2015 • Environmental Health .docx - eugeniadean34240
264 volume 123 | number 3 | March 2015 • Environmental Health Perspectives
Research | Children’s Health
A Section 508–conformant HTML version of this article is available at http://dx.doi.org/10.1289/ehp.1408133.
Introduction
Autism spectrum disorder (ASD) is a developmental disorder with increasing reported prevalence worldwide (French et al. 2013). Although genetics plays a strong role in ASD, evidence suggests that environmental exposures, particularly in utero or during early life, also affect ASD risk (Grønborg et al. 2013; Hallmayer et al. 2011; Quaak et al. 2013). However, no specific environmental toxicant has been consistently associated with increased risk of ASD.
Air pollution contains various toxicants that have been found to be associated with neurotoxicity and adverse effects on the fetus in utero (Crump et al. 1998; Grandjean and Landrigan 2006; Rice and Barone 2000; Rodier 1995; Stillerman et al. 2008). Airborne particles are covered with various contaminants, and have been found to penetrate the subcellular environment and induce oxidative stress and mitochondrial damage in vitro (Li et al. 2003; MohanKumar et al. 2008). In rodents, these particles also have been found to stimulate inflammatory cytokine release systemically and in the brain, and alter the neonatal immune system (Hertz-Picciotto et al. 2005, 2008)—processes that have been implicated in ASD (Depino 2013; Napoli et al. 2013).
Several studies have explored associations of air pollution with ASD, using the U.S. Environmental Protection Agency (EPA) hazardous air pollutant models, distance to freeway, or local models for specific pollutants. These studies suggest increased odds of having a child with ASD with higher exposures to diesel particulate matter (PM) (Roberts et al. 2013; Windham et al. 2006), several metals (Roberts et al. 2013; Windham et al. 2006), criteria pollutants (Becerra et al. 2013; Volk et al. 2013), and some organic materials, as well as closer proximity to a freeway (Volk et al. 2011).
Our goal was to explore the association between ASD and exposure to PM during defined time periods before, during, and after pregnancy, within the Nurses’ Health Study II (NHS II), a large, well-defined cohort with detailed residential history. This nested case–control study includes participants from across the continental United States, and exposure was linked to monthly data on two size fractions of PM.
Methods
Participants. The study population included offspring of participants in NHS II, a prospective cohort of 116,430 U.S. female nurses 25–43 years of age when recruited in 1989, followed biennially (Solomon et al. 1997). NHS II participants originally were recruited from 14 states in all regions of the continental United States, but they now reside in all 50 states. The study was approved by the Partners Health Care Institutional Review Board and complied with a.
A great deal is happening in lupus-related research. This presentation will update participants on recent research developments and their impact on those affected by lupus. Dr. Petri will provide an overview of current lupus research and the prospects for the future of lupus treatments. Learn how to better manage your lupus and make knowledgeable decisions regarding your treatment plan.
Presented at 'Changing Perspectives: 1st International Conference on Transport and Health', London, 6 -8 July 2015
Haneen Khreis, Charlotte Kelly, James Tate, Roger Parslow, Karen Lucas
These are source references for information in the slideshow and script.
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
I will discuss paradigmatic statistical models of inference and learning from high dimensional data, such as sparse PCA and the perceptron neural network, in the sub-linear sparsity regime. In this limit the underlying hidden signal, i.e., the low-rank matrix in PCA or the neural network weights, has a number of non-zero components that scales sub-linearly with the total dimension of the vector. I will provide explicit low-dimensional variational formulas for the asymptotic mutual information between the signal and the data in suitable sparse limits. In the setting of support recovery these formulas imply sharp 0-1 phase transitions for the asymptotic minimum mean-square-error (or generalization error in the neural network setting). A similar phase transition was analyzed recently in the context of sparse high-dimensional linear regression by Reeves et al.
Many different measurement techniques are used to record neural activity in the brains of different organisms, including fMRI, EEG, MEG, lightsheet microscopy and direct recordings with electrodes. Each of these measurement modes has advantages and disadvantages concerning the resolution of the data in space and time, the directness of measurement of the neural activity, and which organisms it can be applied to. For some of these modes and for some organisms, significant amounts of data are now available in large standardized open-source datasets. I will report on our efforts to apply causal discovery algorithms to, among others, fMRI data from the Human Connectome Project, and to lightsheet microscopy data from zebrafish larvae. In particular, I will focus on the challenges we have faced both in terms of the nature of the data and the computational features of the discovery algorithms, as well as the modeling of experimental interventions.
Bayesian Additive Regression Trees (BART) has been shown to be an effective framework for modeling nonlinear regression functions, with strong predictive performance in a variety of contexts. The BART prior over a regression function is defined by independent prior distributions on tree structure and leaf or end-node parameters. In observational data settings, Bayesian Causal Forests (BCF) has successfully adapted BART for estimating heterogeneous treatment effects, particularly in cases where standard methods yield biased estimates due to strong confounding.
We introduce BART with Targeted Smoothing, an extension which induces smoothness over a single covariate by replacing independent Gaussian leaf priors with smooth functions. We then introduce a new version of the Bayesian Causal Forest prior, which incorporates targeted smoothing for modeling heterogeneous treatment effects which vary smoothly over a target covariate. We demonstrate the utility of this approach by applying our model to a timely women's health and policy problem: comparing two dosing regimens for an early medical abortion protocol, where the outcome of interest is the probability of a successful early medical abortion procedure at varying gestational ages, conditional on patient covariates. We discuss the benefits of this approach in other women’s health and obstetrics modeling problems where gestational age is a typical covariate.
Difference-in-differences is a widely used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale-dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, Angrist and Pischke (2009) show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting.
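The bracketing relationship is easy to reproduce by simulation. In the sketch below (an invented data-generating process), ignorability given the lagged outcome holds and the true effect is positive, so the difference-in-differences estimate overshoots while the lagged-dependent-variable regression recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 50_000, 1.0                     # tau is the true treatment effect

# Simulated panel in which ignorability *given the lagged outcome* holds:
# units with low pre-period outcomes are more likely to receive treatment,
# and outcomes mean-revert over time (so parallel trends fails).
y0 = rng.normal(0.0, 1.0, n)                          # pre-period outcome
d = (rng.random(n) < 1.0 / (1.0 + np.exp(y0))).astype(float)
y1 = 0.5 * y0 + tau * d + rng.normal(0.0, 1.0, n)     # post-period outcome

# Difference-in-differences: change among treated minus change among controls
did = (y1[d == 1] - y0[d == 1]).mean() - (y1[d == 0] - y0[d == 0]).mean()

# Lagged-dependent-variable regression: y1 ~ 1 + d + y0, by OLS
X = np.column_stack([np.ones(n), d, y0])
ldv = np.linalg.lstsq(X, y1, rcond=None)[0][1]

print(f"true effect {tau:.2f}; DiD estimate {did:.2f}; lagged-DV estimate {ldv:.2f}")
```

Reversing the simulation (parallel trends true, ignorability false) flips the ordering, which is exactly the bracketing behavior described above: the two estimates sandwich the truth, whichever assumption actually holds.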
We develop sensitivity analyses for weak nulls in matched observational studies while allowing unit-level treatment effects to vary. In contrast to randomized experiments and paired observational studies, we show for general matched designs that over a large class of test statistics, any valid sensitivity analysis for the weak null must be unnecessarily conservative if Fisher's sharp null of no treatment effect for any individual also holds. We present a sensitivity analysis valid for the weak null, and illustrate why it is conservative if the sharp null holds through connections to inverse probability weighted estimators. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and is valid for the weak null under additional assumptions which may be deemed reasonable by practitioners. The methods may be applied to matched observational studies constructed using any optimal without-replacement matching algorithm, allowing practitioners to assess robustness to hidden bias while allowing for treatment effect heterogeneity.
The world of health care is full of policy interventions: a state expands eligibility rules for its Medicaid program, a medical society changes its recommendations for screening frequency, a hospital implements a new care coordination program. After a policy change, we often want to know, “Did it work?” This is a causal question; we want to know whether the policy CAUSED outcomes to change. One popular way of estimating causal effects of policy interventions is a difference-in-differences study. In this controlled pre-post design, we measure the change in outcomes of people who are exposed to the new policy, comparing average outcomes before and after the policy is implemented. We contrast that change to the change over the same time period in people who were not exposed to the new policy. The differential change in the treated group’s outcomes, compared to the change in the comparison group’s outcomes, may be interpreted as the causal effect of the policy. To do so, we must assume that the comparison group’s outcome change is a good proxy for the treated group’s (counterfactual) outcome change in the absence of the policy. This conceptual simplicity and wide applicability in policy settings make difference-in-differences an appealing study design. However, the apparent simplicity belies a thicket of conceptual, causal, and statistical complexity. In this talk, I will introduce the fundamentals of difference-in-differences studies and discuss recent innovations including key assumptions and ways to assess their plausibility, estimation, inference, and robustness checks.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
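As background for these methods, here is a toy two-stage Q-learning (backward induction) sketch with linear working models and simulated data; this is the textbook baseline for DTR estimation, not the ACWL, T-RL, or ST-RL procedures presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Simulated two-stage trial: binary treatments a1, a2 randomized at each stage
s1 = rng.normal(0.0, 1.0, n)                      # baseline state
a1 = rng.integers(0, 2, n).astype(float)
s2 = 0.5 * s1 + 0.3 * a1 + rng.normal(0.0, 1.0, n)  # intermediate state
a2 = rng.integers(0, 2, n).astype(float)
# Final outcome: stage-2 treatment helps only when s2 > 0
y = s2 + a2 * s2 + rng.normal(0.0, 1.0, n)

def ols(X, t):
    return np.linalg.lstsq(X, t, rcond=None)[0]

# Stage-2 Q-function: E[y | s2, a2] modeled as linear with an a2*s2 interaction
X2 = np.column_stack([np.ones(n), s2, a2, a2 * s2])
b2 = ols(X2, y)

# Pseudo-outcome: the value of acting optimally at stage 2
q2 = lambda s, a: b2[0] + b2[1] * s + b2[2] * a + b2[3] * a * s
v2 = np.maximum(q2(s2, 0.0), q2(s2, 1.0))

# Stage-1 Q-function fitted to the pseudo-outcome (backward induction)
X1 = np.column_stack([np.ones(n), s1, a1, a1 * s1])
b1 = ols(X1, v2)

# Estimated stage-2 rule: treat when b2[2] + b2[3]*s2 > 0 (true rule: s2 > 0)
threshold = -b2[2] / b2[3]
print(f"stage-2 rule: treat when s2 > {threshold:.2f}")
```

The methods in the talk replace these linear working models with doubly robust contrasts (ACWL) or tree-based policies (T-RL, ST-RL), which is what buys robustness to model misspecification and interpretability.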
A fundamental feature of evaluating causal health effects of air quality regulations is that air pollution moves through space, rendering health outcomes at a particular population location dependent upon regulatory actions taken at multiple, possibly distant, pollution sources. Motivated by studies of the public-health impacts of power plant regulations in the U.S., this talk introduces the novel setting of bipartite causal inference with interference, which arises when 1) treatments are defined on observational units that are distinct from those at which outcomes are measured and 2) there is interference between units in the sense that outcomes for some units depend on the treatments assigned to many other units. Interference in this setting arises due to complex exposure patterns dictated by physical-chemical atmospheric processes of pollution transport, with intervention effects framed as propagating across a bipartite network of power plants and residential zip codes. New causal estimands are introduced for the bipartite setting, along with an estimation approach based on generalized propensity scores for treatments on a network. The new methods are deployed to estimate how emission-reduction technologies implemented at coal-fired power plants causally affect health outcomes among Medicare beneficiaries in the U.S.
Laine Thomas presented information about how causal inference is being used to determine the costs and benefits of the two most common surgical treatments for women: hysterectomy and myomectomy.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health that enables patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tool that enables the use of observational data to learn how clinicians balance competing priorities when treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
The method of differences-in-differences (DID) is widely used to estimate causal effects. The primary advantage of DID is that it can account for time-invariant bias from unobserved confounders. However, the standard DID estimator will be biased if there is an interaction between history in the after period and the groups. That is, bias will be present if an event besides the treatment occurs at the same time and affects the treated group in a differential fashion. We present a method of bounds based on DID that accounts for an unmeasured confounder that has a differential effect in the post-treatment time period. These DID bracketing bounds are simple to implement and only require partitioning the controls into two separate groups. We also develop two key extensions for DID bracketing bounds. First, we develop a new falsification test to probe the key assumption that is necessary for the bounds estimator to provide consistent estimates of the treatment effect. Next, we develop a method of sensitivity analysis that adjusts the bounds for possible bias based on differences between the treated and control units from the pretreatment period. We apply these DID bracketing bounds and the new methods we develop to an application on the effect of voter identification laws on turnout. Specifically, we focus on estimating whether the enactment of voter identification laws in Georgia and Indiana had an effect on voter turnout.
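The bracketing construction can be sketched as follows: compute a DID estimate against each of the two control partitions and report the interval between them. The turnout numbers below are invented for illustration, and this sketch omits the regression adjustment, falsification test, and sensitivity analysis that the actual method includes.

```python
# Sketch of DID bracketing bounds: partition the controls into two groups;
# under the bracketing assumption the true effect lies between the two
# resulting DID estimates.

def did(t_pre, t_post, c_pre, c_post):
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

def bracketing_bounds(t_pre, t_post,
                      low_pre, low_post, high_pre, high_post):
    est_low = did(t_pre, t_post, low_pre, low_post)
    est_high = did(t_pre, t_post, high_pre, high_post)
    return min(est_low, est_high), max(est_low, est_high)

# Hypothetical turnout rates before and after a voter ID law
lo, hi = bracketing_bounds([0.50, 0.52], [0.55, 0.57],
                           [0.40, 0.42], [0.43, 0.43],
                           [0.60, 0.60], [0.66, 0.66])
# bounds are approximately (-0.01, 0.03)
```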
We study experimental design in large-scale stochastic systems with substantial uncertainty and structured cross-unit interference. We consider the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and propose a class of local experimentation schemes that can be used to optimize these payments without perturbing the overall market equilibrium. We show that, as the system size grows, our scheme can estimate the gradient of the platform’s utility with respect to p while perturbing the overall market equilibrium by only a vanishingly small amount. We can then use these gradient estimates to optimize p via any stochastic first-order optimization method. These results stem from the insight that, while the system involves a large number of interacting units, any interference can only be channeled through a small number of key statistics, and this structure allows us to accurately predict feedback effects that arise from global system changes using only information collected while remaining in equilibrium.
We discuss a general roadmap for generating causal inference based on observational studies used to generate real-world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e., infinite-dimensional) statistical models. TMLE is a two-stage procedure that first involves using ensemble machine learning, termed super-learning, to estimate the relevant stochastic relations between the treatment, censoring, covariates, and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric model-based estimators) to build a single most powerful ensemble machine learning algorithm. We present the Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second step, TMLE involves maximizing a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second step bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e., confidence intervals, p-values, etc.). We also review recent advances in collaborative TMLE, in which the fit of the treatment and censoring mechanisms is tailored to the performance of the TMLE. We also discuss asymptotically valid bootstrap-based inference. Simulations and data analyses are provided as demonstrations.
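The targeting step described here can be sketched for the simplest case, the average treatment effect with a binary outcome. This toy version takes the initial first-stage estimates as given and fluctuates them along the least favorable submodel by solving a one-parameter score equation; it omits the super learner, cross-validation, and influence-curve-based inference that a real TMLE analysis includes.

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def tmle_ate(y, a, q1, q0, g):
    """One TMLE targeting step for the ATE with binary Y.

    y: binary outcomes; a: binary treatments
    q1, q0: initial estimates of E[Y | A=1, W] and E[Y | A=0, W]
    g: initial estimates of the propensity score P(A=1 | W)
    """
    n = len(y)
    qa = [q1[i] if a[i] == 1 else q0[i] for i in range(n)]
    # Clever covariate H(A, W) = A/g(W) - (1 - A)/(1 - g(W))
    h = [a[i] / g[i] - (1 - a[i]) / (1 - g[i]) for i in range(n)]
    # Solve sum_i H_i * (Y_i - expit(logit(Q_i) + eps * H_i)) = 0 for eps
    eps = 0.0
    for _ in range(100):  # Newton-Raphson on the one-dimensional score
        p = [expit(logit(qa[i]) + eps * h[i]) for i in range(n)]
        score = sum(h[i] * (y[i] - p[i]) for i in range(n))
        curv = -sum(h[i] ** 2 * p[i] * (1.0 - p[i]) for i in range(n))
        step = score / curv
        eps -= step
        if abs(step) < 1e-12:
            break
    # Targeted counterfactual predictions and the plug-in ATE
    q1s = [expit(logit(q1[i]) + eps / g[i]) for i in range(n)]
    q0s = [expit(logit(q0[i]) - eps / (1.0 - g[i])) for i in range(n)]
    return eps, sum(q1s[i] - q0s[i] for i in range(n)) / n
```

Because the fluctuation solves the efficient influence curve's score equation, the resulting plug-in estimate inherits the double-robustness and efficiency properties the abstract describes.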
We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects using Bayesian nonparametric models. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when treatment effect size and treatment effect variation are small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects with appropriate measures of uncertainty.
Climate change mitigation has traditionally been analyzed as some version of a public goods game (PGG) in which a group is most successful if everybody contributes, but players are best off individually by not contributing anything (i.e., “free-riding”)—thereby creating a social dilemma. Analysis of climate change using the PGG and its variants has helped explain why global cooperation on GHG reductions is so difficult, as nations have an incentive to free-ride on the reductions of others. Rather than inspire collective action, it seems that the lack of progress in addressing the climate crisis is driving the search for a “quick fix” technological solution that circumvents the need for cooperation.
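The linear public goods game underlying this analysis is easy to write down; the endowment and multiplier below are illustrative parameter choices, not values from any specific study. Because each player's share of the pot (multiplier / n) is less than 1, contributing nothing dominates individually even though full contribution maximizes the group payoff.

```python
# Linear public goods game sketch: each player keeps whatever they do not
# contribute and receives an equal share of the multiplied common pot.

def pgg_payoffs(contributions, endowment=10.0, multiplier=1.6):
    pot = multiplier * sum(contributions)
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

full = pgg_payoffs([10.0, 10.0, 10.0, 10.0])   # everyone cooperates
mixed = pgg_payoffs([0.0, 10.0, 10.0, 10.0])   # one player free-rides
# The free-rider in `mixed` earns more than anyone in `full`, while the
# remaining contributors earn less: the social dilemma in miniature.
```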
This seminar discussed ways to produce professional academic writing, from academic papers to research proposals and technical writing in general.
Machine learning (including deep and reinforcement learning) and blockchain are two of the most noticeable technologies in recent years. The first is the foundation of artificial intelligence and big data, and the second has significantly disrupted the financial industry. Both technologies are data-driven, and thus there is rapidly growing interest in integrating them for more secure and efficient data sharing and analysis. In this paper, we review the research on combining blockchain and machine learning technologies and demonstrate that they can collaborate efficiently and effectively. In the end, we point out some future directions and expect more research on deeper integration of the two promising technologies.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
Program on Mathematical and Statistical Methods for Climate and the Earth System Opening Workshop, Projecting Health Impacts of Climate Change: Embracing an Uncertain Future - Howard Chang, Aug 23, 2017
1. Linking Environmental and Health Data for Epidemiologic Research
Howard Chang
howard.chang@emory.edu
Department of Biostatistics and Bioinformatics, Emory University
SAMSI CLM Opening Workshop
August 2017
2. Health Impact Assessment
Analytic framework for evaluating how a policy or program affects population health.
[Flowchart: health data and weather data feed (1) health effect estimation; climate simulations undergo (2) bias correction; together with a population projection, these feed (3) health impact projection.]
3. Health Effects of Environmental Risks
Exposures:
• Temperature
• Air pollution
• Heat waves
• Wildfire
• Heavy rainfall
• Pollen
• Mold
• Salt water intrusion
Health outcomes:
• Mortality
• Hospital admissions
• Emergency room visits
• Adverse birth outcomes
• Diarrheal diseases
• Vector-borne diseases
What are the relevant spatial and temporal scales?
4. Challenges in Environmental Epidemiology
• Observational and retrospective.
• Small effects but ubiquitous exposures.
• Data are complex, incomplete, and messy.
• Both health and exposure data have spatial and
temporal dependence.
• Focus on causal inference: bias, measurement error, and confounding.
5. Health Effect Estimation – Health Data
Administrative Databases
• Death and birth certificates
• Medicare and hospital billing records
• Medical records
Advantages
• Individual-level data
• Centralized and cost efficient
Disadvantages
• Missing important individual-level confounders
• Crude geographical information/history
6. Some Research Interests
• Identify susceptible and vulnerable populations.
• Improve exposure assessment methods for
health effect/impact analyses.
• Estimate joint effects of multiple exposures.
• Evaluate environmental regulations/policies
(accountability) and mitigation strategies.
7. Atlanta Emergency Department Visits
• 20-county Atlanta metropolitan area.
• Billing records of ED visits between 1993 and 2012.
• ≈ 10 million individual records.
8. Daily Time-Series Analysis
[Time-series plots, 2002-2010, by calendar date: daily emergency department visit counts in Atlanta (roughly 2,000-5,000 per day) and daily maximum temperature (0-40 °C).]
9. Daily Time-Series Analysis
Daily total ED counts are modeled as a function of daily temperature (e.g., at ATL Airport), adjusting for confounders: seasonal trend, long-term trend, and day-of-week, which serve as proxies for diet, access to care, and population health.
log E[Y_t] = β X_t + confounders
10. ED Visits and Temperature
Broad Outcome Groups among the Elderly
[Forest plot of relative risks for seven outcome groups:]
INTERN = all internal causes
GI = intestinal infections
DIA = diabetes
FEI = fluid & electrolyte imbalance
CIRC = all circulatory diseases
RESP = all respiratory diseases
RENAL = all renal diseases
*Associations of lag 0 TMX and ED visit outcomes based on primary ICD-9 codes.
*RRs for TMX changes from 27 °C to 32 °C (25th to 75th percentile).
Winquist A, Grundstein AJ, Chang HH, Hess J, Sarnat SE (2016). Environmental Research 147, 314-323.
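The relative risks on this slide come from the fitted log-linear coefficient; with an illustrative coefficient and standard error (made up here, not the published estimates), the arithmetic is:

```python
import math

# Hypothetical log-RR per degree Celsius from a log-linear model fit
beta, se = 0.010, 0.002
delta = 32.0 - 27.0  # 25th to 75th percentile of daily maximum temperature

rr = math.exp(beta * delta)
ci = (math.exp((beta - 1.96 * se) * delta),
      math.exp((beta + 1.96 * se) * delta))
# rr is about 1.05, i.e. roughly a 5% increase in expected ED visits
# for a 27 -> 32 degree change, with the 95% CI transformed the same way
```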
11. ED Visits and Heat Waves
Heat wave definitions:
• Heat wave period = ≥2 consecutive days with the daily metric beyond its 98th percentile.
• The first day of each heat wave period was removed.
• Max/avg/min temperature and apparent temperature.
The added burden of sustained high temperature.
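The definition above is straightforward to operationalize; here is a minimal sketch (the percentile convention and function names are illustrative, not those of the published analysis):

```python
# Heat wave definition sketch: a heat wave period is >=2 consecutive days
# with the daily metric beyond its 98th percentile, and the first day of
# each period is removed.

def percentile(values, q):
    """Nearest-rank empirical percentile (a simple stand-in convention)."""
    s = sorted(values)
    k = int(round(q / 100.0 * (len(s) - 1)))
    return s[max(0, min(len(s) - 1, k))]

def heat_wave_days(series, q=98.0):
    """Indices of heat wave days: runs of >=2 days above the q-th
    percentile, with the first day of each run dropped."""
    threshold = percentile(series, q)
    days, run = [], []
    for i, x in enumerate(series):
        if x > threshold:
            run.append(i)
        else:
            if len(run) >= 2:
                days.extend(run[1:])
            run = []
    if len(run) >= 2:
        days.extend(run[1:])
    return days
```

Dropping the first day of each run isolates the added burden of sustained heat beyond that of a single hot day, matching the framing on this slide.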
12. ED Visit-Heat Wave, Atlanta 1993-2012
Chen T, Sarnat SE, Grundstein AJ, Winquist A, Chang HH (2017). Environmental Health Perspectives. DOI:10.1289/EHP44
14. Preterm Birth-Heat Wave, Atlanta 1994-2006
[Forest plot of RRs with 95% confidence intervals, ranging roughly from 0.6 to 1.4.]
Daily counts of preterm birth with joint strata of gestational week, maternal race, and maternal education, and offset by the number of fetuses at risk of preterm birth on each day.
Darrow LA, Strickland MJ, Chang HH (2015). Society for Epidemiologic Research Annual Meeting. Denver, Colorado.
15. Pediatric ED Asthma Visit and Temperature
O’Lenick CR, Winquist A, Chang HH, Kramer MR, Mulholland JA, Grundstein AJ, Sarnat SE (2017). Environmental Research, 156, 132-144.
17. Data Integration for Exposure Assessment
[Diagram: exposure estimation integrates monitoring measurements, satellite imagery, computer model simulations, meteorology, and physical variables.]
18. Fine-Scale Temperature Assessment
[Maps at 1 km spatial resolution over metropolitan Houston for daytime (left; 4:00 pm) and nighttime (right; 12:00 am), created with the HRLDAS model.]
Monaghan AJ, Hu L, Brunsell NA, Barlage M, Wilhelmi OV (2014). Journal of Geophysical Research: Atmospheres, 119(11), 6376-6392.
24. Diarrhea and Environmental Drivers
[Panels: 21 villages in Ecuador; weekly diarrhea incidence; topographic data; weather stations.]
NSF 1360330 (PI: Remais)
25. Exposure Assessment for Precipitation
0.25 x 0.25 degree grid of the TRMM 3B42 satellite platform.
Deshpande A, Chang HH, Levy K (2017). Health Related Water Microbiology / Water Microbiology Conference.
26. Precipitation and Diarrheal Diseases
[Plot: expected count ratios of diarrhea (roughly 0.50 to 1.50) by environmental condition in rural parishes at a lag of 7 days, with panels for HRE with dry AC, HRE with wet AC, and no HRE with dry AC.]
Associations between:
1. Daily counts of diarrheal disease in Esmeraldas, Ecuador
2. Heavy rainfall events (HRE) by antecedent conditions (AC)
27. Improving Exposure Assessment
• How do we combine various data sources?
Monitoring networks
Model simulations
Satellite retrievals
• Analytical challenges
Missing data (spatio-temporal, informative)
Multiple exposures
Large datasets
Exposure prediction error
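One simple way to combine a sparse monitoring network with a gridded model simulation, sketched here with inverse-distance weighting and an additive bias correction, is purely illustrative; the spatial statistical fusion methods used in this research area are far more sophisticated.

```python
import math

def idw(x, y, points, power=2.0):
    """Inverse-distance-weighted estimate at (x, y).
    points: iterable of (px, py, value)."""
    num = den = 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return v  # exactly at a data point
        w = d ** (-power)
        num += w * v
        den += w
    return num / den

def fuse(x, y, model_value, monitors):
    """monitors: iterable of (mx, my, observed, model_at_site).
    Fused value = model simulation plus the interpolated model bias."""
    biases = [(mx, my, obs - mod) for mx, my, obs, mod in monitors]
    return model_value + idw(x, y, biases)
```

Correcting the model toward the monitors where they exist, while keeping the model's spatial coverage elsewhere, is the basic idea behind many of the fusion approaches listed above.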
28. Accountability Research
Henneman LRF, Chang HH, Liao KJ, Lavoue D, Mulholland JA, Russell AG (2017). Air Quality, Atmosphere & Health DOI 10.1007/s11869-017-0463-2
29. Impacts of Emission versus Climate
Stowell JD, Kim YM, Gao Y, Fu JS, Chang HH, Liu Y (2017). Environment International, 108:41-50.
30. Health Effects of Multiple Exposures
[Causal diagrams for exposures X1 and X2 and outcome Y illustrating three structures: joint effects, effect modification, and mediation.]