PMED Opening Workshop - Regrowth Rates of Tumors after Radiation Vary Depending on Differences in Patient Immune Systems: a compartment model - Dorothy Wallace, August 15, 2018
A simple immune system model of effector cells and regulator cells is coupled to a compartment model tuned to neuroblastoma xenograft data. Radiation is modeled as a raised death rate on a fixed day that reduces tumor size by transferring a fraction of cells into a compartment of “doomed” cells whose presence can be detected by effector cells, producing an immune response. The model predicts that the
time required to regrow the tumor to its initial size is positively correlated with the ratio of effector to regulator cells naturally present in the patient. Individual immune cell distribution is likely to affect the patient’s response to radiation therapy.
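A toy version of such a compartment model can reproduce the qualitative prediction. Everything below — the equations, rate constants, and initial values — is an illustrative placeholder sketched from the abstract, not the actual model presented:

```python
# Toy compartment model: tumour T, doomed cells D, effector E, regulator R.
# Forward-Euler integration; every rate constant is an illustrative guess.
def simulate(days=400, dt=0.01, radiation_day=30, killed_fraction=0.6,
             E0=1.0, R0=1.0):
    T, D, E, R = 1.0, 0.0, E0, R0
    history = []
    for step in range(int(days / dt)):
        t = step * dt
        if abs(t - radiation_day) < dt / 2:
            dose = killed_fraction * T
            T -= dose                 # radiation kills a fixed fraction...
            D += dose                 # ...which moves into the doomed pool
        dT = 0.08 * T - 0.05 * E * T / (1 + R)  # growth minus effector kill,
                                                # damped by regulator cells
        dD = -0.2 * D                           # doomed cells are cleared
        dE = 0.3 * D - 0.1 * (E - E0)           # doomed cells recruit effectors
        dR = 0.05 * D - 0.05 * (R - R0)         # regulators respond, then relax
        T += dT * dt; D += dD * dt; E += dE * dt; R += dR * dt
        history.append((t, T))
    return history

def regrow_time(E0, R0, radiation_day=30):
    """Days after radiation until the tumour regains its pre-radiation size."""
    hist = simulate(E0=E0, R0=R0, radiation_day=radiation_day)
    pre = max(T for t, T in hist if t < radiation_day)
    for t, T in hist:
        if t > radiation_day and T >= pre:
            return t - radiation_day
    return float("inf")

# A patient with a high effector:regulator ratio regrows more slowly:
print(regrow_time(E0=2.0, R0=0.5), regrow_time(E0=0.5, R0=2.0))
```

In this sketch a larger effector-to-regulator ratio lowers the tumour's net growth rate, so the post-radiation regrowth time is longer — the correlation the model predicts.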
Integrative bioinformatics analysis of Parkinson's disease related omics data - Enrico Glaab
Presentation on statistical meta-analysis of omics data from Parkinson's disease case-control studies. The results are used for a comparative analysis against aging-related omics alterations in the brain and for prioritization of new candidate disease genes using the phenologs approach.
BioNetVisA 2018 ECCB workshop
From biological network reconstruction to data visualization and analysis in molecular biology and medicine.
http://eccb18.org/workshop-2/
https://bionetvisa.github.io/
Dr. Patrick Hwu presents the latest information on immunotherapies for melanoma at the MRF's Patient Symposium at MD Anderson Cancer Center on January 31, 2015.
Chimeric Antigen Receptors (paper with corresponding PowerPoint) - Kevin B Hugins
Gene therapy was first conceptualized as a way to alter the debilitating course of genetic diseases. Gene therapy technology can introduce new functional DNA to replace mutated genes. The idea first arose in 1972, when Friedmann and Roblin authored a paper, “Gene therapy for human genetic disease?”, demonstrating that exogenous DNA can be taken up by mammalian cells (1). They proposed that the same procedure could be performed on humans to correct genetic defects by introducing therapeutic DNA. Currently, genetic modification of T lymphocytes is a major area of research for treating malignant tumors. This technique seeks to create chimeric antigen receptors (CARs) in T cells by genetically modifying them in vitro and reintroducing them into the blood circulation. The T cells are unique to every patient, and the chimeric antigen receptors are unique to the tumor they target.
Lung cancer is a major cause of cancer deaths, with approximately 80% of cases attributable to non-small cell lung cancer (NSCLC). In NSCLC targeted therapy, the epidermal growth factor receptor (EGFR) is a promising candidate.
Stratification of TCGA melanoma patients according to Tumor Infiltrative CD8... - Antonio Ahn
The tumour microenvironment, namely the interaction between immune cells and tumour cells, plays a crucial role in the treatment outcome of immunotherapy.
In order to predict patient responses to immunotherapy, a tumour stratification framework has been proposed based on PD-L1 expression and the presence of CD8 tumour-infiltrating lymphocytes (TIL).
Advances in genomic technologies and computational tools now make it possible to determine the composition of immune cell infiltrates in bulk tumors with increasing accuracy and resolution. Our aim here was to use RNA-seq and 450k methylation data to stratify 469 melanoma patients in the TCGA dataset according to the presence of CD8 tumour-infiltrating lymphocytes (TIL) and PD-L1 mRNA expression.
Prof. Mark Coles (Oxford University) - Data-driven systems medicine - mntbs1
A summary of Prof. Mark Coles' presentation from the 11-12 June 2019 event Data-driven systems medicine at the Cardiff University Brain Research Imaging Centre.
Sipuleucel-T Immunotherapy for Metastatic Prostate Cancer after Failing Hormo... - mjavan2001
This PowerPoint presentation reports findings from a clinical trial of sipuleucel-T in HRPC patients, evaluating overall survival in this group. The FDA approval of Provenge was based on the results of the IMPACT study.
A Rare International Dialogue (Sunday, May 12, 2019)
Theme One: Diagnosis and Beyond
WORKSHOP G: Cell and Gene Therapy from Laboratory to Market - Mark Lundie, Pfizer Canada
EBMT 2018: GPS in MM, Koehne et al. (with supplementary slides) - Nicholas Sarlis
Galinpepimut-S (WT1-targeting peptide vaccine) in high-risk multiple myeloma. Final results from a Phase 2 clinical study. Koehne G, et al. EBMT 2018 slide presentation.
This presentation summarizes data on CAR-T cell technology and its potential application in cancer therapy. The oral presentation was given at the 39th PAMM winter meeting in Rome on 8 February 2018 by Eric Raymond.
Newer biomarkers and techniques, and their inclusion in the 2016 WHO classification of leukaemias/lymphomas, increase the responsibility of pathologists, requiring them to develop an integrated multidisciplinary approach to reporting.
Cancer remains one of the most challenging diseases to treat. One of the greatest difficulties in cancer treatment is not the cure itself but distinguishing tumor cells from normal cells. Most current medical treatments for cancer cannot differentiate between cancer cells and normal ones; they damage the whole tissue and are still considered low-efficacy treatments. One of the most widely used treatments of this kind is chemotherapy, which is known to damage all cells, cancerous and normal alike. Our research focuses on developing a new therapy that targets the cancer cell itself, giving a higher efficiency in stopping cancer while leaving other cells undamaged. We will use an antibody against the protein antigen ErbB-2, which is abundantly present on the membrane surface of lung cancer cells. These antibodies will be produced by the immune system so that they target the tumor cells specifically, stopping cell growth and, in some cases, destroying the cells.
Autologous Bone Marrow Cell Therapy for Autism: An Open Label Uncontrolled C... - remedypublications2
The aim of this study is to assess the safety and effectiveness of autologous bone marrow
mononuclear stem cell (BMMNC) transplantation in patients with autism.
Dr. Christopher Yau (University of Birmingham) - Data-driven systems medicine - mntbs1
A summary of Dr. Christopher Yau's presentation from the 11-12 June 2019 event Data-driven systems medicine at the Cardiff University Brain Research Imaging Centre.
2D CAT Based Modeling of Tumour Growth and Drug Transport - Editor IJMTER
The transition of normal cells in tissue into a tumour that spreads according to its behavioural conditions is a complex biological process, studied through both in vivo and in vitro experimentation. Mathematical models provide an approach in which a system can be described quantitatively in a controlled environment. After thorough analysis by the modeller, this can also yield data that predict the behaviour of cells and likely medical conditions. In an effort to study the characteristics that increase cell fitness, the paper presents a 2D cellular automaton model that uses computer simulation to describe the invasion of healthy tissue by cancer cells. The growth process is simulated, and it was found that the movement of cells affects the tumour growth rate. It was also found that the relative distance of the tumour initiation area from neighbouring vessels influences tumour growth. The model and the simulation software developed can thus be used to understand the dynamics of early tumour growth and to explore various hypotheses of tumour growth relevant to drug delivery in chemotherapy. Importantly, this approach highlights that vessel displacement should not be neglected in tumour growth models. The paper thus presents two models, a cancer growth model and a drug transport model, for tumour growth and treatment that will help to diagnose early tumour growth. Though cancer is incurable, early and quick detection will help doctors suggest prompt remedial action against it.
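A minimal cellular automaton of the kind the abstract describes — a grid of cells, division into empty neighbours, optional migration — can be sketched as below. All probabilities and grid dimensions are illustrative, not the paper's parameters:

```python
import random

def simulate_tumor(size=21, steps=30, p_divide=0.3, p_move=0.1, seed=0):
    """Minimal 2D cellular automaton of tumour growth.

    Grid states: 0 = healthy tissue, 1 = tumour cell. Each step, every
    tumour cell with at least one empty von Neumann neighbour may divide
    into it (prob. p_divide) or migrate there (prob. p_move).
    """
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    c = size // 2
    grid[c][c] = 1  # seed a single tumour cell at the centre

    for _ in range(steps):
        tumour = [(i, j) for i in range(size) for j in range(size) if grid[i][j]]
        rng.shuffle(tumour)
        for i, j in tumour:
            empty = [(i + di, j + dj)
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < size and 0 <= j + dj < size
                     and grid[i + di][j + dj] == 0]
            if not empty:
                continue  # fully surrounded: quiescent
            ni, nj = rng.choice(empty)
            r = rng.random()
            if r < p_divide:
                grid[ni][nj] = 1                  # division into free space
            elif r < p_divide + p_move:
                grid[i][j], grid[ni][nj] = 0, 1   # migration
    return sum(map(sum, grid))

print(simulate_tumor())  # tumour cell count after 30 steps
```

Note that movement frees interior cells to divide again, which is one mechanism by which cell motility can raise the growth rate, as the paper reports.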
HEART DISEASES PREDICTION USING MACHINE LEARNING ALGORITHM - PoojaSri45
This machine learning project aims to predict heart disease using various algorithms and techniques. Developed as part of an academic or professional endeavor, the project demonstrates proficiency in data preprocessing, feature selection, model training, and evaluation.
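The pipeline such a project typically follows — preprocessing, train/test split, model training, evaluation — can be sketched with a stdlib-only logistic regression on synthetic stand-in data. The real project presumably uses a clinical dataset and library models; the features and labels below are fabricated for illustration:

```python
import math
import random

def standardize(X):
    """Z-score each feature column (a typical preprocessing step)."""
    cols = list(zip(*X))
    means = [sum(c) / len(c) for c in cols]
    stds = [max(1e-9, (sum((v - m) ** 2 for v in c) / len(c)) ** 0.5)
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in X]

def train_logreg(X, y, lr=0.1, epochs=200):
    """Logistic regression fitted by plain stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic stand-in for a heart-disease dataset: two numeric features
# (think age and cholesterol, rescaled) with a label tied to their sum.
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [1 if xi[0] + xi[1] > 0 else 0 for xi in X]

X = standardize(X)                                           # preprocessing
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]  # split
w, b = train_logreg(X_tr, y_tr)                              # training
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X_te, y_te)) / len(X_te)
print(f"held-out accuracy: {acc:.2f}")                       # evaluation
```

Evaluating on a held-out split, as here, is the standard way such projects report model performance.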
Probability Models for Estimating Haplotype Frequencies and Bayesian Survival... - Université de Dschang
Mr. Kum Cletus Kwa defended a Doctorate/PhD thesis in mathematics on 14 June 2016 at the Université de Dschang. At the close of the defence, the jury awarded him the distinction of "très honorable".
Stereotactic Radiotherapy of Recurrent Malignant Gliomas Clinical White Paper - Brainlab
Learn more: https://www.brainlab.com/intraoperative-mri
Tumors of the central nervous system (CNS) represent approximately 176,000 newly diagnosed cases worldwide per year, with an estimated annual mortality of 128,000. Malignant gliomas comprise 30% of all primary CNS tumors and remain one of the greatest challenges in oncology today, despite access to state-of-the-art surgery, imaging, radiotherapy and chemotherapy.
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
I will discuss paradigmatic statistical models of inference and learning from high dimensional data, such as sparse PCA and the perceptron neural network, in the sub-linear sparsity regime. In this limit the underlying hidden signal, i.e., the low-rank matrix in PCA or the neural network weights, has a number of non-zero components that scales sub-linearly with the total dimension of the vector. I will provide explicit low-dimensional variational formulas for the asymptotic mutual information between the signal and the data in suitable sparse limits. In the setting of support recovery these formulas imply sharp 0-1 phase transitions for the asymptotic minimum mean-square-error (or generalization error in the neural network setting). A similar phase transition was analyzed recently in the context of sparse high-dimensional linear regression by Reeves et al.
Many different measurement techniques are used to record neural activity in the brains of different organisms, including fMRI, EEG, MEG, lightsheet microscopy and direct recordings with electrodes. Each of these measurement modes have their advantages and disadvantages concerning the resolution of the data in space and time, the directness of measurement of the neural activity and which organisms they can be applied to. For some of these modes and for some organisms, significant amounts of data are now available in large standardized open-source datasets. I will report on our efforts to apply causal discovery algorithms to, among others, fMRI data from the Human Connectome Project, and to lightsheet microscopy data from zebrafish larvae. In particular, I will focus on the challenges we have faced both in terms of the nature of the data and the computational features of the discovery algorithms, as well as the modeling of experimental interventions.
Bayesian Additive Regression Trees (BART) has been shown to be an effective framework for modeling nonlinear regression functions, with strong predictive performance in a variety of contexts. The BART prior over a regression function is defined by independent prior distributions on tree structure and leaf or end-node parameters. In observational data settings, Bayesian Causal Forests (BCF) has successfully adapted BART for estimating heterogeneous treatment effects, particularly in cases where standard methods yield biased estimates due to strong confounding.
We introduce BART with Targeted Smoothing, an extension which induces smoothness over a single covariate by replacing independent Gaussian leaf priors with smooth functions. We then introduce a new version of the Bayesian Causal Forest prior, which incorporates targeted smoothing for modeling heterogeneous treatment effects which vary smoothly over a target covariate. We demonstrate the utility of this approach by applying our model to a timely women's health and policy problem: comparing two dosing regimens for an early medical abortion protocol, where the outcome of interest is the probability of a successful early medical abortion procedure at varying gestational ages, conditional on patient covariates. We discuss the benefits of this approach in other women’s health and obstetrics modeling problems where gestational age is a typical covariate.
Difference-in-differences is a widely used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale-dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, Angrist and Pischke (2009) show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting.
We develop sensitivity analyses for weak nulls in matched observational studies while allowing unit-level treatment effects to vary. In contrast to randomized experiments and paired observational studies, we show for general matched designs that over a large class of test statistics, any valid sensitivity analysis for the weak null must be unnecessarily conservative if Fisher's sharp null of no treatment effect for any individual also holds. We present a sensitivity analysis valid for the weak null, and illustrate why it is conservative if the sharp null holds through connections to inverse probability weighted estimators. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and is valid for the weak null under additional assumptions which may be deemed reasonable by practitioners. The methods may be applied to matched observational studies constructed using any optimal without-replacement matching algorithm, allowing practitioners to assess robustness to hidden bias while allowing for treatment effect heterogeneity.
The world of health care is full of policy interventions: a state expands eligibility rules for its Medicaid program, a medical society changes its recommendations for screening frequency, a hospital implements a new care coordination program. After a policy change, we often want to know, “Did it work?” This is a causal question; we want to know whether the policy CAUSED outcomes to change. One popular way of estimating causal effects of policy interventions is a difference-in-differences study. In this controlled pre-post design, we measure the change in outcomes of people who are exposed to the new policy, comparing average outcomes before and after the policy is implemented. We contrast that change to the change over the same time period in people who were not exposed to the new policy. The differential change in the treated group’s outcomes, compared to the change in the comparison group’s outcomes, may be interpreted as the causal effect of the policy. To do so, we must assume that the comparison group’s outcome change is a good proxy for the treated group’s (counterfactual) outcome change in the absence of the policy. This conceptual simplicity and wide applicability in policy settings makes difference-in-differences an appealing study design. However, the apparent simplicity belies a thicket of conceptual, causal, and statistical complexity. In this talk, I will introduce the fundamentals of difference-in-differences studies and discuss recent innovations including key assumptions and ways to assess their plausibility, estimation, inference, and robustness checks.
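Concretely, the basic estimator described above is just a contrast of four group means. A sketch on made-up numbers (the groups and outcome values are purely illustrative):

```python
# Difference-in-differences: the policy effect estimate is the treated
# group's pre-to-post change minus the comparison group's change.
def mean(xs):
    return sum(xs) / len(xs)

treated_pre  = [10, 12, 11, 13]   # outcomes before the policy, exposed group
treated_post = [15, 17, 16, 18]   # outcomes after the policy
control_pre  = [ 9, 11, 10, 12]   # comparison group, same periods
control_post = [11, 13, 12, 14]

treated_change = mean(treated_post) - mean(treated_pre)   # 5.0
control_change = mean(control_post) - mean(control_pre)   # 2.0
did = treated_change - control_change
print(did)  # 3.0 — causal only if the control change proxies the
            # treated group's counterfactual change (parallel trends)
```

The assumptions, inference, and robustness checks discussed in the talk all concern when this simple contrast can be read causally.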
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
A fundamental feature of evaluating causal health effects of air quality regulations is that air pollution moves through space, rendering health outcomes at a particular population location dependent upon regulatory actions taken at multiple, possibly distant, pollution sources. Motivated by studies of the public-health impacts of power plant regulations in the U.S., this talk introduces the novel setting of bipartite causal inference with interference, which arises when 1) treatments are defined on observational units that are distinct from those at which outcomes are measured and 2) there is interference between units in the sense that outcomes for some units depend on the treatments assigned to many other units. Interference in this setting arises due to complex exposure patterns dictated by physical-chemical atmospheric processes of pollution transport, with intervention effects framed as propagating across a bipartite network of power plants and residential zip codes. New causal estimands are introduced for the bipartite setting, along with an estimation approach based on generalized propensity scores for treatments on a network. The new methods are deployed to estimate how emission-reduction technologies implemented at coal-fired power plants causally affect health outcomes among Medicare beneficiaries in the U.S.
Laine Thomas presented information about how causal inference is being used to determine the cost/benefit of the two most common surgical treatments for women - hysterectomy and myomectomy.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health to enable patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tools which enables use of observational data to learn how clinicians balance competing priorities for treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
The method of differences-in-differences (DID) is widely used to estimate causal effects. The primary advantage of DID is that it can account for time-invariant bias from unobserved confounders. However, the standard DID estimator will be biased if there is an interaction between history in the after period and the groups. That is, bias will be present if an event besides the treatment occurs at the same time and affects the treated group in a differential fashion. We present a method of bounds based on DID that accounts for an unmeasured confounder that has a differential effect in the post-treatment time period. These DID bracketing bounds are simple to implement and only require partitioning the controls into two separate groups. We also develop two key extensions for DID bracketing bounds. First, we develop a new falsification test to probe the key assumption that is necessary for the bounds estimator to provide consistent estimates of the treatment effect. Next, we develop a method of sensitivity analysis that adjusts the bounds for possible bias based on differences between the treated and control units from the pretreatment period. We apply these DID bracketing bounds and the new methods we develop to an application on the effect of voter identification laws on turnout. Specifically, we focus estimating whether the enactment of voter identification laws in Georgia and Indiana had an effect on voter turnout.
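One plausible rendering of the bounds mechanics, not necessarily the authors' exact estimator: partition the controls into two groups, compute a DID estimate against each, and take the pair of estimates as brackets on the effect under the paper's assumptions. All numbers below are made up:

```python
# DID bracketing bounds sketch: one DID estimate per control partition.
def mean(xs):
    return sum(xs) / len(xs)

def did(t_pre, t_post, c_pre, c_post):
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

treated_pre, treated_post   = [50, 52, 54], [58, 60, 62]
controlA_pre, controlA_post = [48, 50, 52], [50, 52, 54]  # low-trend controls
controlB_pre, controlB_post = [49, 51, 53], [55, 57, 59]  # high-trend controls

est_A = did(treated_pre, treated_post, controlA_pre, controlA_post)
est_B = did(treated_pre, treated_post, controlB_pre, controlB_post)
bounds = sorted([est_A, est_B])
print(bounds)  # the two estimates bracket the treatment effect
```

The falsification test and sensitivity analysis described in the abstract then probe whether a bracket like this can be trusted.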
We study experimental design in large-scale stochastic systems with substantial uncertainty and structured cross-unit interference. We consider the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and propose a class of local experimentation schemes that can be used to optimize these payments without perturbing the overall market equilibrium. We show that, as the system size grows, our scheme can estimate the gradient of the platform’s utility with respect to p while perturbing the overall market equilibrium by only a vanishingly small amount. We can then use these gradient estimates to optimize p via any stochastic first-order optimization method. These results stem from the insight that, while the system involves a large number of interacting units, any interference can only be channeled through a small number of key statistics, and this structure allows us to accurately predict feedback effects that arise from global system changes using only information collected while remaining in equilibrium.
We discuss a general roadmap for generating causal inference from observational studies used to generate real-world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e., infinite-dimensional) statistical models. TMLE is a two-stage procedure that first uses ensemble machine learning, termed super-learning, to estimate the relevant stochastic relations between the treatment, censoring, covariates, and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric model based estimators) to build a single most powerful ensemble machine learning algorithm. We present the Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second stage, TMLE maximizes a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second stage bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e., confidence intervals, p-values, etc.). We also review recent advances in collaborative TMLE, in which the fit of the treatment and censoring mechanism is tailored w.r.t. the performance of the TMLE. We also discuss asymptotically valid bootstrap-based inference. Simulations and data analyses are provided as demonstrations.
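A toy rendition of the two stages can make the mechanics concrete. Here a saturated empirical fit with one binary confounder stands in for the super-learner, and the target is the mean outcome under treatment; everything is illustrative:

```python
import math
import random

def expit(x): return 1 / (1 + math.exp(-x))
def logit(p): return math.log(p / (1 - p))
def mean(xs): return sum(xs) / len(xs)

# Simulated observational data: one binary confounder W drives both
# treatment A and binary outcome Y; the target is E[Y under A=1].
rng = random.Random(1)
data = []
for _ in range(5000):
    W = int(rng.random() < 0.5)
    A = int(rng.random() < (0.7 if W else 0.3))
    Y = int(rng.random() < (0.5 + 0.2 * A + 0.1 * W))  # truth: E[Y^1] = 0.75
    data.append((W, A, Y))

# Stage 1: initial fits of Qbar(A, W) = E[Y | A, W] and g(W) = P(A=1 | W)
# (empirical means here; the talk's version would use super-learning).
Q = {(a, w): mean([y for w2, a2, y in data if (a2, w2) == (a, w)])
     for a in (0, 1) for w in (0, 1)}
g = {w: mean([a for w2, a, _ in data if w2 == w]) for w in (0, 1)}

# Stage 2 (targeting): fluctuate Q along the least favorable submodel
# logit Q_eps = logit Q + eps * H with clever covariate H = A / g(W),
# solving for eps by Newton steps on the fluctuation log-likelihood.
eps = 0.0
for _ in range(10):
    score = curv = 0.0
    for w, a, y in data:
        if not a:
            continue
        h = 1.0 / g[w]
        p = expit(logit(Q[(1, w)]) + eps * h)
        score += h * (y - p)
        curv += h * h * p * (1 - p)
    eps += score / curv

# Plug-in estimate of E[Y^1] from the targeted fit.
psi = mean([expit(logit(Q[(1, w)]) + eps / g[w]) for w, _, _ in data])
print(round(psi, 3))
```

With a saturated stage-1 fit the fluctuation is already solved (eps stays near zero); the targeting step matters precisely when stage 1 uses flexible machine learning fits, which is the setting the talk addresses.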
We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects using Bayesian nonparametric models. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when treatment effect size and treatment effect variation is small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects with appropriate measures of uncertainty.
Climate change mitigation has traditionally been analyzed as some version of a public goods game (PGG) in which a group is most successful if everybody contributes, but players are best off individually by not contributing anything (i.e., “free-riding”)—thereby creating a social dilemma. Analysis of climate change using the PGG and its variants has helped explain why global cooperation on GHG reductions is so difficult, as nations have an incentive to free-ride on the reductions of others. Rather than inspire collective action, it seems that the lack of progress in addressing the climate crisis is driving the search for a “quick fix” technological solution that circumvents the need for cooperation.
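The dilemma can be made concrete with the standard linear public goods game; the endowment, multiplier, and group size below are illustrative:

```python
# Linear public goods game: n players, endowment e, contributions pooled,
# multiplied by r, and shared equally. With 1 < r < n, contributing is
# individually dominated even though full contribution maximizes the
# group payoff — the social dilemma behind free-riding.
def payoff(contributions, endowment=10, r=1.6):
    n = len(contributions)
    pot = r * sum(contributions)
    return [endowment - c + pot / n for c in contributions]

everyone   = payoff([10, 10, 10, 10])   # all four players contribute fully
free_rider = payoff([0, 10, 10, 10])    # player 0 free-rides
print(everyone[0], free_rider[0])       # free-riding pays individually
print(sum(everyone), sum(free_rider))   # but shrinks the group total
```

Player 0 earns more by contributing nothing, yet the group as a whole earns less — exactly the incentive structure that makes global GHG cooperation hard.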
This seminar discussed ways in which to produce professional academic writing, from academic papers to research proposals or technical writing in general.
Machine learning (including deep and reinforcement learning) and blockchain are two of the most notable technologies of recent years. The first is the foundation of artificial intelligence and big data, and the second has significantly disrupted the financial industry. Both technologies are data-driven, so there is rapidly growing interest in integrating them for more secure and efficient data sharing and analysis. In this paper, we review research on combining blockchain and machine learning technologies and demonstrate that they can collaborate efficiently and effectively. Finally, we point out some future directions and anticipate more research on deeper integration of the two promising technologies.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
PMED Opening Workshop - Regrowth Rates of Tumors after Radiation Vary Depending on Differences in Patient Immune Systems: a compartment model - Dorothy Wallace, August 15, 2018
2. • Joint work with Maya Srinivasan and Heiko Enderling
• And also Shannon Fee, Jade Yen, Alice Hsu, Lawrence Abu-Hammour, Yixuan He
• Based on a suite of models developed in collaboration with: Xinyue Guo, Paula Chen, Michelle Chen, Milan Huynh, Evan Rheingold, Ann Dunham, Olivia Prosper, Alisa DeStefano, Sophia Jiang, Celeste Rodriguez, Molly Carpenter, Rachel Chang
3. • Why does the response to radiation therapy differ among patients?
• The model we are using and why
• Prior results from this model
• What the model says about the immune system-tumor interaction
• What the model says about tumor regrowth after radiation
• Caveats and future directions
• Implications for personalized medicine
4. Heuvers et al. BMC Cancer 2012, 12:580
http://www.biomedcentral.com/1471-2407/12/580
Associated with tumor progression: N2 neutrophils, regulatory T cells, M2 macrophages, and others.
Associated with tumor regression: natural killer cells, NKT cells, N1 neutrophils, CD4+ helper T cells, CD8+ cytotoxic cells, M1 macrophages, and others.
5. • Hypothesis: the response is mediated by the immune system interacting with cells damaged by radiation.
• Assumptions: we need only represent two general classes of immune cells, “effector” and “regulator” cells, corresponding to the two classes in the PCA; and the time it takes for a tumor to regrow after radiation therapy is a measure of the effectiveness of therapy.
• We can test this hypothesis in silico through a model.
6. • The model is built as a series of nested models with features that can be turned on and off.
• The sub-models represent various kinds of experiments that can be done with a cell line.
• This allows us to use multiple published studies of different sorts to parametrize the model,
• and it allows us to test its qualitative behavior in various situations against experiment.
7. [Quadrant diagram: modeling efforts placed on axes from problem driven to theory driven, and from “small amounts of very expensive data!” to “almost too much data!”, with examples including image processing, ecological models, and social networks.]
8. Monolayer: in vitro, no hypoxia, no vasculature, no immune system.
Spheroid: in vitro, hypoxia, no vasculature, no immune system.
Xenograft: in vivo, hypoxia, vasculature, no immune system.
Patient: in vivo, hypoxia, vasculature, immune system.
9. The smallest sub-model is a linear model for unrestrained monolayer growth.
[Compartment diagram: tumor cells cycle through G1, S, and G2, with each completed division returning two daughter cells to G1; rate labels omitted.]
The linear model can perfectly match:
1. the observed doubling time,
2. the observed natural death rate, and
3. the observed percentages in each part of the cell cycle.
This completely determines all of the parameters in the diagram. The model and the monolayer both grow exponentially.
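The linear cell-cycle sub-model above can be sketched numerically. The rate constants and initial cell counts below are illustrative placeholders, not the values fitted to the monolayer data:

```python
import numpy as np

# Illustrative (not fitted) rates: G1 -> S, S -> G2, G2 -> mitosis, per day
c1, c2, c3 = 0.5, 0.9, 0.7
d = 0.05  # natural death rate per day (hypothetical)

def step(state, dt):
    """One Euler step: cells advance G1 -> S -> G2, and each completed
    mitosis returns two daughter cells to G1."""
    G1, S, G2 = state
    dG1 = 2.0 * c3 * G2 - (c1 + d) * G1
    dS = c1 * G1 - (c2 + d) * S
    dG2 = c2 * S - (c3 + d) * G2
    return state + dt * np.array([dG1, dS, dG2])

state = np.array([100.0, 50.0, 30.0])  # initial cells in each phase
dt = 0.01
for _ in range(int(60 / dt)):          # simulate 60 days
    state = step(state, dt)

# Because the system is linear, the total population settles into pure
# exponential growth, matching the observed monolayer behavior.
total = state.sum()
```

With any positive rates satisfying the growth condition, the dominant eigenvalue of this linear system is positive, so the total grows exponentially regardless of the initial phase distribution.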
10. The nonlinear model for spheroid growth includes the interior hypoxic region (Q), a necrotic core (N), and the action of TNF-alpha (A) to trigger apoptosis of proliferating cells. [Compartment diagram with rate labels omitted.]
Spheroids are known to cease growth with a thin layer of proliferating cells balanced by TNF-alpha-mediated death. The model does this, whether nutrient availability is bounded, or unlimited but in proportion to surface area. In addition, the model can approximate growth patterns seen in the data.
Carlsson et al., Int. J. Cancer 31, 523-533 (1983)
11. Prior results from this model, part 1
Klement G, Huang P, Mayer B, Green SK, Man S, Bohlen P, Hicklin D, Kerbel RS. Differences in therapeutic indexes of combination metronomic chemotherapy and an anti-VEGFR-2 antibody in multidrug-resistant human breast cancer xenografts. Clinical Cancer Research. 2002 Jan 1;8(1):221-32.
Wallace DI, Dunham A, Chen PX, Chen M, Huynh M, Rheingold E, Prosper O. A model for spheroid versus monolayer response of SK-N-SH neuroblastoma cells to treatment with 15-deoxy-PGJ2. Computational and Mathematical Methods in Medicine. 2016;2016.
In both model and experiment, monolayer and spheroid cultures exhibit different responses to treatment, with monolayer culture giving the more pronounced effect.
12. The model we use, continued.
The xenograft model for the immuno-compromised mouse includes production of the VEGF signal (R) that induces vascular growth (V); the signal is produced by hypoxic cells (Q) and also by proliferating cells (G1, S, G2) triggered by TNF-alpha (A). [Compartment diagram of tumor cells, shared signals, and shared vasculature omitted.]
• Most parameters are determined by monolayer and spheroid data.
• The maximum vasculature growth rate comes from data on healthy tissue.
• Many transitions are nonlinear, rate-bounded functional responses.
Xenografts exhibit an initial phase of rapid growth, followed by a constant growth rate. The model does this also.
13. Prior results from this model, part 2
Control 1 is Ackerman control data starting at day 3 after tumor implantation. Control 2 is Ackerman control data starting at a size of 18 cubic micrometers. Control 3 is Segerstrom control data.
The solid black line is the model control. Blue diamonds are treatment data from Segerstrom. The green line is a regression fit to the blue diamonds. The dotted black line is the model fit to the treatment regression line.
Yixuan He, Anita Kodali, and Dorothy I. Wallace. "Predictive Modeling of Neuroblastoma Growth Dynamics in Xenograft Model After Bevacizumab Anti-VEGF Therapy." Bulletin of Mathematical Biology (2018): 1-23.
14. The model we use, continued.
The model with immune response includes cells damaged by radiation or immune response (D), “effector” immune cells that destroy tumor cells (E), and “regulator” cells that suppress the immune response (T). Immune cell recruitment and death parameters are estimated from the HIV literature. [Compartment diagram of tumor cells, shared signals, shared vasculature, and the damaged-cell and immune compartments omitted.]
Recruitment, clearance, and proliferation of immune cells are mediated by vasculature. Tumor cells are damaged by radiation and/or the immune response.
15. What the model says about the immune system-tumor interaction.
What follows are all preliminary results.
The same in silico tumor is shown (A) without immune response and (B) with immune response. No radiation therapy is included in B; this is the control run. We see a slight reduction in tumor growth rate.
16. What the model says about tumor regrowth after radiation
Radiation is modeled as an extra linear relative death rate over some fixed period, starting at day 35. Here is the result with default parameters: tumor growth resumes after radiation.
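The radiation mechanism just described can be sketched numerically. Logistic growth stands in here for the full compartment model, and all parameter values are illustrative placeholders:

```python
# Radiation as an extra relative death rate over a fixed window (days 35-40).
r = 0.2                   # per-day growth rate (hypothetical)
K = 1000.0                # carrying capacity (hypothetical)
rad_rate = 0.8            # extra per-day death rate while irradiated (hypothetical)
t_on, t_off = 35.0, 40.0  # radiation window starts at day 35

def dT(T, t):
    growth = r * T * (1.0 - T / K)               # stand-in tumor growth law
    extra = rad_rate * T if t_on <= t < t_off else 0.0
    return growth - extra

dt = 0.01
T = 50.0
sizes = []
for i in range(int(100 / dt)):                   # 100 days, forward Euler
    T += dt * dT(T, i * dt)
    sizes.append(T)

pre_rad = sizes[int(t_on / dt) - 1]   # size just before radiation
post_rad = sizes[int(t_off / dt)]     # size just after the window
final = sizes[-1]                     # growth resumes after radiation
```

The trajectory shows the qualitative behavior on the slide: a sharp drop during the treatment window followed by regrowth toward the pre-treatment trend.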
17. For the same numerical experiment, this shows the ratio of effector cells (E) to regulator cells (T) over time.
A. The immune system reaches homeostasis.
B. After radiation, the effector cells increase.
C. This creates a response of regulator cells.
D. Finally, the immune system returns to homeostasis.
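The four phases A-D can be reproduced qualitatively with a minimal effector/regulator sketch. The equations and every parameter value below are illustrative placeholders, not the fitted immune sub-model:

```python
# A pulse of damaged cells D(t) during the radiation window stimulates
# effectors E, which recruit suppressive regulators T; both then relax.
def damaged(t):
    return 5.0 if 35.0 <= t < 40.0 else 0.0  # radiation-damaged cells

sE, sT = 1.0, 0.5   # baseline immune source terms (hypothetical)
kE, kT = 0.1, 0.1   # clearance rates (hypothetical)
aE = 0.3            # effector stimulation by damaged cells (hypothetical)
aT = 0.05           # regulator recruitment by effectors (hypothetical)
bT = 0.02           # suppression of effectors by regulators (hypothetical)

dt = 0.01
E, T = 1.0, 1.0
ratio = []
for i in range(int(100 / dt)):
    t = i * dt
    dE = sE + aE * damaged(t) - bT * T * E - kE * E
    dTr = sT + aT * E - kT * T
    E += dt * dE
    T += dt * dTr
    ratio.append(E / T)

before = ratio[int(34.0 / dt)]              # phase A: homeostasis
peak = max(ratio[int(35.0 / dt):int(50.0 / dt)])  # phases B-C: spike, response
after = ratio[-1]                           # phase D: back to homeostasis
```

Even with these toy parameters, E/T settles to a homeostatic value, spikes after the damage pulse, and relaxes back as the regulators respond.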
18. What the model says about tumor regrowth after radiation, continued.
Hypothesis (due to Heiko Enderling): the ratio E/T is correlated with the success of radiation therapy.
To test the hypothesis we need:
1. A reasonable way to vary the homeostatic E/T ratio. We do this by varying appropriate parameters.
2. A way to decide how successful radiation therapy was. We do this by measuring how long it takes the tumor to regrow to its starting size.
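The success measure in item 2 can be sketched as a small helper; `regrowth_time` is a hypothetical function written for illustration, not code from the talk:

```python
import numpy as np

def regrowth_time(times, sizes, t_rad, baseline):
    """Days from the start of radiation (t_rad) until the tumor first
    returns to `baseline` size; None if it never regrows in the data."""
    times = np.asarray(times)
    sizes = np.asarray(sizes)
    regrown = (times >= t_rad) & (sizes >= baseline)
    if not regrown.any():
        return None
    return float(times[regrown][0] - t_rad)

# Toy trajectory: size 100 until radiation at day 35, an instant drop to 20,
# then linear regrowth of 2 units/day -> back to size 100 at day 75.
t = np.linspace(0.0, 100.0, 1001)
s = np.where(t < 35.0, 100.0, 20.0 + 2.0 * (t - 35.0))
rt = regrowth_time(t, s, 35.0, 100.0)  # about 40 days
```

A longer regrowth time then corresponds to a more successful course of therapy, which is the quantity correlated against E/T below.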
19. Only 7 parameters determine the immune response. Of these, only 5 were deemed significant based on a simple sensitivity analysis (below). Of those 5, we were unable to obtain biological measurements for two. The remaining 3 were varied in a range around the reported estimates to produce varying values for the ratio E/T.
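A simple sensitivity analysis of this kind can be sketched as a one-at-a-time perturbation. The model function and parameter names below are hypothetical stand-ins for the immune sub-model and its parameters:

```python
def model_output(params):
    # Stand-in scalar output, e.g. tumor size at a fixed day; by
    # construction the output is nearly insensitive to parameter "c".
    return params["a"] * params["b"] + 0.001 * params["c"]

base = {"a": 1.0, "b": 2.0, "c": 3.0}
y0 = model_output(base)

sensitivity = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10  # +10% perturbation, one parameter at a time
    # Normalized sensitivity: relative output change per relative input change
    sensitivity[name] = ((model_output(bumped) - y0) / y0) / 0.10

# Parameters whose |sensitivity| falls below a threshold would be dropped,
# as in the talk's reduction from 7 immune parameters to 5.
significant = sorted(n for n, v in sensitivity.items() if abs(v) > 0.05)
```

This one-at-a-time scheme ignores parameter interactions, which is why the slide calls it a "simple" sensitivity analysis.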
20. Hypothesis confirmed in silico: tumor regrowth time is positively correlated with the E/T ratio.
21. • Why does the response to radiation therapy differ among patients?
• The model we are using and why
• Prior results from this model
• What the model says about the immune system-tumor interaction
• What the model says about tumor regrowth after radiation
• Caveats and future directions
• Implications for personalized medicine
22. • The difference in time to regrowth was only 2 days at most. OK, but those are mouse days.
• Two parameters are mysteries and should be varied both numerically and in some kind of experiment if possible.
• Each parameter should be checked separately to see independent correlations.
• It is thought that many tumors succumb to the immune response and are never detected. If so, the efficiency of the response is perhaps much larger than we made it here.
• It remains to check the effect of initial tumor size and composition on the behavior of the model.
23. • How effective radiation therapy is likely to be may depend on the state of the person’s immune system. This can be measured in advance of treatment. The decision whether to treat can be based on personal data.
• Perhaps it is possible to bolster the immune system in advance of treatment in order to improve the likelihood of success. The model suggests that suppressing the regulatory immune cells and/or increasing the effector cells would be the right strategy.
24. • Knowing the makeup of the patient’s immune system should shed light on their likely response to radiation therapy.
• A patient-specific pre-treatment of the immune system could enhance the results of therapy.
• The effector/regulator ratio may be a good basis for decisions about radiation.
• Thank you!