When estimating sample sizes for clinical trials there are several different views that might be taken as to what definition and meaning should be given to the sought-for treatment effect. However, if the concept of a ‘minimally important difference’ (MID) does have relevance to interpreting clinical trials (which can be disputed) then its value cannot be the same as the ‘clinically relevant difference’ (CRD) that would be used for planning them.
A doubly pernicious use of the MID is as a means of classifying patients as responders and non-responders. Not only does such an analysis lead to an increase in the necessary sample size but it misleads trialists into making causal distinctions that the data cannot support and has been responsible for exaggerating the scope for personalised medicine.
In this talk these statistical points will be explained using a minimum of technical detail.
There are many questions one might ask of a clinical trial, ranging from 'what was the effect in the patients studied?' to 'what might the effect be in future patients?' via 'what was the effect in individual patients?' The extent to which the answers to these questions are similar depends on the assumptions made, and in some cases the design used may not permit any meaningful answer to be given at all.
A related issue is confusion between randomisation, random sampling, linear model and true multivariate based modelling. These distinctions don’t matter much for some purposes and under some circumstances but for others they do.
A yet further issue is that causal analysis in epidemiology, which has brought valuable insights in many cases, has tended to stress point estimates and ignore standard errors. This has potentially misleading consequences.
An understanding of components of variation is key. Unfortunately, the development of two particular topics in recent years, evidence synthesis by the evidence-based medicine movement and personalised medicine by bench scientists, has paid scant attention to components of variation, to the questions being asked, or to both, resulting in confusion about many issues.
For instance, it is often claimed that numbers needed to treat indicate the proportion of patients for whom treatments work, that inclusion criteria determine the generalisability of results and that heterogeneity means that a random effects meta-analysis is required. None of these is true. The scope for personalised medicine has very plausibly been exaggerated and an important cause of variation in the healthcare system, physicians, is often overlooked.
I shall argue that thinking about questions is important.
How to combine results from randomised clinical trials on the additive scale with real world data to provide predictions on the clinically relevant scale for individual patients
Clinical trials are about comparability, not generalisability, by Stephen Senn
Lecture delivered at the September 2022 EFSPI meeting in Basle in which I argued that the patients in a clinical trial should not be viewed as being a representative sample of some target population.
Dichotomania and other challenges for the collaborating biostatistician, by Laure Wynants
Conference presentation at ISCB 41 in the session "Biostatistical inference in practice: moving beyond false dichotomies".
A comment in Nature, signed by over 800 researchers, called for the scientific community to "retire statistical significance". The responses included a call to halt the use of the term "statistically significant" and changes to journals' author guidelines. The leading discourse among statisticians is that inadequate statistical training of clinical researchers and publishing practices are to blame for the misuse of statistical testing. In this presentation, we search our collective conscience by reviewing ethical guidelines for statisticians in light of the p-value crisis, examine what this implies for us when conducting analyses in collaborative work and in teaching, and ask whether the ATOM principles (accept uncertainty; be thoughtful, open and modest) can guide us.
Critical appraisal is the process of carefully and systematically analysing a research paper to judge its trustworthiness and its value and relevance in a particular context (Amanda Burls, 2009).
A critical review must identify the strengths and limitations of a research paper, and this should be carried out in a systematic manner.
Critical appraisal helps in developing the skills necessary to make sense of scientific evidence on the basis of validity, results and relevance.
"Hierarchies of Evidence" is an important but problematic concept for medical professionals to understand as it underpins their capacity to be effective practitioners and researchers.
Minimisation is an approach to allocating patients to treatment in clinical trials that forces a greater degree of balance than does randomisation. Here I explain why I dislike it.
The Stone Clinic is a sports medicine clinic in San Francisco, California, offering orthopaedic surgery and medical care, physical therapy and rehabilitation, and radiology imaging services. It was founded in 1988 by Kevin R. Stone, M.D., an orthopaedic surgeon, together with a team of nurses, physical therapists, imaging specialists, and patient coordinators, to focus on caring for injured athletes and people experiencing arthritis pain.
The Stone Clinic is founded on the goal of rehabilitating all patients to an operating level higher than before they were injured. The Stone Clinic specializes in sports medicine and injury treatment of knee, shoulder, and ankle joints. Stone has lectured and is recognized internationally as an authority on cartilage and meniscal growth, replacement, and repair. Stone and the Stone Clinic are known for the development of the paste grafting surgical technique in 1991, combined with meniscus replacement, which are biologic joint replacement procedures for the regeneration of the knee joint. Surgical procedures were subjected to rigorous outcomes analysis with the results reported in peer reviewed journals. The surgical techniques have been taught to surgeons in the US and worldwide, through lectures and videos.
Nursing students, medical students, residents, fellows, and other physicians from various institutions around the world rotate through The Stone Clinic and are mentored by Stone. The Stone Clinic hosts the annual Meniscus Transplantation Study Group Meeting as well as the annual Professional Women Athlete's Career Conference.
An epidemiological experiment in which subjects in a population are randomly allocated into groups, usually called study and control groups, to receive or not receive an experimental preventive or therapeutic procedure, maneuver, or intervention.
An introduction on how to go about a meta-analysis, primarily designed for people with a non-statistical background. It borrows heavily from the Cochrane Handbook for Systematic Reviews of Interventions.
Austin Oncology is an open access, peer-reviewed, scholarly journal dedicated to publishing articles covering all areas of oncology.
The journal aims to promote research communications and provide a forum for doctors, researchers, physicians and healthcare professionals to find the most recent advances in all areas of oncology. Austin Oncology accepts original research articles, reviews, mini reviews, case reports and rapid communications covering all aspects of oncology.
Austin Oncology strongly supports the advancement and strengthening of the related scientific research community by enhancing access to peer-reviewed scientific literature. Austin Publishing Group also brings universally peer-reviewed journals under one roof, thereby promoting knowledge sharing and the mutual promotion of multidisciplinary science.
Experiences from an inspiring life, fantastic work and incredible days spent during the Angolan civil war. Love, dedication, commitment, compassion, resilience and courage. Worth a look!
Poll on Satisfaction with the Performance of the DKI Jakarta Provincial Government, 22-24 February 2017, by Rakyat Memilih
Release of the results of a public opinion survey. This online poll aims to gauge the satisfaction of DKI Jakarta residents with the performance of the DKI Jakarta provincial government. Criticism and suggestions can be sent to rakyatmemilih@rame.id.
The survey was produced by Rame.id, an opinion-polling and media-analysis application. Survey period: 24-28 February 2017.
iOS: http://apple.co/2luaU03
Android: bit.ly/rameandroid
Steve Rhodes is the founder, editor and publisher of The Beachwood Reporter, the world's wittiest Chicago-centric news and culture review. Previously: Chicago Tribune, Newsweek, Chicago magazine... and tons more. A must-read!
We talked about some interesting things that have come to PHP since the release of PHP 7, covered the new features in PHP 7.1 in more detail, and took a look at what is already known about PHP 7.2.
We all know producing quality, user-centered content is essential to SEO success -- but when an organization is low on time, budget, and resources, it can be a challenge. Learn how to implement an effective lean content strategy that will reach the right eyes.
Kaizen Platform Optimization System Architecture, by Daisuke Taniwaki
The optimization system architecture of Kaizen Platform.
I explain the basic optimization system and its issues, then introduce our new "Kaizen Optimization Platform".
With Dynamics 365, Microsoft has launched a new generation of business apps that bundle CRM and ERP functionality. But what exactly is Dynamics 365, and what possibilities does it offer? Here we give an introduction to this new cloud solution, covering, among other things, what the introduction of Dynamics 365 means for organisations that already use Dynamics CRM Online.
Unfortunately, some have interpreted Numbers Needed to Treat as indicating the proportion of patients on whom the treatment has had a causal effect. This interpretation is very rarely, if ever, necessarily correct. It is certainly inappropriate if based on a responder dichotomy. I shall illustrate the problem using simple causal models.
One also sometimes encounters the claim that the extent to which two distributions of outcomes from a clinical trial overlap indicates how many patients benefit. This is also false, and can be traced to a similar causal confusion.
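As a hedged illustration of the first point, here is a minimal potential-outcomes simulation in Python. All the numbers (response rates of 0.3 and 0.5, and the two mixing fractions) are invented for the example: two data-generating models produce identical marginal response rates, and hence the same NNT of 5, yet very different proportions of patients are actually helped.

```python
# Two causal models with the same NNT but different proportions helped.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
y0 = rng.random(n) < 0.3               # control potential outcome, P(response) = 0.3

def report(tag, y1):
    p0, p1 = y0.mean(), y1.mean()
    print(f"{tag}: p0={p0:.3f}, p1={p1:.3f}, NNT={1/(p1 - p0):.1f}, "
          f"truly helped={(y1 & ~y0).mean():.3f}")

# Model A: treatment moves 20% of all patients from non-response to response.
y1a = y0.copy()
y1a[(~y0) & (rng.random(n) < 0.2 / 0.7)] = True   # 2/7 of the 70% non-responders
report("A", y1a)

# Model B: 40% helped and 20% harmed; the margins, and so the NNT, are unchanged.
y1b = y0.copy()
y1b[(~y0) & (rng.random(n) < 0.4 / 0.7)] = True   # 4/7 of non-responders flipped up
y1b[y0 & (rng.random(n) < 0.2 / 0.3)] = False     # 2/3 of responders flipped down
report("B", y1b)
```

Both models print an NNT of about 5, but the proportion truly helped is about 0.2 in one and 0.4 in the other, so the NNT cannot, on its own, tell you the proportion of patients for whom the treatment works.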
Personalised medicine: a sceptical view, by Stephen Senn
Some grounds for believing that the current enthusiasm for personalised medicine is exaggerated, is founded on poor statistics, and represents a disappointing loss of ambition.
P-values, the gold measure of statistical validity, are not as reliable as many..., by David Pratap
This article appeared in Nature as a News Feature dated 12 February 2014. It was presented at the journal club at Oman Medical College, Bowshar Campus, on December 17, 2015, by Pratap David, Biostatistics Lecturer.
The folly of believing positive findings from underpowered intervention studies, by James Coyne
Presented at the European Health Psychology Conference, July 13, 2013. This slideshow shows the folly of accepting positive findings from underpowered studies. Much of the "evidence" in health psychology comes from such unreliable studies.
Sample size determination in clinical trials is considered from various ethical and practical perspectives. It is concluded that cost is a missing dimension and that the value of information is key.
Course Project Phase Two
Pavel Garbuz
April 12th, 2017
Rasmussen College
1. Confidence interval
1) A confidence interval is a range used to provide an estimate of the population mean; the population mean is believed to lie within that range.
2) A point estimate is a value derived from a sample to give an estimate of a population parameter.
3) The best point estimate of the population mean is the sample mean: because a sample represents the population, its mean gives the best point estimator of the population mean.
4) A sample normally represents its population, but in reality it may over-represent higher values and miss lower ones, or vice versa, so the point estimate alone may miss the target. To deal with this problem we use a confidence interval, a calculated range within which the population mean is expected to lie.
2. Best point estimate of population mean
BPE = sum of all ages / number of patients
BPE = 3709 / 60
BPE = 61.82 (to two decimal places)
3. Confidence interval
Confidence interval = 61.82 ± z(0.05/2) × (s/√n)
= 61.82 ± 1.96 × (8.92/√60)
= 61.82 ± 2.257
= (59.56, 64.08), i.e. 59.56 < μ < 64.08
4. Interpretation of confidence interval
In the research we were looking at the suggestion that older people tend to have this particular infectious disease. We constructed a confidence interval to get an idea of where the population mean lies. The 95% interval runs from 59.56 to 64.08 years: if we repeatedly drew samples of this size, about 95% of the intervals constructed in this way would contain the population mean age.
5. Calculating confidence interval at 99%
61.82 ± 2.5758 × (8.92/√60) = 61.82 ± 2.966
= (58.85, 64.79), i.e. 58.85 < μ < 64.79
A) Yes, I did notice a change in the interval: its range has increased, as the calculation above shows.
B) The interval for the population mean is now wider. At the 99% confidence level we can be more confident that the interval contains the population mean: about 99 out of 100 intervals constructed in this way would contain it, versus 95 out of 100 at the 95% level. The price of this extra confidence is a wider, less precise interval.
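As a quick check, the arithmetic above can be reproduced in code. A minimal Python sketch using the same figures (n = 60, sample mean 61.82, sample SD 8.92) and the normal critical values used in the text:

```python
# Verify the 95% and 99% confidence intervals computed above.
import math
from scipy import stats

n, mean, sd = 60, 61.82, 8.92
se = sd / math.sqrt(n)                      # standard error of the mean

for level in (0.95, 0.99):
    z = stats.norm.ppf(1 - (1 - level) / 2) # 1.96 and 2.5758
    print(f"{level:.0%} CI: ({mean - z*se:.2f}, {mean + z*se:.2f})")
# 95% CI: (59.56, 64.08); 99% CI: (58.85, 64.79)
```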
G310 Advanced Statistics and Analytics – Option 2
An Application of Statistical Methods
Pavel Garbuz
April 6th, 2017
Rasmussen College
1. Introduction
As a health care professional, I work to improve and maintain the health of individuals, families, and communities in various settings. To understand a current problem and its solution, this study uses statistical tools to analyse results. The objective of this project is the application of basic statistical tools to a fictional scenario in order to impact the health and wellbeing of the clients being served.
2. Scenario/Problem
You are currently working at NCLEX Memorial Hospital in the Infectious Diseases Unit. Over the past few days, you have noticed an increase in patients admitted with a particular infectious disease. It is believed that…
Statistical significance vs clinical significance, by Vini Mehta
Results are said to be "statistically significant" if, assuming the null hypothesis holds, the probability of obtaining a result at least as extreme is very small. Clinical significance, or clinical importance, asks a different question: is the difference between the new and old therapy found in the study large enough for you to alter your practice?
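A hedged numerical sketch of the distinction, with invented numbers: in a sufficiently large trial even a clinically trivial effect, here a 0.5 mm Hg fall in blood pressure against a standard deviation of 10, is comfortably statistically significant, so a small p-value by itself says nothing about clinical importance.

```python
# A tiny effect becomes 'significant' once the trial is large enough.
import math
from scipy import stats

diff, sd, n_per_arm = 0.5, 10.0, 10_000     # hypothetical trial
se = sd * math.sqrt(2 / n_per_arm)          # SE of a difference in means
z = diff / se
p = 2 * stats.norm.sf(z)                    # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")          # z = 3.54, p = 0.0004
```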
The response to the COVID-19 crisis by various vaccine developers has been extraordinary, both in terms of speed of response and the delivered efficacy of the vaccines. It has also raised some fascinating issues of design, analysis and interpretation. I shall consider some of these issues, taking as my example, five vaccines: Pfizer/BioNTech, AstraZeneca/Oxford, Moderna, Novavax, and J&J Janssen but concentrating mainly on the first two. Among matters covered will be concurrent control, efficient design, issues of measurement raised by two-shot vaccines and implications for roll-out, and the surprising effectiveness of simple analyses. Differences between the five development programmes as they affect statistics will be covered but some essential similarities will also be discussed.
The statistical revolution of the 20th century was largely concerned with developing methods for analysing small datasets. Student’s paper of 1908 was the first in the English literature to address the problem of second order uncertainty (uncertainty about the measures of uncertainty) seriously and was hailed by Fisher as heralding a new age of statistics. Much of what Fisher did was concerned with problems of what might be called ‘small data’, not only as regards efficient analysis but also as regards efficient design and in addition paying close attention to what was necessary to measure uncertainty validly.
I shall consider the history of some of these developments, in particular those that are associated with what might be called the Rothamsted School, starting with Fisher and having its apotheosis in John Nelder’s theory of General Balance and see what lessons they hold for the supposed ‘big data’ revolution of the 21st century.
Talk given at ISCB 2016 Birmingham
For indications and treatments where their use is possible, n-of-1 trials represent a promising means of investigating potential treatments for rare diseases. Each patient permits repeated comparison of the treatments being investigated and this both increases the number of observations and reduces their variability compared to conventional parallel group trials.
However, whether the framework used for analysis is randomisation-based or model-based produces puzzling differences in inference. This can easily be shown by starting on the one hand with the randomisation philosophy associated with the Rothamsted school of inference and building up the analysis through the block + treatment structure approach associated with John Nelder's theory of general balance (as implemented in GenStat®), or starting on the other hand with a plausible variance component approach through a mixed model. However, it can be shown that these differences are related not so much to the modelling approach per se as to the questions one attempts to answer, ranging from testing whether there was a difference between treatments in the patients studied, to predicting the true difference for a future patient, via making inferences about the effect in the average patient.
This in turn yields interesting insight into the long-run debate over the use of fixed or random effect meta-analysis.
Some practical issues of analysis will also be covered in R and SAS®, in which languages some functions and macros to facilitate analysis have been written. It is concluded that n-of-1 trials hold great promise in investigating chronic rare diseases but that careful consideration of matters of purpose, design and analysis is necessary to make best use of them.
Acknowledgement
This work is partly supported by the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement no. 602552 ("IDEAL").
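To make the point about questions and components of variation concrete, here is a minimal numpy sketch under invented variance components (none of the numbers come from any trial): with twenty patients the average effect is estimated quite precisely, yet the spread of true effects across patients, which is what matters when predicting a new patient's response, remains wide however many cycles each patient completes.

```python
# Simulated series of n-of-1 trials: average effect vs new-patient prediction.
import numpy as np

rng = np.random.default_rng(7)
n_patients, n_cycles = 20, 6
tau, sd_between, sd_within = 5.0, 3.0, 4.0      # assumed components, illustrative only

true_effects = rng.normal(tau, sd_between, n_patients)
cycles = rng.normal(true_effects[:, None], sd_within, (n_patients, n_cycles))
patient_means = cycles.mean(axis=1)

# Method-of-moments split of the variation in patient means.
within_var = cycles.var(axis=1, ddof=1).mean()              # replication noise
between_var = patient_means.var(ddof=1) - within_var / n_cycles

se_average = patient_means.std(ddof=1) / np.sqrt(n_patients)
sd_new_patient = np.sqrt(max(between_var, 0.0))             # ignores estimation error

print(f"estimated average effect: {patient_means.mean():.2f} (SE {se_average:.2f})")
print(f"estimated SD of true effects across patients: {sd_new_patient:.2f}")
```

The standard error for the average effect shrinks with the number of patients, but the between-patient standard deviation does not: it is a property of the population of effects, not of the sample size.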
The Seven Habits of Highly Effective Statisticians, by Stephen Senn
If you know why the title of this talk is extremely stupid, then you clearly know something about control, data and reasoning: in short, you have most of what it takes to be a statistician. If you have studied statistics then you will also know that a large amount of anything, and this includes successful careers, is luck.
In this talk I shall try to share some of my experiences of being a statistician in the hope that they will help you make the most of whatever luck life throws at you. In so doing, I shall try my best to overcome the distorting influence of that easiest of sciences, hindsight. Without giving too much away, I shall be recommending that you read, listen, think, calculate, understand, communicate, and do. I shall give you some examples of what I think works and what I think doesn't.
In all of this you should never forget the power of negativity, and also the joy of being able to wake up every day and say to yourself, 'I love the smell of data in the morning'.
Clinical trials: quo vadis in the age of COVID? By Stephen Senn
A discussion of the role of clinical trials in the age of COVID. My contribution to the phastar 2020 life sciences summit https://phastar.com/phastar-life-science-summit
It is argued that when it comes to nuisance parameters an assumption of ignorance is harmful. On the other hand this raises problems as to how far one should go in searching for further data when combining evidence.
What should we expect from reproducibility? By Stephen Senn
Is there really a reproducibility crisis, and if so are P-values to blame? Choose any statistic you like and carry out two identical independent studies, reporting this statistic for each. In advance of collecting any data, you ought to expect that it is just as likely that statistic 1 will be smaller than statistic 2 as vice versa. Once you have seen statistic 1, things are not so simple, but if they are not so simple it is because you have other information in some form. However, it is at least instructive that you need to be careful in jumping to conclusions about what to expect from reproducibility. Furthermore, the forecasts of good Bayesians ought to obey a Martingale property: on average you should be in the future where you are now, though of course your inferential random walk may lead to some peregrination before it homes in on "the truth". But you certainly can't generally expect that a probability will get smaller as you continue. P-values, like other statistics, are a position, not a movement. Although often claimed, there is no such thing as a trend towards significance.
Using these and other philosophical considerations I shall try to establish what it is we want from reproducibility. I shall conclude that we statisticians should probably be paying more attention to checking that standard errors are being calculated appropriately and rather less to the inferential framework.
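The symmetry claim at the start of this abstract is easy to check by simulation. In this minimal Python sketch (sample size, effect size and number of replications are arbitrary choices) pairs of identical studies are run and we record how often the first p-value is smaller than the second; by exchangeability the answer should be close to one half, whatever the true effect.

```python
# P(p1 < p2) for two identical, independent studies is about 0.5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, effect, reps = 30, 0.3, 10_000

def p_value():
    x = rng.normal(effect, 1.0, n)              # one-sample study
    return stats.ttest_1samp(x, 0.0).pvalue

first_smaller = sum(p_value() < p_value() for _ in range(reps)) / reps
print(f"P(p1 < p2) ~ {first_smaller:.3f}")      # close to 0.5
```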
An early and overlooked causal revolution in statistics was the development of the theory of experimental design, initially associated with the "Rothamsted School". An important stage in the evolution of this theory was the experimental calculus developed by John Nelder in the 1960s with its clear distinction between block and treatment factors in designed experiments. This experimental calculus produced appropriate models automatically from more basic formal considerations but was, unfortunately, only ever implemented in Genstat®, a package widely used in agriculture but rarely so in medical research. In consequence its importance has not been appreciated and the approach of many statistical packages to designed experiments is poor. A key feature of the Rothamsted School approach is that identification of the appropriate components of variation for judging treatment effects is simple and automatic.
The impressive more recent causal revolution in epidemiology, associated with Judea Pearl, seems to have no place for components of variation, however. By considering the application of Nelder’s experimental calculus to Lord’s Paradox, I shall show that this reveals that solutions that have been proposed using the more modern causal calculus are problematic. I shall also show that lessons from designed clinical trials have important implications for the use of historical data and big data more generally.
Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one.
A good case can be made that the Bayesian, de Finetti, just like Popper, was a falsificationist. A thumbnail view, which is not just a caricature, of de Finetti's theory of learning is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising.
On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing, you can, if you wish, swap the conventional null and alternative hypotheses, making the latter the strawman and, by 'disproving' it, asserting the former.
The frequentist, Fisher, however, at least in his approach to the testing of hypotheses, seems to have taken the strong view that the null hypothesis was quite different from any other and that there was a strong asymmetry in the inferences that followed from the application of significance tests.
Finally, to complete a quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to ‘prove’ scientific laws.
By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that ‘treatments’ do not differ from each other, I shall show that there are fundamental differences between ‘proving’ and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question as to whether it is enough for frequentists.
In Search of Lost Infinities: What is the "n" in big data? By Stephen Senn
In designing complex experiments, agricultural scientists, with the help of their statistician collaborators, soon came to realise that variation at different levels had very different consequences for estimating different treatment effects, depending on how the treatments were mapped onto the underlying block structure. This was a key feature of the Rothamsted approach to design and analysis and a strong thread running through the work of Fisher, Yates and Nelder, being expressed in topics such as split-plot designs, recovering inter-block information and fractional factorials. The null block-structure of an experiment is key to this philosophy of design and analysis. However, modern techniques for analysing experiments stress models rather than symmetries, and this modelling approach requires much greater care in analysis, with the consequence that you can easily make mistakes and often will.
In this talk I shall underline the obvious, but often unintentionally overlooked, fact that understanding variation at the various levels at which it occurs is crucial to analysis. I shall take three examples, an application of John Nelder’s theory of general balance to Lord’s Paradox, the use of historical data in drug development and a hybrid randomised non-randomised clinical trial, the TARGET study, to show that the data that many, including those promoting a so-called causal revolution, assume to be ‘big’ may actually be rather ‘small’. The consequence is that there is a danger that the size of standard errors will be underestimated or even that the appropriate regression coefficients for adjusting for confounding may not be identified correctly.
I conclude that an old but powerful experimental design approach holds important lessons for observational data about limitations in interpretation that mere numbers cannot overcome. Small may be beautiful, after all.
This year marks the 70th anniversary of the Medical Research Council randomised clinical trial (RCT) of streptomycin in tuberculosis led by Bradford Hill. This is widely regarded as a landmark in clinical research. Despite its widespread use in drug regulation and in clinical research more widely and its high standing with the evidence based medicine movement, the RCT continues to attract criticism. I show that many of these criticisms are traceable to failure to understand two key concepts in statistics: probabilistic inference and design efficiency. To these methodological misunderstandings can be added the practical one of failing to appreciate that entry into clinical trials is not simultaneous but sequential.
I conclude that although randomisation should not be used as an excuse for ignoring prognostic variables, it is valuable and that many standard criticisms of RCTs are invalid.
The Rothamsted school meets Lord's paradox, by Stephen Senn
Lord's 'paradox' is a notoriously difficult puzzle that is guaranteed to provoke discussion, dissent and disagreement. Two statisticians analyse some observational data and come to radically different conclusions, each of which has acquired defenders over the years since Lord first proposed his puzzle in 1967. It features in the recent Book of Why by Pearl and Mackenzie, who use it to demonstrate the power of Pearl's causal calculus, obtaining a solution they claim is unambiguously right. They also claim that statisticians have failed to get to grips with causal questions for well over a century, in fact ever since Karl Pearson developed Galton's idea of correlation and warned the scientific world that correlation is not causation.
However, only two years before Lord published his paradox, John Nelder outlined a powerful causal calculus for analysing designed experiments based on a careful distinction between block and treatment structure. This represents an important advance in formalising the approach to analysing complex experiments that started with Fisher 100 years ago, when he proposed splitting variability using the square of the standard deviation, which he called the variance, continued with Yates, and has been developed since the 1960s by Rosemary Bailey, amongst others. This tradition might be referred to as the Rothamsted School. It is fully implemented in Genstat® but, as far as I am aware, not in any other package.
With the help of Genstat®, I demonstrate how the Rothamsted School would approach Lord's paradox and come to a solution that is not the same as the one reached by Pearl and Mackenzie, although given certain strong but untestable assumptions it would reduce to theirs. I conclude that the statistical tradition may have more to offer in this respect than has been supposed.
Presidents' invited lecture ISCB Vigo 2017
Discusses various issues to do with how randomised clinical trials should be analysed. See also https://errorstatistics.com/2017/07/01/s-senn-fishing-for-fakes-with-fisher-guest-post/
History of how and why a complex cross-over trial was designed to prove the equivalence of two formulations of a beta-agonist, and what the eventual results were. Presented at the Newton Institute, 28 July 2008. Warning: following the important paper by Kenward & Roger (Biostatistics, 2010), I no longer think the random effects analysis is appropriate, although in fact the results are much the same as for the fixed effects analysis.
The history of p-values is covered to try to shed light on a mystery: why did Student and Fisher agree numerically but disagree in terms of interpretation?
Anti-ulcer drugs and their advanced pharmacology
Anti-ulcer drugs are medications used to prevent and treat ulcers in the stomach and upper part of the small intestine (duodenal ulcers). These ulcers are often caused by an imbalance between stomach acid and the mucosal lining, which protects the stomach lining.
Scope: Overview of various classes of anti-ulcer drugs, their mechanisms of action, indications, side effects, and clinical considerations.
Ethanol (CH3CH2OH), or beverage alcohol, is a two-carbon alcohol that is rapidly distributed in the body and brain. Ethanol alters many neurochemical systems and has rewarding and addictive properties. It is the oldest recreational drug and likely contributes to more morbidity, mortality, and public health costs than all illicit drugs combined. The 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) integrates alcohol abuse and alcohol dependence into a single disorder called alcohol use disorder (AUD), with mild, moderate, and severe subclassifications (American Psychiatric Association, 2013). In the DSM-5, all types of substance abuse and dependence have been combined into a single substance use disorder (SUD) on a continuum from mild to severe. A diagnosis of AUD requires that at least two of the 11 DSM-5 behaviors be present within a 12-month period (mild AUD: 2-3 criteria; moderate AUD: 4-5 criteria; severe AUD: 6-11 criteria). The four main behavioral effects of AUD are impaired control over drinking, negative social consequences, risky use, and altered physiological effects (tolerance, withdrawal). This chapter presents an overview of the prevalence and harmful consequences of AUD in the U.S., the systemic nature of the disease, neurocircuitry and stages of AUD, comorbidities, fetal alcohol spectrum disorders, genetic risk factors, and pharmacotherapies for AUD.
Ozempic: Preoperative Management of Patients on GLP-1 Receptor Agonists, by Saeid Safari
Preoperative management of patients on GLP-1 receptor agonists such as Ozempic (semaglutide).
ASA Guideline
NYSORA Guideline
Two case reports of gastric ultrasound
Lung Cancer: Artificial Intelligence, Synergetics, Complex System Analysis, S..., by Oleg Kshivets
RESULTS: Overall life span (LS) was 2252.1±1742.5 days and cumulative 5-year survival (5YS) reached 73.2%; at 10 years, 64.8%; at 20 years, 42.5%. 513 LCP lived more than 5 years (LS=3124.6±1525.6 days) and 148 LCP more than 10 years (LS=5054.4±1504.1 days); 199 LCP died because of LC (LS=562.7±374.5 days). 5YS of LCP after bi/lobectomies was significantly superior in comparison with LCP after pneumonectomies (78.1% vs. 63.7%, P=0.00001 by log-rank test). AT significantly improved 5YS (66.3% vs. 34.8%, P=0.00000 by log-rank test) only for LCP with N1-2. Cox modeling displayed that 5YS of LCP significantly depended on: phase transition (PT) early-invasive LC in terms of synergetics, PT N0-N12, cell ratio factors (ratio between cancer cells (CC) and blood cell subpopulations), G1-3, histology, glucose, AT, blood cell circuit, prothrombin index, heparin tolerance, and recalcification time (P=0.000-0.038). Neural networks, genetic algorithm selection and bootstrap simulation revealed relationships between 5YS and PT early-invasive LC (rank=1), PT N0-N12 (rank=2), thrombocytes/CC (3), erythrocytes/CC (4), eosinophils/CC (5), healthy cells/CC (6), lymphocytes/CC (7), segmented neutrophils/CC (8), stick neutrophils/CC (9), monocytes/CC (10), and leucocytes/CC (11). Correct prediction of 5YS was 100% by neural network computing (area under ROC curve=1.0; error=0.0).
CONCLUSIONS: 5YS of LCP after radical procedures significantly depended on: 1) PT early-invasive cancer; 2) PT N0-N12; 3) cell ratio factors; 4) blood cell circuit; 5) biochemical factors; 6) hemostasis system; 7) AT; 8) LC characteristics; 9) LC cell dynamics; 10) surgery type (lobectomy/pneumonectomy); 11) anthropometric data. Optimal diagnosis and treatment strategies for LC are: 1) screening and early detection of LC; 2) availability of experienced thoracic surgeons, because of the complexity of radical procedures; 3) aggressive en bloc surgery and adequate lymph node dissection for completeness; 4) precise prediction; 5) adjuvant chemoimmunoradiotherapy for LCP with unfavorable prognosis.
Title: Sense of Smell
Presenter: Dr. Faiza, Assistant Professor of Physiology
Qualifications:
MBBS (Best Graduate, AIMC Lahore)
FCPS Physiology
ICMT, CHPE, DHPE (STMU)
MPH (GC University, Faisalabad)
MBA (Virtual University of Pakistan)
Learning Objectives:
Describe the primary categories of smells and the concept of odor blindness.
Explain the structure and location of the olfactory membrane and mucosa, including the types and roles of cells involved in olfaction.
Describe the pathway and mechanisms of olfactory signal transmission from the olfactory receptors to the brain.
Illustrate the biochemical cascade triggered by odorant binding to olfactory receptors, including the role of G-proteins and second messengers in generating an action potential.
Identify different types of olfactory disorders such as anosmia, hyposmia, hyperosmia, and dysosmia, including their potential causes.
Key Topics:
Olfactory Genes:
3% of the human genome accounts for olfactory genes.
400 genes for odorant receptors.
Olfactory Membrane:
Located in the superior part of the nasal cavity.
Medially: Folds downward along the superior septum.
Laterally: Folds over the superior turbinate and upper surface of the middle turbinate.
Total surface area: 5-10 square centimeters.
Olfactory Mucosa:
Olfactory Cells: Bipolar nerve cells derived from the CNS (100 million), with 4-25 olfactory cilia per cell.
Sustentacular Cells: Produce mucus and maintain ionic and molecular environment.
Basal Cells: Replace worn-out olfactory cells with an average lifespan of 1-2 months.
Bowman’s Gland: Secretes mucus.
Stimulation of Olfactory Cells:
Odorant dissolves in mucus and attaches to receptors on olfactory cilia.
Involves a cascade effect through G-proteins and second messengers, leading to depolarization and action potential generation in the olfactory nerve.
Quality of a Good Odorant:
Small (3-20 Carbon atoms), volatile, water-soluble, and lipid-soluble.
Facilitated by odorant-binding proteins in mucus.
Membrane Potential and Action Potential:
Resting membrane potential: -55mV.
Action potential frequency in the olfactory nerve increases with odorant strength.
Adaptation Towards the Sense of Smell:
Rapid adaptation within the first second, with further slow adaptation.
Psychological adaptation greater than receptor adaptation, involving feedback inhibition from the central nervous system.
Primary Sensations of Smell:
Camphoraceous, Musky, Floral, Pepperminty, Ethereal, Pungent, Putrid.
Odor Detection Threshold:
Examples: Hydrogen sulfide (0.0005 ppm), Methyl-mercaptan (0.002 ppm).
Some toxic substances are odorless at lethal concentrations.
Characteristics of Smell:
Odor blindness for single substances due to lack of appropriate receptor protein.
Behavioral and emotional influences of smell.
Transmission of Olfactory Signals:
From olfactory cells to glomeruli in the olfactory bulb, involving lateral inhibition.
Primitive, less old, and new olfactory systems with different pathways.
Prix Galien International 2024 Forum Program, by Levi Shapiro
June 20, 2024, Prix Galien International and Jerusalem Ethics Forum in Rome. Detailed agenda including panels:
- ADVANCES IN CARDIOLOGY: A NEW PARADIGM IS COMING
- WOMEN’S HEALTH: FERTILITY PRESERVATION
- WHAT'S NEW IN THE TREATMENT OF INFECTIOUS, ONCOLOGICAL AND INFLAMMATORY SKIN DISEASES?
- ARTIFICIAL INTELLIGENCE AND ETHICS
- GENE THERAPY
- BEYOND BORDERS: GLOBAL INITIATIVES FOR DEMOCRATIZING LIFE SCIENCE TECHNOLOGIES AND PROMOTING ACCESS TO HEALTHCARE
- ETHICAL CHALLENGES IN LIFE SCIENCES
- Prix Galien International Awards Ceremony
These lecture slides, by Dr Sidra Arshad, offer a quick overview of the physiological basis of a normal electrocardiogram.
Learning objectives:
1. Define an electrocardiogram (ECG) and electrocardiography
2. Describe how dipoles generated by the heart produce the waveforms of the ECG
3. Describe the components of a normal electrocardiogram in a typical bipolar lead (limb lead II)
4. Differentiate between intervals and segments
5. Enlist some common indications for obtaining an ECG
Study Resources:
1. Chapter 11, Guyton and Hall Textbook of Medical Physiology, 14th edition
2. Chapter 9, Human Physiology - From Cells to Systems, Lauralee Sherwood, 9th edition
3. Chapter 29, Ganong’s Review of Medical Physiology, 26th edition
4. Electrocardiogram, StatPearls - https://www.ncbi.nlm.nih.gov/books/NBK549803/
5. ECG in Medical Practice by ABM Abdullah, 4th edition
6. ECG Basics, http://www.nataliescasebook.com/tag/e-c-g-basics
2. Acknowledgements
Many thanks for the invitation.
This work is partly supported by the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement no. 602552 ("IDEAL").
3. It seems I could stop the talk here
Warnings
• What I know about quality of life could be written on the back of an envelope
• Although I know a lot more about clinical measures, I dislike dichotomies
• Many of you will find much to hate in this talk
• The rest of you may fall asleep
Minimal important difference
"The smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient's management" (Jaeschke et al, 1989)
4. Outline
• Differences for planning
• Differences for interpreting treatment effects?
• Individual effects
• Conclusions?
6. The view (2014)
• Talked about target differences
• Considered two approaches: a difference considered to be important, and a realistic difference
• I will cover four, two of which are similar to the two here
• However, first some statistical basics
9. Delta force
What is delta?
• The difference we would like to observe?
• The difference we would like to 'prove' obtains?
• The difference we believe obtains?
• The difference you would not like to miss?
10. The difference you would like to observe
This view is hopeless: if δ is the value we would like to observe, and if the treatment does, indeed, have a value of δ, then we have only half a chance, not (say) an 80% chance, that the trial will deliver to us a value as big as this.
11. The difference we would like to 'prove' obtains?
This view is even more hopeless: it requires that the lower confidence limit should be greater than δ. This means using δ as a (shifted) null value and trying to reject it. If this is what is needed, the conventional power calculation is completely irrelevant.
12. The difference we believe obtains
• This is very problematic
• It views the sample size as being a function of the treatment and not the disease
• It means that for drugs we think work less well we would use bigger trials
• This seems back to front
• If modified to a Bayesian probability distribution of effects it can be used to calculate assurance
• This has some use in deciding whether to run a trial
13. The difference you would not like to miss
• This is the interpretation I favour.
• The idea is that we control two (conditional) errors in the process.
• The first is α, the probability of claiming that a treatment is effective when it is, in fact, no better than placebo.
• The second is the error of failing to develop a (very) interesting treatment further.
• If a trial in drug development is not 'successful', there is a chance that the whole development programme will be cancelled.
• It is the conditional probability of cancelling an interesting project that we seek to control.
14. But be careful: P=0.05 is a disappointment
• To have greater than 50% power for a significant result, the clinically relevant difference, expressed in standard errors, must exceed the critical value
• But P=0.05 means the test statistic is just equal to the critical value
• Hence the result we see is less than the clinically relevant difference: about 70% of it if you planned for 80% power
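The arithmetic behind this slide, sketched in Python for a two-sided 5% test planned at 80% power: the sample size calculation sets the clinically relevant difference equal to (z_crit + z_power) standard errors, so a result sitting exactly at P=0.05 corresponds to an observed effect of only z_crit standard errors, about 70% of what was planned for.

```python
# Observed effect at exactly P=0.05, as a fraction of the planned difference.
from scipy import stats

alpha, power = 0.05, 0.80
z_crit = stats.norm.ppf(1 - alpha / 2)          # 1.96
z_power = stats.norm.ppf(power)                 # 0.84
ratio = z_crit / (z_crit + z_power)
print(f"observed / planned difference = {ratio:.2f}")   # 0.70
```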
16. A plan is not an inference
• You plan so as to have a reasonably low probability of missing an important effect
• In drug development, if you have a positive result, work goes on
• Furthermore, once the results are in, the plan is largely irrelevant
• You analyse the data you have
• Thus the clinically relevant difference has no direct effect on the inference
17. Clinically irrelevant differences
• Often used for so-called active controlled equivalence studies
• Sponsor tries to show that the new treatment is not inferior to a standard by some agreed margin
• Because if the new treatment really is similar to the existing one, the power of proving that the difference is not 0 is just the type I error rate
• So we now look to prove that the new treatment cannot be inferior by more than an irrelevant amount
18. The problem
• Consider the example of hypertension
• A CPMP guideline from 1998 quotes 2 mm Hg in diastolic blood pressure as clinically irrelevant
• A guideline from 2017 defines response as being normalisation (from 95 mm Hg to < 90 mm Hg) or a 10 mm Hg drop
• So response is 5-10 mm Hg and irrelevance is 2 mm Hg
19. Establishing the minimal important difference
• Clinical or non-clinical anchor
• Mapping to other QoL scores
• For example, a single overall satisfaction question
• Distribution-based approach
• For example, σ/2 (half a standard deviation)
• Empirical rule
• For example, 8% of the theoretical or empirical range of scores
20. Walters & Brazier, 2005
• Took 11 study/population/follow-up combinations
• Based on 8 studies
• SF-6D (0.29 to 1) and EQ-5D (-0.59 to 1) were available for each
• Calculated MID, SD/2 and % of empirical range for both measures for patients who were defined as having had a meaningful response
22. Tiotropium v Placebo in Chronic Obstructive Pulmonary Disease
"Significant differences in favor of tiotropium were observed at all time points for the mean absolute change in the SGRQ total score (ranging from 2.3 to 3.3 units, P<0.001), although the differences on average were below what is considered to have clinical significance (Fig. 2D). The overall mean between-group difference in the SGRQ total score at any time point was 2.7 (95% confidence interval [CI], 2.0 to 3.3) in favor of tiotropium (P<0.001). A higher proportion of patients in the tiotropium group than in the placebo group had an improvement of 4 units or more in the SGRQ total scores from baseline at 1 year (49% vs. 41%), 2 years (48% vs. 39%), 3 years (46% vs. 37%), and 4 years (45% vs. 36%) (P<0.001 for all comparisons)." (My emphasis)
From the UPLIFT Study, NEJM, 2008
23. The St George's Respiratory Questionnaire (SGRQ)
• Jones, Quirk, Baveystock, Littlejohns. A self-complete measure of health status for chronic airflow limitation. American Review of Respiratory Disease, 145(6), 1991
• 2466 citations by 2 March 2017
• 76-item questionnaire
• Minimum score 0, maximum score 100; higher values are worse
• The minimum important difference is generally taken to be 4 points
24. Imagined model
Two Normal distributions with the same spread, but the active treatment has a mean 2.7 higher. If this applies, every patient under active treatment can be matched to a corresponding patient under placebo who is 2.7 worse off.
25. A cumulative plot corresponding to the previous diagram. If 4 is the threshold, the placebo response probability is 0.36 and the active response probability is 0.45.
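A minimal sketch reproducing the slide's two probabilities under the imagined model: a pure mean shift of 2.7 SGRQ points with a common SD. The SD of roughly 11.6 is an assumption, backed out so that the placebo arm crosses the 4-point threshold with probability 0.36; every patient improves by exactly the same 2.7 points, yet the two arms show different 'responder' proportions.

```python
# Identical shift for every patient, different apparent 'responder' rates.
from scipy import stats

shift, threshold, sd = 2.7, 4.0, 11.6           # sd assumed, not estimated
mu_placebo = threshold - sd * stats.norm.ppf(1 - 0.36)   # anchors placebo at 0.36

for label, mu in (("placebo", mu_placebo), ("active", mu_placebo + shift)):
    p = stats.norm.sf(threshold, loc=mu, scale=sd)
    print(f"{label}: P(improvement >= 4) = {p:.2f}")     # 0.36 vs 0.45
```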
26. In summary… this is rather silly
• If there is sufficient measurement error, then even if the true improvement is identically 2.7, some patients will show an 'improvement' of 4
• The conclusion that there is a higher proportion of true responders, by the standard of 4 points, under treatment than under placebo is quite unwarranted
• So what is the point of analysing 'responders'?
27. Who are the authors?
1. Tashkin, DP, Celli, B, Senn, S, Burkhart, D, Kesten, S, Menjoge, S, Decramer, M. A 4-Year Trial of Tiotropium in Chronic Obstructive Pulmonary Disease. N Engl J Med 2008.
Personal note: I am proud to have been involved in this important study and have nothing but respect for my collaborators. That we ended up publishing something like this, even though two of us are statisticians, shows how deeply ingrained the practice of responder analysis is in medical research. We must do something to change this.
29. Conclusions
• Responder analysis is an unforgivable sin
• If used to create a primary variable for analysis it will increase your sample size by at least a half, but usually much more
• It is nearly always accompanied by quite unwarranted causal judgements
• It has led to a lot of nonsense and hype regarding personalised medicine
• Present the results analysed on the original scale
• Let the reader and others use an MID if they want to interpret these results
• If you want to go beyond quoting mean effects you need
• Repeated measures
• Very smart statistics