The statistical revolution of the 20th century was largely concerned with developing methods for analysing small datasets. Student’s paper of 1908 was the first in the English literature to address seriously the problem of second order uncertainty (uncertainty about the measures of uncertainty) and was hailed by Fisher as heralding a new age of statistics. Much of what Fisher did was concerned with problems of what might be called ‘small data’, not only as regards efficient analysis but also as regards efficient design, while paying close attention to what was necessary to measure uncertainty validly.
I shall consider the history of some of these developments, in particular those associated with what might be called the Rothamsted School, starting with Fisher and reaching its apotheosis in John Nelder’s theory of general balance, and see what lessons they hold for the supposed ‘big data’ revolution of the 21st century.
Clinical trials: quo vadis in the age of COVID? - Stephen Senn
A discussion of the role of clinical trials in the age of COVID. My contribution to the phastar 2020 life sciences summit https://phastar.com/phastar-life-science-summit
An early and overlooked causal revolution in statistics was the development of the theory of experimental design, initially associated with the "Rothamsted School". An important stage in the evolution of this theory was the experimental calculus developed by John Nelder in the 1960s with its clear distinction between block and treatment factors in designed experiments. This experimental calculus produced appropriate models automatically from more basic formal considerations but was, unfortunately, only ever implemented in Genstat®, a package widely used in agriculture but rarely so in medical research. In consequence its importance has not been appreciated and the approach of many statistical packages to designed experiments is poor. A key feature of the Rothamsted School approach is that identification of the appropriate components of variation for judging treatment effects is simple and automatic.
The impressive, more recent causal revolution in epidemiology, associated with Judea Pearl, seems, however, to have no place for components of variation. By applying Nelder’s experimental calculus to Lord’s Paradox, I shall show that solutions that have been proposed using the more modern causal calculus are problematic. I shall also show that lessons from designed clinical trials have important implications for the use of historical data and big data more generally.
How to combine results from randomised clinical trials on the additive scale with real world data to provide predictions on the clinically relevant scale for individual patients
The Rothamsted school meets Lord's paradox - Stephen Senn
Lord’s ‘paradox’ is a notoriously difficult puzzle that is guaranteed to provoke discussion, dissent and disagreement. Two statisticians analyse some observational data and come to radically different conclusions, each of which has acquired defenders over the years since Lord first proposed his puzzle in 1967. It features in the recent Book of Why by Pearl and Mackenzie, who use it to demonstrate the power of Pearl’s causal calculus, obtaining a solution they claim is unambiguously right. They also claim that statisticians have failed to get to grips with causal questions for well over a century, in fact ever since Karl Pearson developed Galton’s idea of correlation and warned the scientific world that correlation is not causation.
However, only two years before Lord published his paradox, John Nelder outlined a powerful causal calculus for analysing designed experiments based on a careful distinction between block and treatment structure. This represents an important advance in formalising the approach to analysing complex experiments that started with Fisher 100 years ago, when he proposed splitting variability using the square of the standard deviation (which he called the variance), was continued by Yates, and has been developed since the 1960s by Rosemary Bailey, amongst others. This tradition might be referred to as the Rothamsted School. It is fully implemented in Genstat® but, as far as I am aware, not in any other package.
With the help of Genstat®, I demonstrate how the Rothamsted School would approach Lord’s paradox and come to a solution that is not the same as the one reached by Pearl and Mackenzie, although given certain strong but untestable assumptions it would reduce to it. I conclude that the statistical tradition may have more to offer in this respect than has been supposed.
Talk given at ISCB 2016 Birmingham
For indications and treatments where their use is possible, n-of-1 trials represent a promising means of investigating potential treatments for rare diseases. Each patient permits repeated comparison of the treatments being investigated and this both increases the number of observations and reduces their variability compared to conventional parallel group trials.
However, whether the framework used for analysis is randomisation-based or model-based produces puzzling differences in inference. This can easily be shown by starting on the one hand with the randomisation philosophy associated with the Rothamsted school of inference and building up the analysis through the block + treatment structure approach associated with John Nelder’s theory of general balance (as implemented in GenStat®), or starting on the other hand with a plausible variance component approach through a mixed model. However, it can be shown that these differences are related not so much to the modelling approach per se as to the questions one attempts to answer: ranging from testing whether there was a difference between treatments in the patients studied, to predicting the true difference for a future patient, via making inferences about the effect in the average patient.
This in turn yields interesting insight into the long-run debate over the use of fixed or random effect meta-analysis.
Some practical issues of analysis will also be covered in R and SAS®, in which languages some functions and macros to facilitate analysis have been written. It is concluded that n-of-1 trials hold great promise in investigating chronic rare diseases but that careful consideration of matters of purpose, design and analysis is necessary to make best use of them.
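As a minimal illustrative sketch (in Python, with simulated data; not the R and SAS® code referred to above), the contrast between asking about the average effect in the patients actually studied and predicting the effect for a future patient shows up directly in the standard error once between-patient variation in the treatment effect is allowed for:

```python
# Illustrative sketch (simulated n-of-1 series, not real trial data): the standard
# error of the estimated treatment effect depends on whether the target is the
# patients studied (within-patient error only) or a future patient (between-patient
# variation in the effect also counts).
import numpy as np

rng = np.random.default_rng(602552)
n_patients, n_cycles = 12, 4
true_patient_effect = rng.normal(2.0, 1.5, n_patients)   # effect varies between patients
# one treatment-minus-control difference per treatment cycle within each patient
diffs = true_patient_effect[:, None] + rng.normal(0.0, 1.0, (n_patients, n_cycles))

patient_means = diffs.mean(axis=1)
estimate = patient_means.mean()

# Question 1: average effect in *these* patients (only within-patient error matters)
within_var = ((diffs - patient_means[:, None]) ** 2).sum() / (n_patients * (n_cycles - 1))
se_these_patients = np.sqrt(within_var / (n_patients * n_cycles))

# Question 2: effect for the average *future* patient (between-patient variation counts too)
se_future_patient = patient_means.std(ddof=1) / np.sqrt(n_patients)

print(f"estimate = {estimate:.2f}, "
      f"SE (these patients) = {se_these_patients:.2f}, "
      f"SE (future patient) = {se_future_patient:.2f}")
```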
Acknowledgement
This work is partly supported by the European Union’s 7th Framework Programme for research, technological development and demonstration under grant agreement no. 602552. “IDEAL”
Personalised medicine: a sceptical view - Stephen Senn
Some grounds for believing that the current enthusiasm about personalised medicine is exaggerated, founded on poor statistics and represents a disappointing loss of ambition.
This year marks the 70th anniversary of the Medical Research Council randomised clinical trial (RCT) of streptomycin in tuberculosis led by Bradford Hill. This is widely regarded as a landmark in clinical research. Despite its widespread use in drug regulation and in clinical research more widely, and its high standing with the evidence based medicine movement, the RCT continues to attract criticism. I show that many of these criticisms are traceable to a failure to understand two key concepts in statistics: probabilistic inference and design efficiency. To these methodological misunderstandings can be added the practical one of failing to appreciate that entry into clinical trials is not simultaneous but sequential.
I conclude that although randomisation should not be used as an excuse for ignoring prognostic variables, it is valuable and that many standard criticisms of RCTs are invalid.
The response to the COVID-19 crisis by various vaccine developers has been extraordinary, both in terms of speed of response and the delivered efficacy of the vaccines. It has also raised some fascinating issues of design, analysis and interpretation. I shall consider some of these issues, taking as my examples five vaccines: Pfizer/BioNTech, AstraZeneca/Oxford, Moderna, Novavax, and J&J Janssen, but concentrating mainly on the first two. Among matters covered will be concurrent control, efficient design, issues of measurement raised by two-shot vaccines and implications for roll-out, and the surprising effectiveness of simple analyses. Differences between the five development programmes as they affect statistics will be covered but some essential similarities will also be discussed.
Unfortunately, some have interpreted Numbers Needed to Treat as indicating the proportion of patients on whom the treatment has had a causal effect. This interpretation is very rarely, if ever, necessarily correct. It is certainly inappropriate if based on a responder dichotomy. I shall illustrate the problem using simple causal models.
One also sometimes encounters the claim that the extent to which two distributions of outcomes overlap from a clinical trial indicates how many patients benefit. This is also false and can be traced to a similar causal confusion.
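A minimal numerical sketch (invented figures, assuming a normally distributed outcome and a responder threshold; not part of the original abstract) of why 1/NNT cannot be read as the proportion of patients helped:

```python
# Illustrative sketch (invented figures): the same NNT arises whether the treatment
# helps a minority a lot or everybody a little, so NNT cannot be read as the
# proportion of patients on whom the treatment has a causal effect.
from scipy.stats import norm

threshold = 0.0      # 'responder' = outcome above this cut-off
delta = 0.5          # additive treatment effect: every patient improves by delta

p_control = 1 - norm.cdf(threshold, loc=0.0, scale=1.0)
p_treated = 1 - norm.cdf(threshold, loc=delta, scale=1.0)
nnt = 1.0 / (p_treated - p_control)

print(f"responder rates: control {p_control:.2f}, treated {p_treated:.2f}, NNT = {nnt:.1f}")
# Every patient benefited by the same amount, yet only about 1/NNT of patients are
# 'extra responders': reading 1/NNT as the proportion helped would be quite wrong here.
```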
Presidents' invited lecture ISCB Vigo 2017
Discusses various issues to do with how randomised clinical trials should be analysed. See also https://errorstatistics.com/2017/07/01/s-senn-fishing-for-fakes-with-fisher-guest-post/
When estimating sample sizes for clinical trials there are several different views that might be taken as to what definition and meaning should be given to the sought-for treatment effect. However, if the concept of a ‘minimally important difference’ (MID) does have relevance to interpreting clinical trials (which can be disputed) then its value cannot be the same as the ‘clinically relevant difference’ (CRD) that would be used for planning them.
A doubly pernicious use of the MID is as a means of classifying patients as responders and non-responders. Not only does such an analysis lead to an increase in the necessary sample size but it misleads trialists into making causal distinctions that the data cannot support and has been responsible for exaggerating the scope for personalised medicine.
In this talk these statistical points will be explained using a minimum of technical detail.
Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one.
A good case can be made that the Bayesian, de Finetti, just like Popper, was a falsificationist. A thumbnail view, which is not just a caricature, of de Finetti’s theory of learning, is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising.
On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing, you can, if you wish, shift conventional null and alternative hypotheses, making the latter the strawman and by ‘disproving’ it, assert the former.
The frequentist, Fisher, however, at least in his approach to testing of hypotheses, seems to have taken a strong view that the null hypothesis was quite different from any other and that there was a strong asymmetry in the inferences that followed from the application of significance tests.
Finally, to complete a quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to ‘prove’ scientific laws.
By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that ‘treatments’ do not differ from each other, I shall show that there are fundamental differences between ‘proving’ and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question as to whether it is enough for frequentists.
The Seven Habits of Highly Effective Statisticians - Stephen Senn
If you know why the title of this talk is extremely stupid, then you clearly know something about control, data and reasoning: in short, you have most of what it takes to be a statistician. If you have studied statistics then you will also know that a large amount of anything, and this includes successful careers, is luck.
In this talk I shall try to share some of my experiences of being a statistician in the hope that it will help you make the most of whatever luck life throws at you. In so doing, I shall try my best to overcome the distorting influence of that easiest of sciences, hindsight. Without giving too much away, I shall be recommending that you read, listen, think, calculate, understand, communicate, and do. I shall give you some examples of what I think works and what I think doesn’t.
In all of this you should never forget the power of negativity and also the joy of being able to wake up every day and say to yourself ‘I love the smell of data in the morning’.
There are many questions one might ask of a clinical trial, ranging from what was the effect in the patients studied to what might the effect be in future patients via what was the effect in individual patients? The extent to which the answer to these questions is similar depends on various assumptions made and in some cases the design used may not permit any meaningful answer to be given at all.
A related issue is confusion between randomisation, random sampling, linear model and true multivariate based modelling. These distinctions don’t matter much for some purposes and under some circumstances but for others they do.
There are many valid criticisms of P-values but the criticism that they are largely responsible for the reproducibility crisis has been accepted rather lightly in some quarters. Whatever the inferential statistic that is used, it is quite illogical to assume that as the sample size increases it will tend to show more evidence against the null hypothesis. This applies to Bayesian posterior probabilities as much as it does to P-values. In the context of P-values it can be referred to as the trend towards significance fallacy but more generally, for reasons I shall explain, it could be referred to as the anticipated evidence fallacy.
The anticipated evidence fallacy is itself an example of the overstated evidence fallacy. I shall also discuss this fallacy and other relevant matters affecting reproducible science including the problem of false negatives.
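One aspect of this can be illustrated with a small simulation sketch (not part of the original abstract): if the null hypothesis is true, P-values do not drift towards significance as the sample size grows; they simply remain uniformly distributed.

```python
# Illustrative sketch: under the null hypothesis, increasing the sample size does not
# make the P-value trend towards significance; the rejection rate stays at the level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
for n in (20, 200, 2000):
    pvals = np.array([stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
                      for _ in range(2000)])
    print(f"n = {n:4d}: proportion of P < 0.05 = {np.mean(pvals < 0.05):.3f}")
```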
What should we expect from reproducibility? - Stephen Senn
Is there really a reproducibility crisis and, if so, are P-values to blame? Choose any statistic you like, carry out two identical independent studies and report this statistic for each. In advance of collecting any data, you ought to expect that it is just as likely that statistic 1 will be smaller than statistic 2 as vice versa. Once you have seen statistic 1, things are not so simple, but if they are not so simple, it is because you have other information in some form. However, it is at least instructive that you need to be careful in jumping to conclusions about what to expect from reproducibility. Furthermore, the forecasts of good Bayesians ought to obey a martingale property. On average you should be in the future where you are now but, of course, your inferential random walk may lead to some peregrination before it homes in on “the truth”. But you certainly can’t generally expect that a probability will get smaller as you continue. P-values, like other statistics, are a position, not a movement. Although often claimed, there is no such thing as a trend towards significance.
Using these and other philosophical considerations I shall try and establish what it is we want from reproducibility. I shall conclude that we statisticians should probably be paying more attention to checking that standard errors are being calculated appropriately and rather less to the inferential framework.
Talk given at RSS 2016 Manchester
I consider the problems that the ASA faced in getting a P-value statement together, not in terms of the process, but by looking at the opinions expressed in 21 published commentaries on the agreed statement. I then trace the history of the development of P-values. I show that the perceived problem with P-values is not just one of a supposed inadequacy of frequentist statistics but reflects a struggle at the very heart of Bayesian inference. I conclude that replacing P-values by automatic Bayesian approaches is unlikely to abolish controversy. It may be better to try and embrace diversity than to pretend it is not there.
The Rothamsted School & the analysis of designed experiments - Stephen Senn
A historical account is given of the approach of "The Rothamsted School" to the analysis of designed experiments. The link between the way that experiments are designed and how they should be analysed is fundamental to this approach. The key figures are RA Fisher, Frank Yates and John Nelder.
There are many questions one might ask of a clinical trial, ranging from what was the effect in the patients studied to what might the effect be in future patients via what was the effect in individual patients? The extent to which the answer to these questions is similar depends on various assumptions made and in some cases the design used may not permit any meaningful answer to be given at all.
A related issue is confusion between randomisation, random sampling, linear model and true multivariate based modelling. These distinctions don’t matter much for some purposes and under some circumstances but for others they do.
A yet further issue is that causal analysis in epidemiology, which has brought valuable insights in many cases, has tended to stress point estimates and ignore standard errors. This has potentially misleading consequences.
An understanding of components of variation is key. Unfortunately, the development of two particular topics in recent years, evidence synthesis by the evidence based medicine movement and personalised medicine by bench scientists, has paid scant attention to components of variation, to the questions being asked, or to both, resulting in confusion about many issues.
For instance, it is often claimed that numbers needed to treat indicate the proportion of patients for whom treatments work, that inclusion criteria determine the generalisability of results and that heterogeneity means that a random effects meta-analysis is required. None of these is true. The scope for personalised medicine has very plausibly been exaggerated and an important cause of variation in the healthcare system, physicians, is often overlooked.
I shall argue that thinking about questions is important.
Clinical trials are about comparability, not generalisability - Stephen Senn
It is a fundamental but common mistake to regard clinical trials as being a form of representative inference. The key issue is comparability. Experiments do not involve typical material. In clinical trials it is concurrent control that is key, and randomisation is a device for ensuring that standard errors, which should reflect the design, are calculated appropriately.
Generalisation beyond the clinical trial always involves theory.
Clinical trials are about comparability, not generalisability - Stephen Senn
Lecture delivered at the September 2022 EFSPI meeting in Basle in which I argued that the patients in a clinical trial should not be viewed as being a representative sample of some target population.
Sample size determination in clinical trials is considered from various ethical and practical perspectives. It is concluded that cost is a missing dimension and that the value of information is key.
In Search of Lost Infinities: What is the “n” in big data? - Stephen Senn
In designing complex experiments, agricultural scientists, with the help of their statistician collaborators, soon came to realise that variation at different levels had very different consequences for estimating different treatment effects, depending on how the treatments were mapped onto the underlying block structure. This was a key feature of the Rothamsted approach to design and analysis and a strong thread running through the work of Fisher, Yates and Nelder, being expressed in topics such as split-plot designs, recovering inter-block information and fractional factorials. The null block-structure of an experiment is key to this philosophy of design and analysis. However, modern techniques for analysing experiments stress models rather than symmetries, and this modelling approach requires much greater care in analysis, with the consequence that you can easily make mistakes and often will.
In this talk I shall underline the obvious, but often unintentionally overlooked, fact that understanding variation at the various levels at which it occurs is crucial to analysis. I shall take three examples, an application of John Nelder’s theory of general balance to Lord’s Paradox, the use of historical data in drug development and a hybrid randomised non-randomised clinical trial, the TARGET study, to show that the data that many, including those promoting a so-called causal revolution, assume to be ‘big’ may actually be rather ‘small’. The consequence is that there is a danger that the size of standard errors will be underestimated or even that the appropriate regression coefficients for adjusting for confounding may not be identified correctly.
I conclude that an old but powerful experimental design approach holds important lessons for observational data about limitations in interpretation that mere numbers cannot overcome. Small may be beautiful, after all.
It is argued that when it comes to nuisance parameters an assumption of ignorance is harmful. On the other hand this raises problems as to how far one should go in searching for further data when combining evidence.
History of how and why a complex cross-over trial was designed to prove the equivalence of two formulations of a beta-agonist, and what the eventual results were. Presented at the Newton Institute, 28 July 2008. Warning: following the important paper by Kenward and Roger (Biostatistics, 2010), I no longer think the random effects analysis is appropriate, although, in fact, the results are pretty much the same as for the fixed effects analysis.
Minimisation is an approach to allocating patients to treatment in clinical trials that forces a greater degree of balance than does randomisation. Here I explain why I dislike it.
The history of P-values is covered to try and shed light on a mystery: why did Student and Fisher agree numerically but disagree in terms of interpretation?
To infinity and beyond v2
1. To Infinity and Beyond
How ‘big’ are your data, really?
Stephen Senn
Consultant Statistician
Edinburgh
stephen@senns.uk
2. Acknowledgements
Many thanks for the invitation
This work is partly supported by the European Union’s 7th Framework Programme for research, technological development and demonstration under grant agreement no. 602552, “IDEAL”.
3. Outline
Part I (Not so technical and shorter)
• The roots of modern statistics
• Small data
• Careful design of experiments
• Some examples of problems with judging causality from associations in the health care field
• Two different objectives of clinical trials
Part II (More technical and longer)
• Design
• The Rothamsted (Genstat) approach
• Some statistical issues
• Conclusion
4. Basic Thesis
• We know that there is a close and fundamental relationship between how experiments are designed and how they should be analysed
• This should make us worry whenever we have to analyse data that are not from a carefully designed study, however big that study may be
• We should be sceptical of many of the claims we hear for the power of ‘big data’
5. Part I
Less technical matter to do with the history of statistics and basic ‘philosophical’ considerations
7. William Sealy Gosset, 1876-1937
• Born Canterbury 1876
• Educated Winchester and Oxford
• First in mathematical moderations 1897 and first in degree in Chemistry 1899
• Starts with Guinness in 1899 in Dublin
• Autumn 1906-spring 1907 with Karl Pearson at UCL
• 1908 publishes ‘The probable error of a mean’
• First method available to judge ‘significance’ in small samples
8. Ronald Aylmer Fisher, 1890-1962
• Most influential statistician ever
• Also major figure in evolutionary biology
• Educated Harrow and Cambridge
• Statistician at Rothamsted agricultural station 1919-1933
• Developed theory of small sample inference and many modern concepts
  • Likelihood, variance, sufficiency, ANOVA
• Developed theory of experimental design
  • Blocking, randomisation, replication
9. Small data challenges
• Situation: Sample size small. Problem: Too few data to estimate variance adequately. Solution: Develop small sample test (Student)
• Situation: Experimental material not homogenous. Problem: Dealing with variability. Solution: Blocking and randomisation (Fisher)
• Situation: Limited time (1). Problem: How to study more than one thing. Solution: Complex treatment structure, factorial experiments (Fisher, Yates)
• Situation: Limited time (2). Problem: How to study very many factors. Solution: Fractional factorials (Yates)
• Situation: Experimental material varies at different levels. Problem: Some treatments can be varied at lowest level but not all. Solution: General balance approach to analysis (Nelder)
10. Characteristics of development of statistics in the first half of the 20th century
• Numerical work was arduous and long
  • Human computers
  • Desk calculators
  • Careful thought as to how to perform a calculation paid dividends
• Much development of inferential theory for small samples
• Design of experiments became a new subject in its own right developed by statisticians
  • Orthogonality
    • Made calculation easier (e.g. decomposition of variance terms in ANOVA)
    • Increased efficiency
  • Randomisation
    • “Guaranteed” properties of statistical analysis
    • Dealt with hidden confounders
  • Factorial experimentation
    • Efficient way to study multiple influences
11. TARGET study
• Trial of more than 18,000 patients in osteoarthritis over one year or more
• Two sub-studies
  • Lumiracoxib v ibuprofen
  • Lumiracoxib v naproxen
• Stratified by aspirin use or not
• Has some features of a randomised trial but also some of a non-randomised study
12. Data Filtering: Some Examples
• Finding: Oscar winners lived longer than actors who didn’t win an Oscar. Possible explanation: The longer you live the greater your chance of winning.
• Finding: A 20 year follow-up study of women in an English village found higher survival amongst smokers than non-smokers. Possible explanation: The smokers were from more recent generations; they were much younger than non-smokers.
• Finding: Transplant receivers on highest doses of cyclosporine had higher probability of graft rejection than on lower doses. Possible explanation: The anticipated transplant rejection was the cause of the dose being increased.
• Finding: Left-handers observed to die younger on average than right-handers. Possible explanation: In an earlier era left-handers were forced to become right-handers.
• Finding: Obese infarct survivors have better prognosis than non-obese. Possible explanation: There are two kinds of infarct: very serious, which is independent of weight, and less serious, linked to obesity.
13. Morals
• What you don’t see can be important
• Where you have not been able to run trials, biases can be very important
  • TARGET study provides a strong warning
  • Observational studies show that alternative explanations are possible
• For some purposes just piling on data does not really help
• What helps are
  • Careful design
  • Thinking!
14. We tend to believe “the truth is in there”, but sometimes it isn’t and the danger is we will find it anyway
15. Causal versus predictive inference
• Clinical trials can be used to try and answer a number of very different questions
• Two examples are
  • Did the treatment have an effect in these patients? A causal purpose
  • What will the effect be in future patients? A predictive purpose
• Unfortunately, in practice, an answer is produced without stating what the question was
  • Given certain assumptions these questions can be answered using the same analysis but the assumptions are strong and rarely stated
16. Two models
Predictive
• The population is taken to be ‘patients in general’
  • Of course this really means future patients
  • They are the ones to whom the treatment will be applied
• We treat the patients in the trial as an appropriate selection from this population
• This does not require them to be typical but it does require additivity of the treatment effect
Causal
• We take the patients as fixed
• We want to know what the effect was for them
• Unfortunately there are missing counterfactuals
  • What would have happened to control patients given intervention and vice-versa
• The population is the population of all possible allocations to the patients studied
19. Trial in asthma
Basic situation
• Two beta-agonists compared: Zephyr (Z) and Mistral (M)
• Block structure has several levels
• Different designs will be investigated
  • Cluster
  • Parallel group
  • Cross-over trial
• Each design will be blocked at a different level
• NB Each design will collect 6 x 4 x 2 x 7 = 336 measurements of Forced Expiratory Volume in one second (FEV1)
Block structure (number within higher level; total number)
• Centre: 6; 6
• Patient: 4; 24
• Episodes: 2; 48
• Measurements: 7; 336
20. Block structure
• Patients are nested within centres
• Episodes are nested within patients
• Measurements are nested within episodes
• Centres/Patients/Episodes/Measurements
(Measurements not shown in the diagram)
21. Possible designs
• Cluster randomised
  • In each centre all the patients either receive Zephyr (Z) or Mistral (M) in both episodes
  • Three centres are chosen at random to receive Z and three to receive M
• Parallel group trial
  • In each centre half the patients receive Z and half M in both episodes
  • Two patients per centre are randomly chosen to receive Z and two to receive M
• Cross-over trial
  • Each patient receives M in one episode and Z in the other
  • The order of allocation, ZM or MZ, is random
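The following sketch (Python rather than Genstat, using the hypothetical treatment labels above) shows how the three designs allocate Zephyr and Mistral at different levels of the same block structure:

```python
# Minimal sketch of the three allocation schemes over the 6 x 4 x 2 block structure
# (centres x patients x episodes); the 7 measurements per episode are omitted
# because treatment cannot vary below the episode level.
import random
import itertools

centres = range(1, 7)
patients = range(1, 5)   # within centre
episodes = range(1, 3)   # within patient

def cluster_randomised(rng):
    """All patients in a centre get the same treatment in both episodes."""
    arms = ['Z'] * 3 + ['M'] * 3
    rng.shuffle(arms)
    centre_arm = dict(zip(centres, arms))
    return {(c, p, e): centre_arm[c]
            for c, p, e in itertools.product(centres, patients, episodes)}

def parallel_group(rng):
    """Within each centre, two patients get Z and two get M, in both episodes."""
    alloc = {}
    for c in centres:
        arms = ['Z', 'Z', 'M', 'M']
        rng.shuffle(arms)
        for p, arm in zip(patients, arms):
            for e in episodes:
                alloc[(c, p, e)] = arm
    return alloc

def cross_over(rng):
    """Each patient gets Z in one episode and M in the other, in random order."""
    alloc = {}
    for c, p in itertools.product(centres, patients):
        order = rng.choice([('Z', 'M'), ('M', 'Z')])
        for e, arm in zip(episodes, order):
            alloc[(c, p, e)] = arm
    return alloc

rng = random.Random(2016)
for design in (cluster_randomised, parallel_group, cross_over):
    alloc = design(rng)
    n_z = sum(arm == 'Z' for arm in alloc.values())
    print(f"{design.__name__}: {n_z} of {len(alloc)} patient-episodes on Z")
```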
25. Null (skeleton) analysis of variance with Genstat®
Code:
BLOCKSTRUCTURE Centre/Patient/Episode/Measurement
ANOVA
(Output panel not reproduced here.)
26. Full (skeleton) analysis of variance with Genstat®
Additional code:
TREATMENTSTRUCTURE Design[]
ANOVA
(Here Design[] is a pointer with values corresponding to each of the three designs. Output panel not reproduced here.)
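As a sketch of what the null analysis of variance is doing (plain Python arithmetic, not Genstat output), the degrees of freedom in each stratum follow directly from the nesting Centre/Patient/Episode/Measurement:

```python
# Degrees of freedom of the null (skeleton) ANOVA strata implied by the block
# structure Centre/Patient/Episode/Measurement in the asthma example.
strata = [
    ("Centre",                               6),   # number of units at this level
    ("Centre.Patient",                      24),
    ("Centre.Patient.Episode",              48),
    ("Centre.Patient.Episode.Measurement", 336),
]

previous_units = 1  # the grand mean accounts for one degree of freedom
for name, units in strata:
    df = units - previous_units
    print(f"{name:38s} d.f. = {df}")
    previous_units = units
# Total d.f. = 335 = 336 - 1, split as 5 + 18 + 24 + 288.
```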
27. The bottom line
• The approach recognises that things vary
  • Centres, patients, episodes
• It does not require everything to be balanced
• Things that can be eliminated will be eliminated by design
  • Cross-over trial eliminates patients and centres
  • Parallel group trial eliminates centres
  • Cluster randomised eliminates none of these
• The measure of uncertainty produced by the analysis will reflect what cannot be eliminated
• This requires matching the analysis to the design
• Note that Genstat® deals with this formally and automatically. Other packages do not.
28. (c) Stephen Senn 28
To call in the statistician after
the experiment is done may be
no more than asking him to
perform a post-mortem
examination: he may be able
to say what the experiment
died of
RA Fisher
29. A genuine example (a real trial)
Hills and Armitage 1979
• A cross-over trial of enuresis
• Patients randomised to one of two sequences
• Active treatment in period 1 followed by placebo in period 2
• Placebo in period 1 followed by active treatment in period 2
• Treatment periods were 14 days long
• Number of dry nights measured
(c) Stephen Senn 29
30. Important points to note
• Because every patient acts as his own control all patient level
covariates (of which there could be thousands and thousands) are
perfectly balanced
• Differences in these covariates can have no effect on the difference
between results under treatment and the results under placebo
• However, period level covariates (changes within the lives of patients)
could have an effect
• My normal practice is to fit a period effect as well as patient effects;
however, I shall omit doing so here to simplify matters
• The parametric analysis then reduces to what is sometimes called a
matched pairs t-test
(c) Stephen Senn 30
31. Cross-over trial in
Enuresis
Two treatment periods of
14 days each
1. Hills, M, Armitage, P. The two-period
cross-over clinical trial, British Journal of Clinical
Pharmacology 1979; 8: 7-20.
(c) Stephen Senn 31
32. Two Parametric Approaches
Not fitting patient effect
Estimate   s.e.    t(56)   t pr.
2.172      0.964   2.25    0.0282

Fitting patient effect
Estimate   s.e.    t(28)   t pr.
2.172      0.616   3.53    0.00147
Note that, ignoring the patient effect, the P-value is less impressive and the standard error is larger.
The method posts higher uncertainty because, unlike the within-patient analysis, it makes no assumption that the patient-level covariates are balanced.
Of course, in this case, since we know the patient-level covariates are balanced, this analysis is wrong.
(c) Stephen Senn 32
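A minimal Python sketch of the two analyses using scipy. The data below are simulated for illustration only (the Hills and Armitage values are not reproduced here), but the structure — 29 patients, each contributing a result under both treatments — matches the degrees of freedom shown above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired dry-night results for 29 patients (illustrative values only,
# NOT the Hills & Armitage data): strong between-patient differences plus noise.
patient_level = rng.normal(7, 2, size=29)
active = patient_level + rng.normal(2, 2, size=29)
placebo = patient_level + rng.normal(0, 2, size=29)

# "Not fitting patient effect": treat the two sets of results as independent samples.
t_ind, p_ind = stats.ttest_ind(active, placebo)

# "Fitting patient effect": matched-pairs t-test on the within-patient differences.
t_rel, p_rel = stats.ttest_rel(active, placebo)

print(f"ignoring patients: t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"within patients:   t = {t_rel:.2f}, p = {p_rel:.4f}")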
33. The blue diamond shows the treatment effect, whether or not we condition on patient as a factor. It is identical because the trial is balanced by patient. However, the permutation distribution is quite different, and our inferences differ according to whether we condition (red) or not (black); clearly, balancing the randomisation by patient and then not conditioning the analysis by patient is wrong.
(c) Stephen Senn 33
34. The two permutation* distributions summarised

Summary statistics for permuted difference, no blocking
• Number of observations = 10000
• Mean = -0.00319
• Median = -0.0345
• Minimum = -3.621
• Maximum = 3.690
• Lower quartile = -0.655
• Upper quartile = 0.655
• Standard deviation = 0.993
• P-value for observed difference 0.0344 (parametric P-value 0.0282)

Summary statistics for permuted difference, blocking
• Number of observations = 10000
• Mean = -0.00339
• Median = 0.0345
• Minimum = -2.793
• Maximum = 2.517
• Lower quartile = -0.517
• Upper quartile = 0.517
• P-value for observed difference 0.001 (parametric P-value 0.00147)

*Strictly speaking, these are randomisation distributions
(c) Stephen Senn 34
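A rough Python sketch of how the two randomisation distributions can be generated. The data are again simulated stand-ins, not the trial values; the point is the contrast between re-randomising as if the trial were a parallel-group design and re-randomising within patients.

import numpy as np

rng = np.random.default_rng(0)

def randomisation_distributions(active, placebo, n_rand=10000):
    # Mean difference (active - placebo) under two re-randomisation schemes.
    diffs = active - placebo
    n = len(diffs)
    pooled = np.concatenate([active, placebo])
    no_blocking = np.empty(n_rand)
    blocking = np.empty(n_rand)
    for i in range(n_rand):
        # No blocking: reallocate the 2n results to two arms completely at random.
        perm = rng.permutation(pooled)
        no_blocking[i] = perm[:n].mean() - perm[n:].mean()
        # Blocking: flip each patient's treatment label, respecting the cross-over.
        signs = rng.choice([-1.0, 1.0], size=n)
        blocking[i] = (signs * diffs).mean()
    return no_blocking, blocking

# Illustrative use with simulated paired data for 29 patients.
patient_level = rng.normal(7, 3, 29)
active = patient_level + rng.normal(2, 1, 29)
placebo = patient_level + rng.normal(0, 1, 29)
observed = (active - placebo).mean()
no_blocking, blocking = randomisation_distributions(active, placebo)
print("P (no blocking):", (np.abs(no_blocking) >= abs(observed)).mean())
print("P (blocking):   ", (np.abs(blocking) >= abs(observed)).mean())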
35. What happens if you balance but don’t condition?

Approach                                        Variance of estimated treatment    Mean of estimated variance of
                                                effect over all randomisations*    treatment effect over all randomisations*
Completely randomised, analysed as such         0.987                              0.996
Randomised within-patient, analysed as such     0.534                              0.529
Randomised within-patient, analysed as
completely randomised                           0.534                              1.005

*Based on 10000 random permutations
(c) Stephen Senn 35
That is to say, permute values respecting the fact that they come from a cross-over but analyse them as if
they came from a parallel group trial
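The third row of the table can be reproduced in spirit with a small simulation. This Python sketch (simulated data under no treatment effect, not the trial's values) randomises within patient but then estimates the variance with the completely randomised, two-sample formula, and shows that the naive formula overstates the real variability of the estimate.

import numpy as np

rng = np.random.default_rng(0)
n_patients, n_rand = 29, 10000

# Fixed simulated responses under no treatment effect: a large between-patient
# component and smaller within-patient noise (illustrative values only).
patient_level = rng.normal(0, 3, n_patients)
episode1 = patient_level + rng.normal(0, 1, n_patients)
episode2 = patient_level + rng.normal(0, 1, n_patients)

estimate = np.empty(n_rand)
naive_variance = np.empty(n_rand)
for i in range(n_rand):
    # Within-patient randomisation: a coin toss decides which episode is
    # labelled "treatment" for each patient.
    flip = rng.random(n_patients) < 0.5
    treat = np.where(flip, episode1, episode2)
    control = np.where(flip, episode2, episode1)
    estimate[i] = treat.mean() - control.mean()
    # Variance estimated as if the trial were completely randomised (pairing ignored).
    naive_variance[i] = treat.var(ddof=1) / n_patients + control.var(ddof=1) / n_patients

print("variance of estimates over randomisations:", estimate.var())
print("mean of naive variance estimates:         ", naive_variance.mean())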
36. The difference between mathematical and applied statistics is that the former is full of lemmas whereas the latter is full of dilemmas
(c) Stephen Senn 36
37. The Shocking Truth
• The validity of conventional analysis of randomised trials does not
depend on covariate balance
• It is valid because the covariates are not perfectly balanced
• An allowance is already made for things being unbalanced
• If they were balanced the standard analysis would be wrong
• Like an insurance broker forbidding you to travel abroad in the policy but
calculating your premiums on the assumption that you will
• This accounts for unobserved covariates. What happens when they
are observed?
(c) Stephen Senn 2019 37
38. (c) Stephen Senn 2019
• Two dice are rolled
– Red die
– Black die
• You have to call correctly the probability of a total score of 10
• Three variants
– Game 1 You call the probability and the dice are rolled
together
– Game 2 the red die is rolled first, you are shown the score
and then must call the probability
– Game 3 the red die is rolled first, you are not shown the
score and then must call the probability
Game of Chance
38
39. (c) Stephen Senn 2019
Total Score when Rolling Two Dice
Variant 1. Three of 36 equally likely results give a 10. The probability is 3/36=1/12.
39
40. (c) Stephen Senn 2019
Variant 2: If the red die score is 1, 2 or 3, the probability of a total of 10 is 0.
If the red die score is 4, 5 or 6, the probability of a total of 10 is 1/6.
Variant 3: The probability = (½ x 0) + (½ x 1/6) = 1/12
Total Score when Rolling Two Dice
40
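The three variants can be checked by brute-force enumeration; a small Python sketch:

from fractions import Fraction
from itertools import product

# Games 1 and 3: all 36 equally likely (red, black) outcomes are in play.
outcomes = list(product(range(1, 7), repeat=2))
print(Fraction(sum(r + b == 10 for r, b in outcomes), 36))   # 1/12

# Game 2: condition on the observed red die score.
for red in range(1, 7):
    print(red, Fraction(sum(red + b == 10 for b in range(1, 7)), 6))  # 0 for 1-3, 1/6 for 4-6

# Game 3 recovers game 1 by averaging over the unseen red die.
print(sum(Fraction(sum(r + b == 10 for b in range(1, 7)), 6) for r in range(1, 7)) / 6)  # 1/12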
41. The morals
Dice games
• You can’t treat game 2 like game 1
• You must condition on the information
received
• You must use the actual data from the red die
• You can treat game 3 like game 1
• You can use the distribution in probability
that the red die has
Inference in general
• You can’t use the random behaviour of
a system to justify ignoring
information that arises from the
system
• That would be to treat game 2 like game 1
• You can use the random behaviour of
the system to justify ignoring that
which has not been seen
• You are entitled to treat game 3 like game 1
(c) Stephen Senn 2019 41
42. What does the Rothamsted approach do?
• Matches the allocation procedure to the analysis. You can either
regard this as meaning
• The randomisation you carried out guides the analysis
• The analysis you intend guides the randomisation
• Or both
• Either way, the idea is to avoid inconsistency
• Regarding something as being very important at the allocation stage but not
at the analysis stage is inconsistent
• Permits you not only to take account of things seen but also to make
an appropriate allowance for things unseen
• The die analogy: the approach makes sure that the game is a fair one
(c) Stephen Senn 42
43. A simulation example
• I am going to simulate 200 clinical trials
• Trials are of a bronchodilator against placebo.
• Simple randomisation of 50 patients to each arm
• I shall have values at outcome and values at baseline
• Forced expiratory volume in one second (FEV1) in mL
• Parameter settings
• True mean under placebo 2200 mL
• Under bronchodilator 2500 mL
• Treatment effect is 300 mL
• SD at outcome and baseline is 150 mL
• Correlation is 0.7
(c) Stephen Senn 2019 43
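A minimal Python sketch of this simulation with the parameter settings above (my own illustration; the exact random values will of course differ from those behind the slides):

import numpy as np

rng = np.random.default_rng(2020)

n_trials, n_per_arm = 200, 50
mu_baseline = 2200.0                      # FEV1 in mL before treatment
effect = 300.0                            # bronchodilator adds 300 mL on average
sd, rho = 150.0, 0.7                      # common SD and baseline-outcome correlation
cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])

estimates = np.empty(n_trials)
for i in range(n_trials):
    # Each row is (baseline, outcome) for one patient.
    placebo = rng.multivariate_normal([mu_baseline, mu_baseline], cov, n_per_arm)
    active = rng.multivariate_normal([mu_baseline, mu_baseline + effect], cov, n_per_arm)
    # Unadjusted estimate: difference in outcome means, baselines ignored (as in game 1).
    estimates[i] = active[:, 1].mean() - placebo[:, 1].mean()

print("mean estimate:", estimates.mean(), " SD of estimates:", estimates.std())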
44. Point estimates and confidence intervals
Baseline values not available (like game 1)
(c) Stephen Senn 2019 44
45. Point estimates and 95% confidence intervals
Baseline values available (Game 2)
(c) Stephen Senn 2019 45
46. How analysis of covariance works
• This shows ANCOVA applied to
sample 170 of the 200 simulated
• There is an imbalance at
baseline
• I have adjusted for this by fitting
two parallel lines
• The difference between the two estimates shows how an outcome value would change for a given baseline value if treatments were switched
(c) Stephen Senn 2019 46
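A hedged sketch of the adjustment in Python with statsmodels: fitting a common slope on baseline plus a treatment shift gives the two parallel lines. The data are simulated with the settings above, not the actual sample 170.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(170)

n, sd, rho, effect = 50, 150.0, 0.7, 300.0
cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
# Each row is (baseline, outcome) for one patient.
placebo = rng.multivariate_normal([2200.0, 2200.0], cov, n)
active = rng.multivariate_normal([2200.0, 2200.0 + effect], cov, n)

data = pd.DataFrame({
    "baseline": np.concatenate([placebo[:, 0], active[:, 0]]),
    "outcome": np.concatenate([placebo[:, 1], active[:, 1]]),
    "treat": [0] * n + [1] * n,      # 0 = placebo, 1 = bronchodilator
})

# ANCOVA: one common slope on baseline, one treatment shift (two parallel lines).
fit = smf.ols("outcome ~ baseline + treat", data=data).fit()
print(fit.params["treat"])           # adjusted treatment estimate
print(fit.conf_int().loc["treat"])   # its 95% confidence interval

The coefficient on treat is the adjusted estimate: the vertical distance between the two parallel lines.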
47. Lessons for big data
• We tend to treat observational data-sets as if they were badly
randomised parallel group trials but cluster-randomised trials might
be a better analogy
• True standard errors may be much bigger than estimated ones
• See Cox, Kartsonaki & Keogh (2018) and Xiao-Li Meng (2018)
• Design matters
• Beware of dreams in which mathematics triumphs over biology
• You can be rich in data but poor in information
(c) Stephen Senn 2019 47
48. A big data analyst is an expert at reaching
misleading conclusions with huge data sets,
whereas a statistician can do the same with
small ones
(c) Stephen Senn 48
49. References
(c) Stephen Senn 49
D. R. Cox, C. Kartsonaki and R. H. Keogh (2018) Big data: some statistical issues. Statistics and Probability Letters, 111-115.
X.-L. Meng (2018) Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election. The Annals of Applied Statistics, 685-726.
S. J. Senn (2013) Seven myths of randomisation in clinical trials. Statistics in Medicine, 1439-1450.
S. Senn (2013) A brief note regarding randomization. Perspectives in Biology and Medicine, 452-453.
S. J. Senn (2019) The well-adjusted statistician. Applied Clinical Trials, June 18. https://www.appliedclinicaltrialsonline.com/view/well-adjusted-statistician-analysis-covariance-explained
S. Senn (2019) John Ashworth Nelder, 8 October 1924 – 7 August 2010. The Royal Society Publishing.
A number of blogs on my blog site are also relevant: http://www.senns.uk/Blogs.html