We analyse data from the final two years of a long-running and influential annual Dutch survey of the quality of Dutch New Herring served in large samples of consumer outlets. The data were compiled and analysed by a university econometrician whose findings were publicized in national and international media. This led to the cessation of the survey amid allegations of bias due to a conflict of interest on the part of the leader of the herring tasting team. The survey organizers responded with accusations of failure of scientific integrity. The econometrician was acquitted of wrongdoing by the Dutch authority, whose inquiry nonetheless concluded that further research was needed. We reconstitute the data and uncover important features which throw new light on the econometrician's findings, focussing on the issue of correlation versus causality: the sample is definitely not a random sample. Taking account both of the newly discovered data features and of the sampling mechanism, we conclude that there is no evidence of biased evaluation, despite the econometrician's renewed insistence on his claim.
Richard's adventures in two entangled wonderlands
Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think, however, that it is a smokescreen, and the slogan "lost in math" comes to mind. I will discuss some other recent claimed disproofs of Bell's theorem, using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
A tale of two Lucys - Delft lecture - March 4, 2024
Richard Gill
TUDelft Seminar Probability & Statistics, 4 March 2024
15:45 to 16:45, Location: Lecture Hall D@TA
Lucia de Berk, a Dutch nurse, was arrested in 2001, and tried and convicted of serial murder of patients in her care. At a lower court the only hard evidence against her was the result of a probability calculation: the chance that she was present at so many suspicious deaths and collapses in the hospitals where she had worked was 1 in 342 million. During appeal proceedings at a higher court, the prosecution shifted gears and gave the impression that there was now hard evidence that she had killed one baby. Having established that she was a killer and a liar (she claimed innocence), it was not difficult to pin another nine deaths and collapses on her. No statistics were needed any more. In 2005 the conviction was confirmed by the supreme court. But at the same time, some whistleblowers started getting attention from the media. A long fight for the hearts and minds of the public, and a long fight to have the case reopened (without any new evidence - only new scientific interpretation of existing evidence), began and ended in 2010 with Lucia’s complete exoneration. A number of statisticians played a big role in that fight. The idea that the conviction was purely based on objective scientific evidence was actually an illusion. This needed to be explained to journalists and to the public. And the judiciary needed to be convinced that something had to be done about it.
Lucy Letby, an English nurse, was arrested in 2020 for the murder of a large number of babies at a hospital in Chester, UK, between January 2015 and June 2016. Her trial started in 2022 and took 10 months. She was convicted and given a whole life sentence in 2023. In my opinion, the similarities between the two cases are horrific. Again there is statistical evidence: a cluster of unexplained bad events, and Lucy was there every time; there is apparently irrefutable scientific evidence for two babies; and just like with Lucia de Berk, there are some weird personal and private writings which can be construed as a confession. For many reasons, the chances of a fair retrial for Lucy Letby are very thin indeed, but I am convinced she is innocent and that her trial was grossly unfair.
Optimal statistical analysis of Bell experiments
Richard D Gill (Mathematical Institute, Leiden University)
The 2022 Nobel prize in physics went to Clauser, Aspect and Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. I played a modest part in the process which led up to that prize by contributing statistical methodology used in the four decisive “loophole-free” experiments of 2015 and 2016. What I contributed was quite simply the idea of using randomisation in order to get guaranteed statistical validity, and martingale methods which allowed the experimenters to rule out the notion that an apparent violation of Bell’s inequality could simply be due to time trends in physical parameters over the course of an experiment which takes days to complete (confounding of treatment with time). Most recently I have studied some simple methods to reduce noise in the usual ad hoc estimators of the four correlations which figure in Bell’s inequality. Do not fear: the statistical model is very simple, and no knowledge of quantum mechanics is needed to understand the statistical issues. The talk is about the statistical analysis of four 2×2 tables.
https://www.mdpi.com/2673-9909/3/2/23
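To make that last sentence concrete, here is a minimal sketch of how the CHSH statistic S is assembled from four 2×2 tables of ±1 outcome counts. The counts below are invented purely for illustration; they are not data from any of the experiments discussed.

```python
import numpy as np

# One 2x2 table of counts per setting pair (a, b):
# rows are Alice's outcome (+1, -1), columns are Bob's outcome (+1, -1).
# All counts are made up for illustration only.
tables = {
    ("a1", "b1"): np.array([[105, 20], [22, 103]]),
    ("a1", "b2"): np.array([[98, 27], [25, 100]]),
    ("a2", "b1"): np.array([[101, 24], [26, 99]]),
    ("a2", "b2"): np.array([[23, 102], [104, 21]]),
}

def correlation(table):
    """Empirical correlation E = (N++ + N-- - N+- - N-+) / N for one table."""
    n_pp, n_pm = table[0]
    n_mp, n_mm = table[1]
    return (n_pp + n_mm - n_pm - n_mp) / table.sum()

E = {setting: correlation(t) for setting, t in tables.items()}

# CHSH combination: local hidden variable models force |S| <= 2,
# while quantum mechanics allows values up to 2*sqrt(2), about 2.83.
S = E[("a1", "b1")] + E[("a1", "b2")] + E[("a2", "b1")] - E[("a2", "b2")]
print(f"S = {S:.3f}")   # about 2.5 with these invented counts
```

The statistical questions treated in the paper concern how best to combine the four tables and attach a valid measure of uncertainty to S.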
Subtitle: "Statistical issues in the investigation of a suspected serial killer nurse"
Abstract:
Investigating a cluster of deaths on a hospital ward is a difficult task for medical investigators, police, and courts. Patients do die in hospitals (the three most common causes of death in a hospital are, in order: cancer, heart disease, medical errors). Often such cases come to the attention of the police when two things coincide: gossip about a particular nurse is circulating just as a couple of unexpected and disturbing events occur. Hospital investigators see a pattern and call in the police.
I will discuss two such cases with which I have been intensively involved. The first one is the case of the Dutch nurse Lucia de Berk. Arrested in 2001, convicted by a succession of courts up to the supreme court by 2005, after which a long fight started to get her a re-trial. She was completely exonerated in 2010. The second case is that of the English nurse Lucy Letby. Arrested in 2018, 2019 and 2020 for murders taking place in 2015 and 2016. Her trial started in 2022 and concluded with a “full life” sentence a couple of months ago.
There are many similarities between the two cases, but also a couple of disturbing differences. One difference being that Lucy Letby’s lawyers seem to have made no attempt whatsoever to defend her. Another difference is that statistics was used against Lucia de Berk but not, apparently, against Lucy Letby. But appearances are not always what they seem.
Report published by Royal Statistical Society on statistical issues in these cases
https://rss.org.uk/news-publication/news-publications/2022/section-group-reports/rss-publishes-report-on-dealing-with-uncertainty-i/
News feature in “Science” about myself and my work
https://www.science.org/content/article/unlucky-numbers-fighting-murder-convictions-rest-shoddy-stats
https://www.maths.lu.se/kalendarium/?evenemang=statistics-seminar-statistical-issues-investigation-suspected-serial-killer-nurse-richard-gill
Video: https://www.youtube.com/watch?v=RxmFLKTlim8
The RSS has published a report tackling statistical bias in criminal trials where healthcare professionals are accused of murdering patients. Following several high-profile cases where statistical evidence has been misused, the Society calls for all parties in such cases to consult with professional statisticians and use only expert witnesses who are appropriately qualified.
The report, ‘Healthcare serial killer or coincidence?’, is produced by the RSS’s Statistics and the Law Section. The group evolved from a working group of the same name set up in the early 2000s after the Society wrote to the Lord Chancellor and made a statement setting out concerns around the use of statistical evidence in the case of Sally Clark.
According to the report, suspicions about medical murder often arise due to a surprising or unexpected series of events, such as an unusual number of deaths among patients under the care of a particular professional.
The RSS has major concerns about use of this kind of evidence in a criminal investigation: first, over the analysis and interpretation of such data, and secondly over whether it can be guaranteed that the data have been compiled in an objective and unbiased manner.
I discuss various statistical analyses of the recent Bell experiment of Storz et al. (2023, Nature) at ETH Zurich. Both standard and novel analyses under different assumptions result in almost identical conclusions. This suggests strongly that those assumptions are actually satisfied.
The experimenters performed a loophole-free Bell test using superconducting qubits separated by 30 meters. They entangled pairs of qubits and measured them in randomly chosen bases over 1 million trials. They found an average S value of 2.0747 ± 0.0033, violating Bell's inequality with a p-value smaller than 10⁻¹⁰⁸, demonstrating non-local quantum correlations. This establishes superconducting circuits as a viable platform for foundational tests of quantum mechanics and applications in quantum information processing.
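As a rough back-of-the-envelope check of those numbers (my own calculation using a naive one-sided Gaussian tail; the published p-value rests on a more careful, assumption-lean bound):

```python
from math import log, pi, sqrt

S, se = 2.0747, 0.0033   # reported CHSH value and standard error
z = (S - 2) / se         # about 22.6 standard errors above the local bound of 2

# One-sided Gaussian tail on the log10 scale, via the asymptotic approximation
# P(Z > z) ~ exp(-z^2/2) / (z * sqrt(2*pi)); the tail is far below float underflow.
log10_p = (-z**2 / 2 - log(z * sqrt(2 * pi))) / log(10)
print(f"z = {z:.1f}, log10(p) is roughly {log10_p:.0f}")   # roughly -113
```

So even a crude normal approximation places the violation at roughly 22.6 standard errors above the local bound of 2, in the same ballpark as the published bound.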
Statistics, causality, and the 2022 Nobel prizes in physics.
Richard Gill
Leiden University
The 2022 Nobel prize in physics was awarded to John Clauser, Alain Aspect and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. I will explain each of these three gentlemen’s contributions and point out connections to classical statistical causality and probabilistic coupling. It seems that the first commercial application of this work will be a technology called DIQKD: "device independent quantum key distribution". Alice and Bob are far apart and need to establish a shared cryptographic key so as to send one another some securely encrypted messages over public communication channels. How can they create a suitable key while far apart from one another, and only able to communicate using classical means and over public channels?
Healthcare serial killer or coincidence?
Richard Gill
Mathematical Institute, Leiden University
Abstract: The UK’s *Royal Statistical Society* recently published a report tackling statistical bias in criminal trials where healthcare professionals are accused of murdering patients. Following several high-profile cases where statistical evidence has been misused, the Society calls for all parties in such cases to consult with professional statisticians and use only expert witnesses who are appropriately qualified.
The RSS report came out just two weeks before the start of the trial in Manchester of a nurse called Lucy Letby. The trial is still ongoing. So far, neither side has called for evidence from experts in statistics. The core of the prosecution case is that so many odd events connected to nurse LL cannot be a coincidence.
I will discuss the challenges both procedural and conceptual which arise when presenting statistical thinking as evidence in criminal trials. https://rss.org.uk/news-publication/news-publications/2022/section-group-reports/rss-publishes-report-on-dealing-with-uncertainty-i/
The Utrecht University veterinary school was commissioned by the Dutch government to provide objective criteria for breeding short-muzzled dogs. Utrecht proposed six external characteristics, rated on a traffic-light system, related to the risks of Brachycephalic Obstructive Airway Syndrome and Brachycephalic Ocular Syndrome. These included abnormal breathing, nostril shape, relative muzzle length, nasal folds, eye exposure and eyelid closure. Utrecht determined standards for each characteristic and concluded that dogs meeting certain standards could be used for breeding, while those exceeding the standards should not be, due to increased health risks. Utrecht based its recommendations on scientific studies and expertise in companion animal genetics. However, their criteria are still debated by other scientists and
1) The document discusses Marian Kupczynski's paper asking whether John Bell would choose contextuality or nonlocality today, using graphical models to represent the random variables involved.
2) It presents a graphical model showing source hidden variables, context dependent instrument hidden variables, and Alice and Bob's outcome variables that may be correlated.
3) It notes that assuming the instrument hidden variables are uncorrelated leads to the CHSH inequality holding, while allowing them to be correlated allows any four joint probability distributions, and discusses Kupczynski's consideration of the detection loophole.
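To make point 3) concrete, here is a small simulation of a local hidden variable model of my own devising (it is not Kupczynski's construction): the source variable is shared, the instrument variables are drawn independently on each wing and independently of the settings, and the settings are chosen at random. As Bell's theorem requires, the empirical CHSH value then stays at or below 2, up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

lam = rng.uniform(0, 2 * np.pi, n)    # source hidden variable, shared by both wings
mu_a = rng.uniform(size=n)            # instrument hidden variables, drawn
mu_b = rng.uniform(size=n)            # independently per wing
a = rng.integers(2, size=n)           # Alice's randomly chosen setting (0 or 1)
b = rng.integers(2, size=n)           # Bob's randomly chosen setting (0 or 1)

angles_a = np.array([0.0, np.pi / 2])
angles_b = np.array([np.pi / 4, -np.pi / 4])

def outcome(setting_angle, lam, mu):
    """Deterministic +/-1 outcome depending only on the local setting,
    the shared lambda, and the local instrument variable mu."""
    return np.where(np.cos(setting_angle - lam) > 2 * mu - 1, 1, -1)

A = outcome(angles_a[a], lam, mu_a)
B = outcome(angles_b[b], lam, mu_b)

E = np.array([[np.mean((A * B)[(a == i) & (b == j)]) for j in range(2)]
              for i in range(2)])
S = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]
print(f"S = {S:.3f}")   # roughly 1.41 for this model; no local model exceeds 2
```

Allowing the instrument variables on the two wings to be correlated with each other, or with the settings, breaks this bound, which is exactly the issue discussed above.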
This year’s Nobel prize in physics: homage to John Bell.
Richard Gill
Mathematical Institute, Leiden University.
Focussing on statistical issues, I will first sketch the history initiated by John Bell’s landmark 1964 paper “On the Einstein Podolsky Rosen paradox”, which led to the 2022 Nobel prize awarded to John Clauser, Alain Aspect and Anton Zeilinger,
https://www.nobelprize.org/prizes/physics/2022/press-release/
A breakthrough in the history was the four successful “loophole-free” Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna. These experiments pushed quantum technology to the limit and paved the way for DIQKD (“Device Independent Quantum Key Distribution”) and a quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality", and depended on brilliant later innovations: Eberhard’s discovery that less entanglement could allow stronger manifestation of quantum non-locality, and Zeilinger’s discovery of quantum teleportation, allowing entanglement between photons to be transferred to entanglement between ions or atoms and ultimately to components of manufactured semi-conductors.
I will also discuss reanalyses of the 2015+ experiments, which could have allowed the experimenters to claim even smaller p-values than the ones they published,
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
Better slides: https://www.slideshare.net/gill1109/nobelpdf-253673329
Solution to the measurement problem based on Belavkin's theory of Eventum Mechanics. There is only Schrödinger’s equation and a unitary evolution of the wave function of the universe, but we must add a Heisenberg cut to separate the past from the future (to separate particles from waves): Belavkin’s eventum mechanics. The past is a commuting sub-algebra A of the algebra of all observables B, and in the Heisenberg picture, the past history of any observable in A is also in A. Particles have definite trajectories back into the past; Eventum Mechanics defines the probability distributions of future given past. https://arxiv.org/abs/0905.2723 Schrödinger's cat meets Occam's razor (version 3: 10 Aug 2022); to appear in Entropy
The successful loophole-free Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna were milestone achievements. They pushed quantum technology to the limit and paved the way for DIQKD and quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality" (https://cds.cern.ch/record/142461/files/198009299.pdf). Still, there is a vociferous but small community of proponents of local realism, who continue to grasp at the straws offered by some shortcomings of the 2015+ experiments. I'll discuss the loopholes in the loophole-free experiments, explain how the experimenters could have claimed much smaller p-values "for free" by using standard likelihood-based inference, but also relativise the meaning of a 25 standard deviation violation of local realism. I suggest that particle physicists have wisely settled on 5 standard deviations because of Chebyshev's inequality and the fact that 1 / 5 squared = 0.04, just significant at the 5% level.
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
https://arxiv.org/abs/2208.09930 "Kupczynski's contextual setting-dependent parameters offer no escape from Bell-CHSH"
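The arithmetic behind the closing remark about five standard deviations is just Chebyshev's inequality, which holds for any distribution with finite variance:
\[
P\bigl(|X - \mu| \ge k\sigma\bigr) \le \frac{1}{k^{2}},
\qquad k = 5 \;\Longrightarrow\; \frac{1}{25} = 0.04 < 0.05,
\]
so a 5-sigma effect is (just) significant at the 5% level without any distributional assumptions, whereas under a normal approximation the corresponding one-sided tail probability would be about 3 × 10⁻⁷.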
It has long been realized that the mathematical core of Bell's theorem is essentially a classical probabilistic proof that a certain distributed computing task is impossible: namely, the Monte Carlo simulation of certain iconic quantum correlations. I will present a new and simple proof of the theorem using Fourier methods (time series analysis) which should appeal to probabilists and statisticians. I call it Gull's theorem since it was sketched in a conference talk many years ago by astrophysicist Steve Gull, but never published. Indeed, there was a gap in the proof.
The connection with the topic of this session is the following: though a useful quantum computer is perhaps still a dream, many believe that a useful quantum internet is very close indeed. The first application will be: creating shared secret random cryptographic keys which, due to the laws of physics, cannot possibly be known to any other agent. So-called loophole-free Bell experiments have already been used for this purpose.
Like other proofs of Bell's theorem, the proof concerns a thought experiment, and the thought experiment could also in principle be carried out in the lab. This connects to the concept of functional Bell inequalities, whose application in the quantum research lab has not yet been explored. This is again a task for classical statisticians to explore.
R.D. Gill (2022) Gull's theorem revisited, Entropy 2022, 24(5), 679 (11pp.)
https://www.mdpi.com/1099-4300/24/5/679
https://arxiv.org/abs/2012.00719
Statistical issues in Serial Killer Nurse cases
Richard Gill
- In serial killer nurse cases, clusters of suspicious deaths or incidents are often associated with a particular nurse on duty. However, alternative explanations for such clusters are difficult to rule out given the low base rate of nurses committing murder.
- Statistical evidence plays a key role in these cases but can be misleading if not interpreted carefully. Characteristics of the hospital system and processes of gathering evidence can inadvertently influence statistical analyses.
- Close examination of data in one case found that statistics were selectively reported in ways that exaggerated the nurse's involvement, such as restricting time periods analyzed. Complete data sets have sometimes contradicted initial statistical impressions.
The article discusses the d'Alembert betting system, one of the most popular systems used in casinos in the 19th century. While such systems appear to guarantee success by equalizing wins and losses over time, they fail to account for the risk of running out of money before wins and losses balance out. At the same time, they can provide surprisingly high returns on investment when wins do occur, obscuring the overall negative expected value. The article analyzes how systems like the d'Alembert are seductive due to the potential for large gains, despite the inevitability of overall losses in the long run.
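A minimal simulation sketch of the d'Alembert staking rule, with parameters of my own choosing (even-money bets at the single-zero roulette probability 18/37, a starting bankroll of 100 units, at most 200 rounds per session). It lets one check the article's point directly: the system often produces winning sessions, yet every bet has negative expectation, so the average final bankroll falls below the starting one and ruin is always possible.

```python
import random

def dalembert_session(bankroll=100, base_stake=1, p_win=18 / 37, max_rounds=200):
    """One session of the d'Alembert system: raise the stake by one unit after
    a loss, lower it by one unit (never below the base stake) after a win;
    stop when the rounds run out or the bankroll cannot cover the next bet."""
    stake = base_stake
    for _ in range(max_rounds):
        if stake > bankroll:          # ruined: cannot place the next bet
            break
        if random.random() < p_win:
            bankroll += stake
            stake = max(base_stake, stake - 1)
        else:
            bankroll -= stake
            stake += 1
    return bankroll

random.seed(1)
finals = [dalembert_session() for _ in range(10_000)]
print(f"fraction of sessions ending ahead: {sum(f > 100 for f in finals) / len(finals):.1%}")
print(f"mean final bankroll: {sum(finals) / len(finals):.1f} (started at 100)")
```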
Bell experiments, Bell-denialism, and the quantum Randi challenge. Séminaire de Statistique, CREST-CMAP, Monday 4 October 2021, 14:00. The Bell game, and a proof of a strong (tail probability inequality) version of the CHSH inequality, as used in the 2015 loophole-free experiments.
In the conviction of Lucia de Berk an important role was played by a simple hypergeometric model, used by the expert consulted by the court, which produced very small probabilities for the observed numbers of incidents. We want to draw attention to the fact that, if we take into account the variation among nurses in the number of incidents they experience during their shifts, these probabilities can become considerably larger. This points to the danger of using an oversimplified discrete probability model in these circumstances.
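A toy calculation illustrating the point. All numbers are invented and are not the figures from the case, and the heterogeneous calculation is a simple unconditional sketch rather than the analysis in the paper; it compares the naive hypergeometric tail probability with the same tail when the per-shift incident rate is allowed to vary between nurses while keeping the same overall mean.

```python
import numpy as np
from scipy.stats import hypergeom

# Invented toy numbers, for illustration only.
N_shifts = 1000      # shifts on the ward in the period considered
n_nurse = 250        # shifts worked by the nurse under suspicion
r_incidents = 9      # incidents on the ward in that period
k_observed = 7       # incidents that fell within this nurse's shifts

# Naive model: incidents fall on shifts completely at random, so the number
# in the nurse's shifts is hypergeometric; P(X >= k) via the survival function.
p_naive = hypergeom.sf(k_observed - 1, N_shifts, r_incidents, n_nurse)

# Heterogeneous model: innocent nurses differ in the incident rate they face
# (sicker patients, more night shifts, ...); draw a per-shift rate per nurse
# with the same mean as the naive rate, then count incidents in her shifts.
rng = np.random.default_rng(0)
base_rate = r_incidents / N_shifts
rates = rng.gamma(shape=2.0, scale=base_rate / 2.0, size=200_000)   # mean = base_rate
counts = rng.binomial(n_nurse, np.clip(rates, 0.0, 1.0))
p_hetero = np.mean(counts >= k_observed)

print(f"naive hypergeometric tail:    {p_naive:.2e}")
print(f"with between-nurse variation: {p_hetero:.2e}")
```

With these invented numbers the heterogeneous tail probability comes out far larger than the naive one, which is precisely the direction of the effect described above.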
This document discusses statistical issues related to cases involving serial killer nurses. It focuses on two specific cases - Lucia de Berk, a Dutch nurse convicted of murder but later exonerated, and Ben Geen, an English nurse convicted of murder and grievous bodily harm. In Ben Geen's case, a statistical expert was not allowed to present evidence that could have shown biases in how suspicious cases were identified. The document also provides background on other cases involving serial killer nurses to highlight the need for better understanding of this phenomenon from both statistical and criminological perspectives. Statisticians involved in such cases need to be aware of potential pitfalls.
I discuss various statistical analyses of the recent Bell experiment of Storz et al. (2023, Nature) at ETH Zurich. Both standard and novel analyses under different assumptions result in almost identical conclusions. This suggests strongly that those assumptions are actually satisfied.
The experimenters performed a loophole-free Bell test using superconducting qubits separated by 30 meters. They entangled pairs of qubits and measured them in randomly chosen bases over 1 million trials. They found an average S value of 2.0747 ± 0.0033, violating Bell's inequality with a p-value smaller than 10-108, demonstrating non-local quantum correlations. This establishes superconducting circuits as a viable platform for foundational tests of quantum mechanics and applications in quantum information processing.
- The authors performed a loophole-free Bell test experiment using superconducting circuits to violate Bell's inequality. They entangled pairs of qubits over a 30 meter distance and measured them in randomly chosen bases, accumulating over 1 million trials.
- The average S value obtained was 2.0747 ± 0.0033, violating Bell's inequality with a p-value smaller than 10-108, demonstrating non-local quantum correlations between the spatially separated qubits. This establishes superconducting circuits as a viable platform for foundational tests of quantum physics and applications in quantum information.
Statistics, causality, and the 2022 Nobel prizes in physics.
Richard Gill
Leiden University
The 2022 Nobel prize in physics was awarded to John Clauser, Alain Aspect and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. I will explain each of these three gentlemen’s contributions and point out connections to classical statistical causality and probabilistic coupling. It seems that the first commercial application of this work will be a technology called DIQKD: "device independent quantum key distribution". Alice and Bob are far apart and need to establish a shared cryptographic key so as to send one another some securely encrypted messages over public communication channels. How can they create a suitable key while far apart from one another, and only able to communicate using classical means and over public channels?
Healthcare serial killer or coincidence?
Richard Gill
Mathematical Institute, Leiden University
Abstract: The UK’s *Royal Statistical Society* recently published a report tackling statistical bias in criminal trials where healthcare professionals are accused of murdering patients. Following several high-profile cases where statistical evidence has been misused, the Society calls for all parties in such cases to consult with professional statisticians and use only expert witnesses who are appropriately qualified.
The RSS report came out just two weeks before the start of the trial in Manchester of a nurse called Lucy Letby. The trial is still ongoing. So far, neither side has called for evidence from experts in statistics. The core of the prosecution case is that so many odd events connected to nurse LL cannot be a coincidence.
I will discuss the challenges both procedural and conceptual which arise when presenting statistical thinking as evidence in criminal trials. https://rss.org.uk/news-publication/news-publications/2022/section-group-reports/rss-publishes-report-on-dealing-with-uncertainty-i/
The Utrecht University veterinary school was commissioned by the Dutch government to provide objective criteria for breeding short-muzzled dogs. Utrecht proposed 6 external characteristics rated on a traffic light system related to risks of Brachycephalic Obstructive Airway Syndrome and Brachycephalic Ocular Syndrome. These included abnormal breathing, nostril shape, relative muzzle length, nasal folds, eye exposure and eyelid closure. Utrecht determined standards for each characteristic and concluded that dogs meeting certain standards could be used for breeding while those exceeding the standards should not due to increased health risks. Utrecht based their recommendations on scientific studies and expertise in companion animal genetics. However, their criteria are still debated by other scientists and
1) The document discusses Marian Kupczynski's paper on whether John Bell would choose contextuality or nonlocality today based on graphical models representing random variables.
2) It presents a graphical model showing source hidden variables, context dependent instrument hidden variables, and Alice and Bob's outcome variables that may be correlated.
3) It notes that assuming the instrument hidden variables are uncorrelated leads to the CHSH inequality holding, while allowing them to be correlated allows any four joint probability distributions, and discusses Kupczynski's consideration of the detection loophole.
This year’s Nobel prize in physics: homage to John Bell.
Richard Gill
Mathematical Institute, Leiden University.
Focussing on statistical issues, I will first sketch the history initiated by John Bell’s landmark 1964 paper “On the Einstein Podolsky Rosen paradox”, which led to the 2022 Nobel prize awarded to John Clauser, Alain Aspect and Anton Zeilinger,
https://www.nobelprize.org/prizes/physics/2022/press-release/
A breakthrough in the history was the four successful “loophole-free” Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna. These experiments pushed quantum technology to the limit and paved the way for DIQKD (“Device Independent Quantum Key Distribution”) and a quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality", and depended on brilliant later innovations: Eberhard’s discovery that less entanglement could allow stronger manifestation of quantum non-locality, and Zeilinger’s discovery of quantum teleportation, allowing entanglement between photons to be transferred to entanglement between ions or atoms and ultimately to components of manufactured semi-conductors.
I will also discuss reanalyses of the 2015+ experiments, which could have allowed the experimenters to claim even smaller p-values than the ones they published,
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
This year’s Nobel prize in physics: homage to John Bell.
Richard Gill
Mathematical Institute, Leiden University.
Focussing on statistical issues, I will first sketch the history initiated by John Bell’s landmark 1964 paper “On the Einstein Podolsky Rosen paradox”, which led to the 2022 Nobel prize awarded to John Clauser, Alain Aspect and Anton Zeilinger,
https://www.nobelprize.org/prizes/physics/2022/press-release/
A breakthrough in the history was the four successful “loophole-free” Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna. These experiments pushed quantum technology to the limit and paved the way for DIQKD (“Device Independent Quantum Key Distribution”) and a quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality", and depended on brilliant later innovations: Eberhard’s discovery that less entanglement could allow stronger manifestation of quantum non-locality, and Zeilinger’s discovery of quantum teleportation, allowing entanglement between photons to be transferred to entanglement between ions or atoms and ultimately to components of manufactured semi-conductors.
I will also discuss reanalyses of the 2015+ experiments, which could have allowed the experimenters to claim even smaller p-values than the ones they published,
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
Better slides: https://www.slideshare.net/gill1109/nobelpdf-253673329
Solution to the measurement problem based on Belavkin's theory of Eventum Mechanics. There is only Schrödinger’s equation and a unitary evolution of the wave function of the universe, but we must add a Heisenberg cut to separate the past from the future (to separate particles from waves): Belavkin’s eventum mechanics. The past is a commuting sub-algebra A of the algebra of all observables B, and in the Heisenberg picture, the past history of any observable in A is also in A. Particles have definite trajectories back into the past; Eventum Mechanics defines the probability distributions of future given past. https://arxiv.org/abs/0905.2723 Schrödinger's cat meets Occam's razor (version 3: 10 Aug 2022); to appear in Entropy
The successful loophole-free Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna were milestone achievements. They pushed quantum technology to the limit and paved the way for DIQKD and quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality" (https://cds.cern.ch/record/142461/files/198009299.pdf). Still, there is a vociferous but small community of proponents of local realism, who continue to grasp at the straws offered by some shortcomings of the 2015+ experiments. I'll discuss the loopholes in the loophole-free experiments, explain how the experimenters could have claimed much smaller p-values "for free" by using standard likelihood-based inference, but also relativise the meaning of a 25 standard deviation violation of local realism. I suggest that particle physicists have wisely settled on 5 standard deviations because of Chebyshev's inequality and the fact that 1 / 5 squared = 0.04, just significant at the 5% level.
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
https://arxiv.org/abs/2208.09930 "Kupczynski's contextual setting-dependent parameters offer no escape from Bell-CHSH"
The successful loophole-free Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna were milestone achievements. They pushed quantum technology to the limit and paved the way for DIQKD and quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality" (https://cds.cern.ch/record/142461/files/198009299.pdf). Still, there is a vociferous but small community of proponents of local realism, who continue to grasp at the straws offered by some shortcomings of the 2015+ experiments. I'll discuss the loopholes in the loophole-free experiments, explain how the experimenters could have claimed much smaller p-values "for free" by using standard likelihood-based inference, but also relativise the meaning of a 25 standard deviation violation of local realism. I suggest that particle physicists have wisely settled on 5 standard deviations because of Chebyshev's inequality and the fact that 1 / 5 squared = 0.04, just significant at the 5% level.
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
https://arxiv.org/abs/2208.09930 "Kupczynski's contextual setting-dependent parameters offer no escape from Bell-CHSH"
It has long been realized that the mathematical core of Bell's theorem is essentially a classical probabilistic proof that a certain distributed computing task is impossible: namely, the Monte Carlo simulation of certain iconic quantum correlations. I will present a new and simple proof of the theorem using Fourier methods (time series analysis) which should appeal to probabilists and statisticians. I call it Gull's theorem since it was sketched in a conference talk many years ago by astrophysicist Steve Gull, but never published. Indeed, there was a gap in the proof.
The connection with the topic of this session is the following: though a useful quantum computer is perhaps still a dream, many believe that a useful quantum internet is very close indeed. The first application will be: creating shared secret random cryptographic keys which, due to the laws of physics, cannot possibly be known to any other agent. So-called loophole-free Bell experiments have already been used for this purpose.
Like other proofs of Bell's theorem, the proof concerns a thought experiment, and the thought experiment could also in principle be carried out in the lab. This connects to the concept of functional Bell inequalities, whose application in the quantum research lab has not yet been explored. This is again a task for classical statisticians to explore.
R.D. Gill (2022) Gull's theorem revisited, Entropy 2022, 24(5), 679 (11pp.)
https://www.mdpi.com/1099-4300/24/5/679
https://arxiv.org/abs/2012.00719
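As a complement to the abstract above, here is a minimal R sketch of the classical core of the theorem (it is not Gull's Fourier argument): whatever a local, deterministic simulation outputs, the CHSH combination of correlations cannot exceed 2, whereas quantum mechanics attains 2*sqrt(2).

```r
# Enumerate all local deterministic strategies for a CHSH experiment:
# Alice's two possible outputs (A1, A2) and Bob's (B1, B2), each +/-1.
vals <- c(-1, 1)
grid <- expand.grid(A1 = vals, A2 = vals, B1 = vals, B2 = vals)
S <- with(grid, A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2)
max(abs(S))   # 2: the CHSH bound; no local deterministic strategy exceeds it
2 * sqrt(2)   # Tsirelson's bound, attained by quantum mechanics
```

Mixtures over such strategies (that is, over hidden variables) cannot do better than the deterministic maximum, which is exactly why a distributed Monte Carlo simulation of the singlet correlations is impossible.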
Statistical issues in Serial Killer Nurse cases
Richard Gill
- In serial killer nurse cases, clusters of suspicious deaths or incidents are often associated with a particular nurse on duty. However, alternative explanations for such clusters are difficult to rule out given the low base rate of nurses committing murder.
- Statistical evidence plays a key role in these cases but can be misleading if not interpreted carefully. Characteristics of the hospital system and processes of gathering evidence can inadvertently influence statistical analyses.
- Close examination of data in one case found that statistics were selectively reported in ways that exaggerated the nurse's involvement, such as restricting time periods analyzed. Complete data sets have sometimes contradicted initial statistical impressions.
The article discusses the d'Alembert betting system, one of the most popular systems used in casinos in the 19th century. While such systems appear to guarantee success by equalizing wins and losses over time, they fail to account for the risk of running out of money before wins and losses balance out. At the same time, they can deliver surprisingly high returns on investment when wins do occur, which obscures the overall negative expected value. The article analyses how systems like the d'Alembert are seductive because of the potential for large gains, despite the inevitability of overall losses in the long run.
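A small simulation makes the seduction concrete. The sketch below is an editorial illustration, not taken from the article: it plays the d'Alembert system (raise the stake by one unit after a loss, lower it by one after a win) on an even-money roulette bet with win probability 18/37, with a finite bankroll and a modest profit target.

```r
# d'Alembert on an unfavourable even-money bet: most sessions end with a
# small profit, but occasional ruin makes the long-run average negative.
set.seed(2)
p_win <- 18 / 37
play_session <- function(bankroll = 100, target = 20, max_bets = 1000) {
  stake <- 1
  wealth <- bankroll
  for (i in seq_len(max_bets)) {
    stake <- min(stake, wealth)        # never bet more than we still have
    if (runif(1) < p_win) {
      wealth <- wealth + stake
      stake <- max(1, stake - 1)       # decrease the stake after a win
    } else {
      wealth <- wealth - stake
      stake <- stake + 1               # increase the stake after a loss
    }
    if (wealth <= 0 || wealth >= bankroll + target) break
  }
  wealth - bankroll                    # net result of the session
}
results <- replicate(10000, play_session())
mean(results > 0)   # proportion of winning sessions: deceptively high
mean(results)       # average net result: negative, as the article argues
```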
Bell experiments, Bell-denialism, and the quantum Randi challenge. Séminaire de Statistique, CREST-CMAP, Monday 4 October 2021, 14:00. The Bell game, and proof of a strong (tail-probability inequality) version of the CHSH inequality, as used in the 2015 loophole-free experiments.
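For readers unfamiliar with the tail-probability formulation, one standard version (stated here from general principles; the seminar and the cited 2015 analyses use martingale variants that also allow dependence between trials) is: under local realism the probability of winning a single round of the CHSH game is at most 3/4, so the observed win fraction over n rounds satisfies

```latex
\Pr\Bigl(\hat p_n \ge \tfrac{3}{4} + \varepsilon \Bigr)
\;\le\; \exp\!\bigl(-2 n \varepsilon^{2}\bigr)
```

by Hoeffding's inequality, while quantum mechanics allows a win probability of cos^2(pi/8), roughly 0.85, so the win fraction alone separates the two hypotheses at an exponential rate.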
In the conviction of Lucia de Berk an important role was played by a simple hypergeometric model, used by the expert consulted by the court, which produced very small probabilities of occurrences of certain numbers of incidents. We want to draw attention to the fact that, if we take into account the variation among nurses in incidents they experience during their shifts, these probabilities can become considerably larger. This points to the danger of using an oversimplified discrete probability model in these circumstances.
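A toy calculation illustrates the point; the numbers below are invented for illustration and have nothing to do with the actual case data. Under the simple hypergeometric model the incidents are spread completely at random over all shifts; if instead the incident rate on this nurse's shifts is allowed to be higher for innocent reasons (sicker patients, busier wards), the same count is far less surprising.

```r
# Invented numbers, for illustration only.
N <- 1000; n <- 200   # total shifts on the ward; shifts worked by the nurse
K <- 8;    k <- 7     # total incidents; incidents during her shifts
# (i) Simple hypergeometric model: incidents fall on shifts at random.
p_hyper <- phyper(k - 1, n, N - n, K, lower.tail = FALSE)
# (ii) Heterogeneity: incidents land on her shifts with probability
#      q = r*n / (r*n + (N - n)), where r > 1 is an innocent rate ratio.
p_hetero <- function(r) {
  q <- r * n / (r * n + (N - n))
  pbinom(k - 1, K, q, lower.tail = FALSE)
}
p_hyper        # of the order of 1e-4
p_hetero(3)    # roughly a hundred times larger
```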
This document discusses statistical issues related to cases involving serial killer nurses. It focuses on two specific cases - Lucia de Berk, a Dutch nurse convicted of murder but later exonerated, and Ben Geen, an English nurse convicted of murder and grievous bodily harm. In Ben Geen's case, a statistical expert was not allowed to present evidence that could have shown biases in how suspicious cases were identified. The document also provides background on other cases involving serial killer nurses to highlight the need for better understanding of this phenomenon from both statistical and criminological perspectives. Statisticians involved in such cases need to be aware of potential pitfalls.
1. Richard Gill, Mathematical Institute, Leiden University. 16 November, 2022
The Dutch new herring scandals
Repeated measurements with unintended feedback
2.
3. At some point in the latter half of the 14th century Willem Beukelszoon began to go about things differently to other fishermen or fish curers. He is credited with discovering something called kaken, or "gibbing": a method of gutting and deboning herring that left parts of its stomach and another of its internal organs, the pyloric caecae, intact. Removing the guts and bones took away the bits that would begin to rot first, whilst the remaining pyloric caecae would continue to emit an enzyme called trypsin. The herring would then be thrown into brine and basically pickle in its own juices.
4. Smell them and weep
Netherlands fishmongers accuse herring-tasters of erring
Can the Dutch still trust their herring-tasters?
Print edition | Europe | Nov 23rd 2017 | AMSTERDAM
HERRING (genus Clupea, with four species found in the Baltic and North Seas) have been vital to northern Europe's economy since the Middle Ages, when fishermen worked out how to preserve them in brine. Every north European country maintains that there is a right way to eat the fish, but they differ as to what it is. In Sweden Baltic surströmming are fermented until slightly rancid. In Denmark the sill are pickled, or cooked and eaten in long strips. In the Netherlands haring must be lightly salted for preservation but otherwise raw, dipped in minced onion and accompanied with a pickle. No food is more loved.
So the Dutch were shocked when accusations surfaced in November that there was something rotten about the national herring test. The test, sponsored by the Algemeen Dagblad, a newspaper, is carried out by two expert tasters, who each year rate the herring at over a hundred shops and stands across the country. Ben Vollaard, an economist at Tilburg University, was surprised when his respected local fishmonger scored zero. The merchant told Mr Vollaard that one judge routinely tipped the scales, giving higher scores to stores that get their fish from the Atlantic Group, a distributor in Scheveningen. The judge happened to be a consultant for Atlantic, giving courses on how to slice and serve herring.
"I saw how much damage a low rating could do. The judges act like God," says Mr Vollaard, who specialises in using statistics to detect crime. He decided to run the numbers. The ratings include objective criteria, like weight and fattiness, and subjective ones such as taste and appearance. The economist contacted 85% of the shops surveyed in the past two years and asked who their distributors were. He found that whereas the overall average score was 5.5, the average for those supplied by Atlantic was 8.7. The extra boost for the Atlantic stores came mainly from the subjective scores.
Mr Vollaard's study has blown the lid off the sealed world of Dutch herring. Fishmongers who long suspected the judge of bias towards Atlantic now say the test is rotten. Two who received low ratings have vowed to sue the Algemeen Dagblad for defamation.
The judge and Atlantic say they have been smeared, and that the statistical evidence is a red herring. They say Mr Vollaard's figures are off, and that their high scores are due to their superior fish. But the charges of belangenverstrengeling (conflict of interest) have left the test's reputation for impartiality gutted.
This article appeared in the Europe section of the print edition under the headline "Failing the smell test"
5. • Celebrating many years' collaboration with Per Kragh Andersen and Ørnulf Borgan, whose retirement party I had to miss (pandemic)
• In loving memory of the leader of the pack, Niels Keiding
• May the royalty checks for "the weightlifter's guide to counting processes" (ABGK) continue to come in, for many years to come!
ABGK for ever!
6.
7.
8.
9.
10.
11. • Pitfalls of amateur regression: The Dutch New Herring controversies
• Fengnan Gao & Richard D. Gill
• https://arxiv.org/abs/2104.00333
• Submitted to SJS
• Previously rejected by Statistica Neerlandica, Lifetime Data Analysis, and JRSS(A)
• SJS: "The journal specializes in statistical modelling showing particular appreciation of the underlying substantive research problems"
• "The emergence of specialized methods for analysing longitudinal and spatial data is just one example of an area of important methodological development in which the Scandinavian Journal of Statistics has a particular niche"
Longitudinal and spatial data with enormous societal impact
Dutch New Herring
12. Was the AD Herring Test biased?
The substantive research problem
• AD: Algemeen Dagblad, a popular Dutch daily newspaper, based in Rotterdam
• The Herring Test: a yearly ranking of consumer outlets of "Dutch New Herring"
• Dutch New Herring ("Hollandse maatjes"): an EU-protected designation for herring prepared according to ancient Dutch tradition
• Ordinary consumers (readers of AD) nominated their local favourite food market stalls, supermarkets, fish shops; highly ranked outlets were automatically included in the next year's test
• A high ranking was like a Michelin star. A low ranking was the kiss of death.
15. The person who started this all
"I heard something strange about the AD Herring Test. I pulled the data together to see how it all worked. Things came out of that that I thought, 'this can't be true'. The test panel had a business interest in a herring supplier, Atlantic. Coincidence or not, whoever got the fish from Atlantic scored significantly higher. I am proud of the fact that my research has helped put an end to this."
Ben Vollaard ("BV"), Tilburg
16. How did BV's work reach The Economist?
Answer: PR activities of his university
• BV published statistical analyses as two "working papers" on his university web pages
• Tilburg University put out a press release, both times
• BV appeared on current affairs chat shows on national TV, and the results were reported in the newspapers
• The AD cancelled the "AD Herring Test", brought in lawyers, and made complaints to BV and to his university. Aggrieved herring outlets started legal action against AD.
• The AD's lawyer hired me …
17. • Atlantic's "Dutch New Herring" is objectively the best, according to the well-known criteria by which the AD Herring Testers evaluate Dutch new herring:
• Well-cleaned
• Prepared on site
• Neither too "ripe" [matured] nor too "unripe" (cf. venison, whisky, cheese)
• Temperature not too cold, not too warm (cf. hygiene regulations)
• No microbiological contamination
• High fat percentage, high weight (per fish)
• Atlantic's Dutch New Herring is also the most expensive! (Euros per gram)
Coincidence?
No: Atlantic's Dutch New Herring is the best!
18. No: Atlantic's Dutch New Herring is the best!
Coincidence?
• Acknowledgement of conflict of interest: I was paid by AD to help them in their complaint of violation of scientific integrity against Ben Vollaard
• I claim that I gave them my independent scientific opinion on their case
• I argued that econometrician Vollaard was incompetent. It is not for me to judge his integrity
• The Netherlands national organ for evaluation of complaints of violation of scientific integrity ruled that Vollaard was innocent, but that the scientific issues needed further research
• Hence I continued my research with essential collaboration of Fengnan Gao (replication of entire research project, including data extraction from original sources)
20. Vollaard's first paper
Add indicator variable "< 30 km from Rotterdam"
• Positive effect, significant at 5% level, size of effect ca. 0.5
• Strangely, he also had a covariate "among top 10"
• It was also rather significant, positive
21. Vollaard's second paper
New variable "outlet supplied by Atlantic"
• He did not add it as a covariate
• Instead, he showed that according to his fitted model, the difference between "Atlantic" outlets and the rest is mainly due to the subjective variables "ripeness" and "cleaning". Two objective variables, "temperature" and "being freshly served", have some importance, but less; and the other objective variables are not important at all.
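To make the type of analysis concrete: below is a purely illustrative linear regression in R with synthetic data and hypothetical variable names (it is not Vollaard's specification, his data, or his results). It shows the kind of model in which a supplier dummy and a distance-from-Rotterdam dummy sit alongside the test criteria.

```r
# Synthetic data; variable names are hypothetical.
set.seed(3)
n <- 200
herring <- data.frame(
  atlantic       = rbinom(n, 1, 0.1),   # supplied by Atlantic?
  near_rotterdam = rbinom(n, 1, 0.4),   # within 30 km of Rotterdam?
  temperature    = rnorm(n, 7, 2),      # degrees Celsius at serving
  weight         = rnorm(n, 75, 5)      # grams per fish
)
herring$final_score <- with(herring,
  5 + 1.5 * near_rotterdam - 0.3 * (temperature - 7) + rnorm(n))
fit <- lm(final_score ~ atlantic + near_rotterdam + temperature + weight,
          data = herring)
summary(fit)$coefficients   # which dummies come out "significant"?
```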
22. From report 2 (quoted passages translated from the Dutch):
"Outlets with Atlantic as supplier score a 9.1 on average. The other outlets score a failing grade, on average just under 5.5. We see little difference in average score between the other suppliers."
Figure 1. Average score of outlets on the 2016/17 herring test, by supplier. [Bar chart, two panels: "All outlets" (n1 + n2 = 292) shows a 3.6-point difference between Atlantic-supplied and other outlets; "Only outlets with herring of sufficient microbiological condition" (n1 + n2 = 292 - 37) shows a 3.3-point difference.]
"This difference in final grade may be amplified if the group that does not have Atlantic as supplier contains relatively many rotten apples: businesses that sell herring which really should not be sold. These rotten apples could pull the average down strongly. To investigate this we use the microbiological condition of the herring as determined by the laboratory and reported by the …"
23. Table 1 (translated from the Dutch). Why outlets that do not have Atlantic as supplier score lower on the herring test.
Columns: explanatory factor; difference between outlets with and without Atlantic as supplier; unit; effect on final grade (0-10).
Weight: -2.17 gram; effect -0.09
Microbiological condition ((very) good = reference category): sufficient 0.07 share, effect -0.01; poor 0.03 share, -0.02; warning phase 0.09 share, -0.01; rejected 0.02 share, -0.04; combined -0.07
Fat percentage (below 10% = reference category): between 10 and 14% -0.21 share, -0.04; above 14% -0.02 share, -0.01; combined -0.05
Temperature (below 7°C = reference category): between 7 and 10°C 0.29 share, -0.17; above 10°C 0.21 share, -0.36; combined -0.52
Freshly sliced (not = reference category): yes -0.29 share, -0.51
Cleaning (very good = reference category): good 0.27 share, -0.22; moderate 0.25 share, -0.40; poor 0.04 share, -0.12; combined -0.73
Ripening (light = reference category): medium -0.24 share, +0.09; strong 0.34 share, -0.63; spoiled 0.06 share, -0.25; combined -0.80
Region (within a 30 km radius of Rotterdam = reference category): more than 30 km from Rotterdam 0.60 share, -0.18
Top 10 (top 10 = reference category): …
25. [Histogram: horizontal axis "Final test score" (0-10), vertical axis "Number of outlets" (0-35)]
26. The model was wrong
• The testers gave a score "zero" if the fish should not have been sold:
• temperature > 10 °C
• microbiological contamination = health danger
• maturity = rotten
27. More difficulties
• Two years' data combined; some? many? outlets appear in both years' data
• Misclassification of "Atlantic"-supplied outlets (deduced from inconsistency of summary statistics: one outlet got a "6" in 2016)
• Vollaard tested the effect of "near to Rotterdam" with a dummy variable, but the effect of "supplied by Atlantic" with a faulty subject-matter argument (according to him, maturity and microbiology should have been the same). But "ripening" is a chemical process, "contamination" is biological
• The dummy variable for "Atlantic" was not significant!
28. • Vollaard would not give us the un-anonymised data
• AD gave us their classification of "Atlantic"
• We went back to the original data
29. Better model
• Score zero = "disqualified"
• We study outlets with score ≥ 1
• We omit outlets' 2nd observation if they are tested in both years
• Result: much better fit, much smaller standard error of residuals
• Overall picture unchanged
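A sketch of this cleaning step in dplyr (one of the packages listed at the end of the deck); the mini data set and the column names are hypothetical, but the logic matches the bullets above: drop disqualified outlets and keep only an outlet's first appearance.

```r
library(dplyr)
# Hypothetical stand-in for the combined 2016/2017 file.
herring_both_years <- tibble::tribble(
  ~outlet_id, ~year, ~final_score,
  "A",        2016,  8.0,
  "A",        2017,  7.5,
  "B",        2016,  0.0,   # score zero = disqualified
  "C",        2017,  6.0
)
cleaned <- herring_both_years %>%
  filter(final_score >= 1) %>%              # drop disqualified outlets
  arrange(outlet_id, year) %>%
  distinct(outlet_id, .keep_all = TRUE)     # drop an outlet's second year
cleaned
```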
30. More discoveries
• BV had incorrectly merged the lowest ("green") and the highest ("rotten") category of "ripening"
• BV had mapped scores like "7–" and "7+" to 7. We tried "7 +/– epsilon" for various choices of epsilon (0.01, 0.1). Tiny changes had dramatic effects on the statistical significance of "distance from Rotterdam" and "Atlantic"
• "Rotterdam" went out, "Atlantic" came in!
• Attempts to improve the model (e.g., interaction between the most important variables) failed due to multicollinearity
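A sketch of the recoding behind that sensitivity check (a hypothetical helper, not the code actually used): map a trailing "-" or "+" to minus or plus epsilon and refit for several values of epsilon.

```r
# Map scores such as "7-" and "7+" to 7 - eps and 7 + eps respectively.
recode_score <- function(raw, eps) {
  base <- as.numeric(gsub("[+-]$", "", raw))
  base + eps * grepl("\\+$", raw) - eps * grepl("-$", raw)
}
recode_score(c("7-", "7", "7+"), eps = 0.1)   # 6.9 7.0 7.1
# Refit the regression for eps in c(0, 0.01, 0.1) and compare the p-values
# of "distance from Rotterdam" and "supplied by Atlantic".
```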
31. Conclusions for Dutch new herring
• The data was not a random sample; it was self-selected. Most participants were new participants and often came from areas where the AD herring test was little known
• These newcomers often performed poorly
• There is absolutely no evidence of bias toward Atlantic outlets
• There is little evidence for a regional bias
• "Atlantic" is the best!
32. Take-home messages
• In linear regression analysis where multicollinearity is present, regression estimates are highly sensitive to small perturbations in model specifications
• Too much applied statistics applies a conventional method-of-choice to a data set without any thought about the data-generation mechanism
• What I learned from ABGK: doing science is fun and it's a social activity
• Scandinavian statistics has found the right balance between theory and application
• Thanks Per (and also Niels and Ørnulf!)
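The first take-home message can be seen in a small toy example (an editorial illustration, not the herring data): with two nearly collinear predictors, adding or dropping one of them swings the individual coefficients wildly, even though the fitted values barely change.

```r
set.seed(4)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)   # almost a copy of x1
y  <- x1 + rnorm(n)
coef(lm(y ~ x1))                 # x1 coefficient close to 1
coef(lm(y ~ x1 + x2))            # x1 and x2 coefficients individually wild;
                                 # only their sum is estimated stably
```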
37. Vollaard's anonymised & combined data set
AD Herring Test websites 2016, 2017
AD identification of Vollaard's vendors
R packages: tidyverse, dplyr, ggplot2, cbsodataR, ggmap
[Map: vendors of 2016 and 2017, Dutch New Herring, coloured by year, with a legend for population density by municipality (log-scale breaks 54.6, 148.4, 403.4, 1096.6, 2981.0)]