In this talk I want to explain to you Bell’s inequality and Bell’s theorem. John Bell (1964) literally added a new twist — of 45 degrees, actually — to the famous EPR argument (Einstein, Podolsky and Rosen, 1935), which was supposed to show the incompleteness of quantum mechanics and vanquish Bohr. Bell’s twist apparently showed that either quantum mechanics is wrong, or nature is shockingly non-local. His findings have apparently been vindicated, in favour of quantum non-locality, by experiment; the most famous being that of Alain Aspect et al. (1982), who tested the so-called Bell-CHSH inequality (Clauser, Horne, Shimony and Holt’s 1969 variant). However, that experiment (and in fact every experiment to date!) has slight defects, so-called loopholes. They actually did the wrong experiment, because if they had done the right experiment, they wouldn’t have got the results they were looking for! Only now, 50 years on, are experimenters on the threshold of definitively proving quantum non-locality … as far as experiment ever proves anything. In particular, Delft is in the race, and there are even plans to perform the experiment with Alice and Bob (it’s always about Alice and Bob) located in Leiden and Delft. Enter: probability and statistics. Rutherford famously once said “if you needed statistics, you did the wrong experiment” (Rutherford was wrong, several times in fact).
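The CHSH quantity mentioned above can be illustrated with a short calculation (a minimal sketch of my own, not taken from the talk; the correlation formula E(α, β) = cos 2(α − β) and the angles are the textbook ones for polarization-entangled photons): under local realism |S| ≤ 2, while quantum mechanics predicts S = 2√2 at Bell's 45-degree settings.

```python
import math
from itertools import product

def E(alpha, beta):
    """Quantum correlation for polarization-entangled photons (Phi+ state)."""
    return math.cos(2 * math.radians(alpha - beta))

# Bell's "twist": measurement angles spaced 22.5 degrees apart (in degrees).
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5

# CHSH combination of the four correlations.
S_quantum = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S_quantum)  # 2*sqrt(2) ~ 2.828, beyond the local bound of 2

# Local realism: outcomes are pre-determined +/-1 values for each setting.
# Enumerating all 16 deterministic strategies shows |S| <= 2.
S_local_max = max(
    A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    for A1, A2, B1, B2 in product([-1, 1], repeat=4)
)
print(S_local_max)  # 2
```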

The fundamental problem ... of the fundamental problem of forensic statistics: nonparametric Bayes with a two-parameter Poisson–Dirichlet prior, as a solution of the very rare type match problem.
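As a sketch of what the two-parameter Poisson–Dirichlet prior looks like as a generative model, one can simulate its sequential form, the two-parameter Chinese restaurant process (the parameter names and values below are purely illustrative, not those of the talk):

```python
import random

def crp2(n, d=0.5, theta=1.0, seed=42):
    """Sample cluster (type) sizes from the two-parameter Chinese restaurant
    process, the sequential form of the Poisson-Dirichlet(d, theta) prior.
    Customer n+1 joins existing table i with probability (n_i - d)/(n + theta)
    and starts a new table with probability (theta + d*k)/(n + theta)."""
    rng = random.Random(seed)
    tables = []  # sizes n_i of the k occupied tables
    for n_seen in range(n):
        k = len(tables)
        p_new = (theta + d * k) / (n_seen + theta)
        u = rng.random()
        if u < p_new:
            tables.append(1)
        else:
            # pick an existing table with probability proportional to n_i - d
            u = (u - p_new) * (n_seen + theta)  # rescale to sum of (n_i - d)
            for i, n_i in enumerate(tables):
                u -= n_i - d
                if u < 0:
                    tables[i] += 1
                    break
            else:
                tables[-1] += 1  # guard against floating-point round-off
    return sorted(tables, reverse=True)

sizes = crp2(1000)
print(len(sizes), sizes[:5])  # many rare types, a few common ones
```

The power-law behaviour of the type frequencies (many singletons, a few very common types) is exactly what makes this prior attractive for rare-haplotype problems.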

I presented novel statistical analyses of the data of the famous Bell-inequality experiments of 2015 and 2016: Delft, NIST, Vienna and Munich. Every statistical analysis relies on statistical assumptions. I make the traditional, but questionable, i.i.d. assumptions. They justify a novel (?) analysis which is both simple and (close to) optimal. It enables us to fairly compare the results of the two main types of experiments: the NIST and Vienna CH-Eberhard “one-channel” experiments, with target settings and state chosen to optimise the handling of the detection loophole (detector efficiency > 66.7%), and the Delft and Munich CHSH “two-channel” experiments based on entanglement swapping, with the target state and settings which achieve the Tsirelson bound (detector efficiency ≈ 100%). One cannot say which type of experiment is better without agreeing on how to compromise between the desires to obtain high statistical significance and high physical significance. Moreover, robustness to deviations from the traditional assumptions is also an issue. I also discussed my current opinions on the question: what should we now believe about locality, realism and the foundations of quantum mechanics? My provisional conclusion is “exquisite/angelic spukhafte Fernwirkung” … but tempered with a quantum Buddhist point of view: nothing is real. This was a talk at the 2019 Växjö conference QIRIF.
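Under i.i.d. assumptions like those mentioned above, a back-of-the-envelope significance calculation for a CHSH experiment can be sketched as follows (the counts, the function name and the normal approximation are my own illustration, not the analysis from the talk; each correlation is estimated from the trials of one setting pair, and a z-value measures the distance of the estimate above the local bound 2):

```python
import math

def chsh_significance(n_equal, n_trials):
    """Estimate S-hat and a normal-approximation z-value for S-hat > 2.
    n_equal[j]: trials with equal outcomes for setting pair j
                (order: ab, ab', a'b, a'b');
    n_trials[j]: total trials for setting pair j."""
    signs = [+1, -1, +1, +1]   # CHSH: E(ab) - E(ab') + E(a'b) + E(a'b')
    E_hat, var = [], 0.0
    for n_eq, n in zip(n_equal, n_trials):
        E = (2 * n_eq - n) / n      # correlation estimate in [-1, 1]
        E_hat.append(E)
        var += (1 - E * E) / n      # variance of each correlation estimate
    S_hat = sum(s * E for s, E in zip(signs, E_hat))
    return S_hat, (S_hat - 2) / math.sqrt(var)

# Hypothetical counts: correlation +0.7 on three pairs, -0.7 on the ab' pair.
S_hat, z = chsh_significance([850, 150, 850, 850], [1000, 1000, 1000, 1000])
print(S_hat, z)  # S_hat = 2.8, z ~ 17.7
```

Even this crude sketch shows the trade-off in the abstract: more trials buy statistical significance, while the achievable correlations (and hence the physical significance of the violation) are fixed by the state, settings and detector efficiency.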

I plan to present some simple and, as far as I know, novel statistical analyses of the data of the famous Bell-type experiments of 2015 and 2016: Delft, NIST, Vienna and Munich. Every statistical analysis relies on statistical assumptions. I’ll make some quite strong (and obviously naive) assumptions which do however justify a very simple but unconventional analysis, and which enable us to compare the results of the two main types of experiments: the traditional Bell-CHSH type experimental set-up, but with settings and state chosen to somehow “optimise” the handling of the detection loophole, and the experiments based on entanglement swapping, which do however aim at creating the traditionally optimal state and settings for such experiments. One cannot say which type of experiment is better without agreeing on how to compromise between the desires to obtain high statistical significance and high physical significance. I’ll also discuss my current opinions on the question: what should we now believe about locality, realism and the foundations of quantum mechanics? My provisional conclusion is “spukhafte Fernwirkung”. This is a talk at the 2019 Växjö conference QIRIF.

First and incomplete draft of slides of a talk on the 2nd edition of Judea Pearl's "Causality"

Not an axiom, and not always true: the most controversial of the “graphoid axioms” of conditional independence. New results from algebraic geometry, viewed in the light of mathematical probability theory.
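The controversial axiom is presumably the intersection property, which is guaranteed only under a strictly positive joint density; a minimal numerical counterexample (my own illustration, not taken from the slides): let X = Y = W be one and the same fair coin. Then X ⊥ Y | W and X ⊥ W | Y hold trivially, yet the intersection property's conclusion X ⊥ (Y, W) fails badly.

```python
from itertools import product

# Degenerate joint law on (X, Y, W): X = Y = W = one fair coin toss.
pmf = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}

def marg(point, idx):
    """P(variables with indices in idx take the values point[i], i in idx)."""
    return sum(pr for v, pr in pmf.items() if all(v[i] == point[i] for i in idx))

def ci(a, b, c):
    """Check vars[a] _||_ vars[b] | vars[c] by verifying the factorisation
    P(a,b,c) * P(c) == P(a,c) * P(b,c) at every point of the support cube."""
    for v in product([0, 1], repeat=3):
        lhs = marg(v, a + b + c) * marg(v, c)
        rhs = marg(v, a + c) * marg(v, b + c)
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

X, Y, W = [0], [1], [2]
print(ci(X, Y, W))       # True:  X _||_ Y | W
print(ci(X, W, Y))       # True:  X _||_ W | Y
print(ci(X, Y + W, []))  # False: X _||_ (Y, W) does not follow
```

The failure lives entirely on the zeros of the joint distribution, which is why positivity conditions keep reappearing in this corner of the theory.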

Talk about my experiences as a mushroom hunter. Concentrates on an old favourite, fly agaric, and a strange new visitor to the Netherlands: the Jack-O'-Lantern mushroom. Connections to Data Science, Machine Learning, Big Data and AI. Also environmental issues.

Issues of statistics vs causality, and scientific integrity vs scientific competence, in connection with the AD Haringtest

Talk about my experiences as a mushroom hunter. Concentrates on an old favourite, fly agaric, and a strange new visitor to the Netherlands: the Jack-O'-Lantern mushroom. Added, same day: connections to Data Science, Machine Learning, Big Data and AI. But then I discovered that Slideshare no longer allows you to update the *slides* of a talk. I am now looking for an alternative provider which has heard of the notion of "version control". Email me if you want to see the latest version.

Slides for invited talk in session on "Causality" (Section on Statistics in Epidemiology) at JSM, Seattle, 11 August 2015

Han Geurdes' slides of his talk at the Växjö conference “Quantum Theory: Advances and Problems”, Monday 10 June 2013. Reference: Han Geurdes, “A probability loophole in the CHSH”, Results in Physics 4 (2014), pp. 81–82, doi:10.1016/j.rinp.2014.06.002. I analyse this alleged disproof of Bell's theorem in my own paper, http://arxiv.org/abs/1506.00223

Is there a world-wide epidemic of “health-care serial killers” (killer nurses)? Or is there an epidemic of falsely accused health-care serial killers? Analysis of the case of Lucia de Berk, together with a discussion of the role of statistics in that case, and in forensic statistics in general.

Bhikkhu Anālayo's powerpoint slides accompanying his talk at Spirit Rock Meditation Center, 2011-10-16, "Mindfulness According to Early Buddhist Sources" (2:37:12) http://www.dharmaseed.org/teacher/439/talk/14214/

These slides are part of a talk on Buddhism and quantum mechanics. Do they have anything to do with one another? Answer: yes, but not what most people seem to think.

Introduction to the Neyman-Pearson hypothesis testing paradigm
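By the Neyman-Pearson lemma, the most powerful test of a simple null against a simple alternative rejects for large values of the likelihood ratio. A minimal worked example (my own, with purely illustrative numbers): testing H0: p = 0.5 against H1: p = 0.7 for n = 20 coin tosses, the likelihood ratio is increasing in the success count, so the test rejects when that count exceeds a cutoff, chosen as small as possible subject to size ≤ α = 0.05.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p0, p1, alpha = 20, 0.5, 0.7, 0.05

# The likelihood ratio (p1/p0)^x * ((1-p1)/(1-p0))^(n-x) is increasing in x
# (since p1 > p0), so the Neyman-Pearson test rejects for x >= some cutoff.
k_star = next(k for k in range(n + 1) if binom_tail(n, p0, k) <= alpha)
size = binom_tail(n, p0, k_star)     # actual type-I error probability
power = binom_tail(n, p1, k_star)    # P(reject | H1 true)
print(k_star, size, power)  # cutoff 15; size ~ 0.021; power under p = 0.7
```

Because the binomial is discrete, the achieved size (≈ 0.021) is strictly below the nominal α; randomised tests would close the gap exactly, at the cost of common sense.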

A walk in the Black Forest, during which I explain the fundamental problem of forensic statistics and discuss some new approaches to solving it.

Two new estimators of the evidential value of a rare haplotype match for a Y-STR DNA profile are proposed. One is based on the Good-Turing coverage estimator, the other on the Orlitsky et al. MLE of the spectrum of a probability distribution (the vector of probabilities ordered from large to small), based on the observed spectrum (the vector of observed frequencies ordered from large to small). These two are compared to a model-based approach, the “discrete Laplace” mixture model. “Less is more”: it can pay off to discard some of the information (the actual haplotypes) … the true likelihood ratio is lower, but the precision with which it can be estimated is much, much better.
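The Good-Turing coverage estimator mentioned here estimates the total probability of as-yet-unseen types as n1/N, where n1 is the number of types observed exactly once in a database of size N. A minimal sketch (the toy database of haplotype labels is invented for illustration):

```python
from collections import Counter

def good_turing_unseen(sample):
    """Good-Turing estimate of the total probability mass of types
    not present in the sample: (number of singletons) / (sample size)."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)  # types seen exactly once
    return n1 / len(sample)

# Toy "database" of haplotype labels (hypothetical).
db = ["h1", "h1", "h1", "h2", "h2", "h3", "h4", "h5"]
print(good_turing_unseen(db))  # 3 singletons out of 8 observations -> 0.375
```

Note that the estimator uses only the frequencies of frequencies, not the identities of the haplotypes, which is precisely the "less is more" point of the abstract.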

Invited talk at 2014 Växjö quantum foundations conference

Scientific integrity, scientific fraud, questionable research practices, Smeesters affair, Geraerts affair, Förster affair: lecture at Willem Heiser farewell symposium.
