The experimenters performed a loophole-free Bell test using superconducting qubits separated by 30 meters. They entangled pairs of qubits and measured them in randomly chosen bases over 1 million trials. They found an average S value of 2.0747 ± 0.0033, violating Bell's inequality with a p-value smaller than 10⁻¹⁰⁸, demonstrating non-local quantum correlations. This establishes superconducting circuits as a viable platform for foundational tests of quantum mechanics and applications in quantum information processing.
- The authors performed a loophole-free Bell test experiment using superconducting circuits to violate Bell's inequality. They entangled pairs of qubits over a 30 meter distance and measured them in randomly chosen bases, accumulating over 1 million trials.
- The average S value obtained was 2.0747 ± 0.0033, violating Bell's inequality with a p-value smaller than 10⁻¹⁰⁸, demonstrating non-local quantum correlations between the spatially separated qubits. This establishes superconducting circuits as a viable platform for foundational tests of quantum physics and applications in quantum information. (A back-of-the-envelope significance calculation is sketched below.)
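As a rough check (not the experimenters' own analysis, which used more careful methods), the quoted S and its standard error can be turned into an approximate significance level under a simple i.i.d. Gaussian approximation:

```python
from math import erfc, sqrt

# Reported CHSH statistic and standard error from the summary above.
S = 2.0747
se = 0.0033

# Local realism requires S <= 2; the excess over 2, in standard errors,
# gives a rough z-score under an i.i.d. Gaussian approximation.
z = (S - 2.0) / se
print(f"z = {z:.1f} standard deviations above the local bound")

# One-sided Gaussian tail probability; for z around 22.6 this is far below
# 10^-100, consistent with the p < 10^-108 quoted above.
p = 0.5 * erfc(z / sqrt(2))
print(f"approximate one-sided p-value = {p:.1e}")
```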
When spatial data are distributed across multiple servers, there is an obvious difficulty with computing the likelihood function without combining all the data onto one server. Therefore, it would be of interest to compute estimates of the spatial parameters based on decompositions of the spatial field into blocks, each block corresponding to one server. Two methods suggest themselves: a "between blocks" approach, in which each block is reduced to a single observation (or a low-dimensional summary) to facilitate calculation of a likelihood across blocks, or a "within blocks" approach, in which the likelihood is calculated for each block and then combined into an overall likelihood for the full process. In fact, I argue that a hybrid approach that combines both ideas is best. Theoretical calculations are provided for the statistical efficiency of each approach. In conclusion, I will present some thoughts on optimal sampling designs with distributed data.
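To make the "within blocks" idea concrete, here is a minimal sketch under an assumed Gaussian process with exponential covariance; the model, function names, and parameters are illustrative choices rather than taken from the talk, and the cross-block information that the "between blocks" and hybrid approaches recover is deliberately ignored here.

```python
import numpy as np

def exp_cov(coords, sigma2, rho):
    """Exponential covariance matrix for one block; a stand-in spatial model."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / rho)

def block_loglik(y, coords, sigma2, rho):
    """Gaussian log-likelihood of one block, computable locally on its server."""
    C = exp_cov(coords, sigma2, rho)
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (logdet + quad + len(y) * np.log(2 * np.pi))

def within_blocks_loglik(blocks, sigma2, rho):
    """'Within blocks' composite log-likelihood: each server evaluates its own
    block's likelihood and only the scalar results are combined, so no raw data
    ever leave a server. Cross-block dependence is ignored, which is the source
    of the efficiency loss that the theoretical calculations quantify."""
    return sum(block_loglik(y, c, sigma2, rho) for y, c in blocks)
```

In practice one would maximise this sum over the spatial parameters; the efficiency comparison with the full and "between blocks" likelihoods is exactly what the abstract is about.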
Optimal statistical analysis of Bell experiments
Richard D Gill (Mathematical Institute, Leiden University)
The 2022 Nobel prize in physics went to Clauser, Aspect and Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. I played a modest part in the process which led up to that prize by contributing statistical methodology used in the four decisive “loophole-free” experiments of 2015 and 2016. What I contributed was quite simply the idea of using randomisation in order to get guaranteed statistical validity, and martingale methods which allowed the experimenters to rule out the notion that an apparent violation of Bell’s inequality could simply be due to time trends in physical parameters over the course of an experiment which takes days to complete (confounding of treatment with time). Most recently I have studied some simple methods to reduce noise in the usual ad hoc estimators of the four correlations which figure in Bell’s inequality. Do not fear: the statistical model is very simple, no knowledge of quantum mechanics is needed to understand the statistical issues. The talk is about the statistical analysis of four 2×2 tables.
https://www.mdpi.com/2673-9909/3/2/23
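To see what "the statistical analysis of four 2×2 tables" amounts to in practice, here is a minimal sketch of the usual ad hoc estimator referred to in the abstract: each setting pair gives a 2×2 table of ±1 outcome counts, each table gives an empirical correlation, and the four correlations are combined into the CHSH statistic S. This is only the plain estimator, not Gill's improved, lower-noise estimators, which are described in the linked paper.

```python
import numpy as np

def correlation(table):
    """table[i][j]: counts with row i = Alice's outcome (+1, -1) and
    column j = Bob's outcome (+1, -1) for one pair of settings."""
    t = np.asarray(table, dtype=float)
    # Empirical E[AB]: agreements minus disagreements, divided by the total count.
    return (t[0, 0] + t[1, 1] - t[0, 1] - t[1, 0]) / t.sum()

def chsh_S(tables):
    """tables: dict keyed by setting pair (x, y), x, y in {1, 2}, each a 2x2 count table.
    Local realism implies |S| <= 2; quantum mechanics allows up to 2*sqrt(2)."""
    return (correlation(tables[(1, 1)]) + correlation(tables[(1, 2)])
            + correlation(tables[(2, 1)]) - correlation(tables[(2, 2)]))
```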
Development of a test statistic for testing equality of two means under unequ... (Alexander Decker)
This document proposes a new test statistic for testing the equality of two means from independent samples with unequal variances (known as the Behrens-Fisher problem). It develops a test that uses the harmonic mean of the sample variances instead of the pooled variance. Through simulation, it determines the degrees of freedom for the distribution of the harmonic mean that allows it to be approximated by the chi-square distribution. It then provides an example application to agricultural yield data to demonstrate how the new test statistic can be used.
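A rough sketch of a statistic of the kind described, with the pooled variance replaced by the harmonic mean of the two sample variances. The exact form of the statistic and, in particular, the degrees of freedom (which the paper calibrates by simulation against a chi-square approximation) are assumptions here; the Welch-Satterthwaite value below is only a placeholder to make the example runnable.

```python
import numpy as np
from scipy import stats

def harmonic_mean_test(x, y):
    """Illustrative two-sample test using the harmonic mean of the sample variances
    in place of the pooled variance (Behrens-Fisher setting). The degrees of freedom
    are a Welch-Satterthwaite placeholder, not the paper's simulation-calibrated value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    vh = 2.0 / (1.0 / v1 + 1.0 / v2)  # harmonic mean of the two sample variances
    t = (x.mean() - y.mean()) / np.sqrt(vh * (1.0 / n1 + 1.0 / n2))
    df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p
```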
The document discusses experiments testing Bell's theorem and quantum nonlocality. It notes that while experiments have shown violations of Bell inequalities for over 50 years, all previous experiments suffered from loopholes such as memory effects, detection inefficiency, and limited statistics. The text advocates for "evidence based physics" with rigorous experiments that close all loopholes through randomization, blinding procedures, intention-to-treat analysis, and treating individual time slots rather than particle pairs as the experimental unit. Upcoming experiments in Delft and jointly in Delft and Leiden aim to finally achieve a loophole-free test of Bell inequality violations.
Yet another statistical analysis of the data of the ‘loophole free’ experime... (Richard Gill)
I presented novel statistical analyses of the data of the famous Bell-inequality experiments of 2015 and 2016: Delft, NIST, Vienna and Munich. Every statistical analysis relies on statistical assumptions. I’ll make the traditional, but questionable, i.i.d. assumptions. They justify a novel (?) analysis which is both simple and (close to) optimal.
It enables us to fairly compare the results of the two main types of experiments: the NIST and Vienna CH-Eberhard “one-channel” experiments, with target settings and state chosen to optimise the handling of the detection loophole (detector efficiency > 66.7%); and the Delft and Munich CHSH “two-channel” experiments based on entanglement swapping, with the target state and settings which achieve the Tsirelson bound (detector efficiency ≈ 100%).
One cannot say which type of experiment is better without agreeing on how to compromise between the desires to obtain high statistical significance and high physical significance. Moreover, robustness to deviations from traditional assumptions is also an issue.
I also discussed my current opinions on the question: what should we now believe about locality and realism and the foundations of quantum mechanics? My provisional conclusion is "exquisite/angelic spukhafte Fernwirkung" ... but tempered with a quantum Buddhist point of view - nothing is real. This was a talk at the 2019 Växjö conference QIRIF.
I discuss various statistical analyses of the recent Bell experiment of Storz et al. (2023, Nature) at ETH Zurich. Both standard and novel analyses under different assumptions result in almost identical conclusions. This suggests strongly that those assumptions are actually satisfied.
This chapter discusses statistical inferences about two populations. It covers testing hypotheses and constructing confidence intervals about:
1) The difference in two population means using the z-statistic and t-statistic.
2) The difference in two related populations when the differences are normally distributed.
3) The difference in two population proportions.
4) Two population variances when the populations are normally distributed.
The chapter presents the z-test for differences in two means and the t-test for independent and related samples. It also discusses tests and intervals for differences in proportions and variances. Sample problems and solutions are provided to illustrate the concepts and computations.
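As a minimal illustration of case 1) above, here is a sketch of a large-sample z-test and confidence interval for the difference of two population means; the example numbers at the end are made up, and the chapter's own worked problems cover the t-statistic and the other cases.

```python
from statistics import NormalDist

def two_mean_z_test(mean1, mean2, var1, var2, n1, n2, alpha=0.05):
    """Large-sample z-test and (1 - alpha) confidence interval for mu1 - mu2."""
    nd = NormalDist()
    se = (var1 / n1 + var2 / n2) ** 0.5
    diff = mean1 - mean2
    z = diff / se
    p = 2 * (1 - nd.cdf(abs(z)))          # two-sided p-value
    z_crit = nd.inv_cdf(1 - alpha / 2)    # e.g. about 1.96 at the 5% level
    return z, p, (diff - z_crit * se, diff + z_crit * se)

# Example: sample means 102 and 99, variances 25 and 30, sample sizes 60 and 50.
print(two_mean_z_test(102, 99, 25, 30, 60, 50))
```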
The successful loophole-free Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna were milestone achievements. They pushed quantum technology to the limit and paved the way for DIQKD and the quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality" (https://cds.cern.ch/record/142461/files/198009299.pdf). Still, there is a vociferous but small community of proponents of local realism, who continue to grasp at the straws offered by some shortcomings of the 2015+ experiments. I'll discuss the loopholes in the loophole-free experiments, explain how the experimenters could have claimed much smaller p-values "for free" by using standard likelihood-based inference, but also relativise the meaning of a 25 standard deviation violation of local realism. I suggest that particle physicists have wisely settled on 5 standard deviations because of Chebyshev's inequality and the fact that 1/5² = 0.04, just significant at the 5% level.
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
https://arxiv.org/abs/2208.09930 "Kupczynski's contextual setting-dependent parameters offer no escape from Bell-CHSH"
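A tiny numerical illustration of the abstract's closing remark: Chebyshev's inequality bounds the chance of a k-standard-deviation fluctuation by 1/k² for any distribution, which is what makes 5 sigma "just significant at the 5% level" without distributional assumptions, while the Gaussian tail at the same number of sigmas is vastly smaller.

```python
from math import erfc, sqrt

for k in (5, 25):
    chebyshev = 1 / k**2            # distribution-free bound on P(|X - mu| >= k*sigma)
    gaussian = erfc(k / sqrt(2))    # two-sided tail probability for a normal distribution
    print(f"k = {k:2d}: Chebyshev bound = {chebyshev:.2g}, Gaussian tail = {gaussian:.1e}")
```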
The Student's t-test is used to compare the means of two samples and determine if they are statistically different. In an example, biomass measurements were taken from four replicates each of bacterium A and bacterium B. The mean, variance, and t-value were calculated and found to exceed the critical t-value at a significance level of 5%, indicating the means were significantly different. Therefore, the null hypothesis that the samples do not differ can be rejected.
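The same kind of calculation in code, with made-up biomass numbers purely for the sake of a runnable example (they are not the document's data):

```python
from scipy import stats

# Hypothetical biomass measurements, four replicates per bacterium (illustrative only).
bacterium_A = [520, 570, 560, 550]
bacterium_B = [410, 440, 420, 450]

# Classical two-sample t-test assuming equal variances, as in the example described above.
t_stat, p_value = stats.ttest_ind(bacterium_A, bacterium_B, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05 (equivalently, |t| exceeds the critical value at the 5% level),
# reject the null hypothesis that the two means do not differ.
```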
Presented at Evolution 2013, June 24; describes an approach to teaching population genetics at the upper undergraduate/beginning graduate level, using simulations based in R and incorporating available large genomic data sets.
This document discusses extracting cosmological parameters from supernova data through cosmography, which involves fitting the Hubble relation to distance-redshift data with minimal theoretical assumptions. It finds that estimates of acceleration are strongly model-dependent based on the distance scale and redshift variable used. While the data suggests an accelerating universe, uncertainties, particularly systematic ones, are large enough that acceleration cannot be considered proven beyond reasonable doubt based on supernova data alone.
This document provides biographical information about Richard Gill and summarizes his interests and career. It lists Gill's family background, education history, current position at Leiden University, and research interests such as cycling, nature, art, Buddhism, fighting injustice, forensic statistics, and quantum foundations. It also briefly describes some controversies Gill has faced in his career regarding criminal investigations and threats of legal action due to his work on scientific integrity and forensic statistics.
Metric Projections to Identify Critical Points in Electric Power Systems (theijes)
The identification of the weak nodes and branches involved has been analyzed with different analysis techniques, such as sensitivities, modal analysis, and the minimum singular value, applying the Jacobian matrix of load flows. We present a metric projections application to identify weak nodes and branches with greater participation in the electric power system.
Molecular dynamics simulations allow researchers to model biomolecular mechanisms across wide ranges of time and length scales. The simulations integrate Newton's laws of motion over discrete timesteps to generate molecular trajectories. Force fields are used to define potential energies and forces in the system. While all-atom representation and applicability to diverse systems are advantages, the simulations are computationally expensive and limited in system size. The document provides examples of using molecular dynamics and small angle X-ray scattering to study protein folding, gene regulation, and protein structural ensembles.
Chapter 3: Describing, Exploring, and Comparing Data
3.3: Measures of Relative Standing and Boxplots
This document discusses calibrating photovoltaic (PV) device models from current-voltage (I-V) curve measurements taken under varying irradiance and temperature conditions. It introduces effective irradiance and temperature ratios, F and H, which allow a single diode model to be calibrated using collections of I-V curves. Validation with real and synthetic I-V matrix data shows the model can accurately estimate PV device parameters. Potential applications include using calibrated devices with I-V curve tracers as weather stations to stochastically tune satellite irradiance and temperature datasets.
Conception of a new Syndrome Block for BCH codes with hardware Implementation... (IJERA Editor)
Error-correcting codes are required for reliable communication through a medium with an unacceptable bit error rate and a low signal-to-noise ratio. Data get corrupted during transmission and reception due to noise and interference. The Bose, Chaudhuri, and Hocquenghem (BCH) codes are widely used in a variety of communication and storage systems. In this paper, a simplified algorithm for BCH decoding is proposed with a view to reducing the number of iterations for error detection in the syndrome calculator block of BCH decoders by up to 80% compared to the basic syndrome block. First, we developed the design of the proposed algorithm; second, we generated and simulated the hardware description language source code using Quartus software tools; and finally, we implemented the new syndrome block algorithm on an FPGA card.
This document proposes a new algorithm for calculating syndromes in Bose-Chaudhuri-Hocquenghem (BCH) error correcting codes. The algorithm aims to reduce the number of iterations required in the syndrome calculation block of BCH decoders. It presents a modified syndrome calculation circuit that uses a factorization method. Simulation results show the new algorithm can reduce the number of iterations by up to 80% compared to the basic syndrome calculation circuit, though it may require more logic gates. The algorithm and its hardware implementation on an FPGA card are described.
The document discusses key aspects of the scientific method including observations, hypotheses, experiments, analysis, and theories. It explains that the scientific method involves making observations, asking questions, developing hypotheses, testing hypotheses through experiments, analyzing data, and drawing conclusions to support or revise hypotheses. The document also covers measurements and units in the metric system, significant figures, and basic calculations involving conversions between units.
Kyeong Soo Kim, "Clock skew compensation algorithm immune to floating-point precision loss," Invited talk, to be delivered at 2022 International Workshop on Mathematics and Its Applications, Chungnam National University (CNU), Daejeon, Korea, August 4-5, 2022.
A non local linear dynamical system and violation of Bell’s inequality (Fausto Intilla)
Abstract: A simple classical non-local dynamical system with random initial conditions and an output projecting the state variable onto selected axes has been defined to mimic a two-channel quantum coincidence experiment. Non-locality is introduced by a parameter connecting the initial conditions to the selection of the projection axes. The statistics of the results show violations of up to 100% of Bell’s inequality, in the Clauser-Horne-Shimony-Holt (CHSH) form, strongly depending on the non-locality parameter. Discussion of the parallelism with Bohmian mechanics is given.
T. Lucas Makinen x Imperial SBI Workshop (LucasMakinen1)
1) The document discusses using neural networks to compress cosmological simulations into informative summaries or statistics in order to perform inference on cosmological parameters.
2) It describes using "Information Maximizing Neural Networks" which are trained to maximize the Fisher information of the summaries with respect to the parameters in order to capture the most cosmologically relevant information.
3) The document also introduces using graph neural networks to represent cosmological simulations as graphs with nodes for halos and edges for their connections, finding that the graph structure captures cosmological information in a way that is insensitive to network architecture details.
The International Journal of Engineering and Science (The IJES) (theijes)
This document summarizes a research paper that proposes a novel approach to improving the k-means clustering algorithm. The standard k-means algorithm is computationally expensive and produces results that depend heavily on the initial centroid selection. The proposed approach determines initial centroids systematically and uses a heuristic to efficiently assign data points to clusters. It improves both the accuracy and efficiency of k-means clustering by ensuring the entire process takes O(n²) time without sacrificing cluster quality.
1. The document discusses various advanced clustering analysis methods for handling high-dimensional and complex data types.
2. It covers probability-based clustering models, clustering high-dimensional data by addressing challenges like the curse of dimensionality, and clustering graphs and networks.
3. Advanced methods discussed include mixture models, model-based clustering using EM algorithm, subspace clustering to find clusters existing in subspaces, and clustering with constraints.
Investigation on the Pattern Synthesis of Subarray Weights for Low EMI Applic... (IOSRJECE)
In modern radar applications, it is frequently required to produce sum and difference patterns sequentially. The sum pattern amplitude coefficients are obtained using the Dolph-Chebyshev synthesis method, whereas the difference pattern excitation coefficients are optimized in the present work. For this purpose, optimal group weights are introduced to the different array elements to obtain any type of beam depending on the application. Optimization of the excitation of the array elements is the main objective, so a subarray configuration is adopted. The Differential Evolution algorithm is applied as the optimization method. The proposed method is reliable and accurate, and is superior to other methods in terms of convergence speed and robustness. Numerical and simulation results are presented.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2015 and 2016 and the Nobel prize in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
A tale of two Lucys - Delft lecture - March 4, 2024 (Richard Gill)
TUDelft Seminar Probability & Statistics, 4 March 2024
15:45 to 16:45 - Location: Lecture Hall D@TA
Lucia de Berk, a Dutch nurse, was arrested in 2001, and tried and convicted of serial murder of patients in her care. At a lower court the only hard evidence against her was the result of a probability calculation: the chance that she was present at so many suspicious deaths and collapses in the hospitals where she had worked was 1 in 342 million. During appeal proceedings at a higher court, the prosecution shifted gears and gave the impression that there was now hard evidence that she had killed one baby. Having established that she was a killer and a liar (she claimed innocence) it was not difficult to pin another 9 deaths and collapses on her. No statistics were needed any more. In 2005 the conviction was confirmed by the supreme court. But at the same time, some whistleblowers started getting attention from the media. A long fight for the hearts and minds of the public, and a long fight to have the case reopened (without any new evidence - only new scientific interpretation of existing evidence) began and ended in 2010 with Lucia’s complete exoneration. A number of statisticians played a big role in that fight. The idea that the conviction was purely based on objective scientific evidence was actually an illusion. This needed to be explained to journalists & to the public. And the judiciary needed to be convinced that something had to be done about it. Lucy Letby, an English nurse, was arrested in 2020 for murder of a large number of babies at a hospital in Chester, UK, in Jan 2015-June 2016. Her trial started in 2022 and took 10 months. She was convicted and given a whole life sentence in 2023. In my opinion, the similarities between the two cases are horrific. Again there is statistical evidence: a cluster of unexplained bad events, and Lucy was there every time; there is apparently irrefutable scientific evidence for two babies; and just like with Lucia de Berk, there are some weird personal and private writings which can be construed as a confession. For many reasons, the chances of a fair retrial for Lucy Letby are very thin indeed, but I am convinced she is innocent and that her trial was grossly unfair.
Subtitle: "Statistical issues in the investigation of a suspected serial killer nurse"
Abstract:
Investigating a cluster of deaths on a hospital ward is a difficult task for medical investigators, police, and courts. Patients do die in hospitals (the three most common causes of deaths in a hospital are, in order: cancer, heart disease, medical errors). Often such cases come to the attention of police investigators for two reasons: gossip about a particular nurse is circulating just as a couple of unexpected and disturbing events occur. Hospital investigators see a pattern and call in the police.
I will discuss two such cases with which I have been intensively involved. The first one is the case of the Dutch nurse Lucia de Berk. Arrested in 2001, convicted by a succession of courts up to the supreme court by 2005, after which a long fight started to get her a re-trial. She was completely exonerated in 2010. The second case is that of the English nurse Lucy Letby. Arrested in 2018, 2019 and 2020 for murders taking place in 2015 and 2016. Her trial started in 2022 and concluded with a “full life” sentence a couple of months ago.
There are many similarities between the two cases, but also a couple of disturbing differences. One difference being that Lucy Letby’s lawyers seem to have made no attempt whatsoever to defend her. Another difference is that statistics was used against Lucia de Berk but not, apparently, against Lucy Letby. But appearances are not always what they seem.
Report published by Royal Statistical Society on statistical issues in these cases
https://rss.org.uk/news-publication/news-publications/2022/section-group-reports/rss-publishes-report-on-dealing-with-uncertainty-i/
News feature in “Science” about myself and my work
https://www.science.org/content/article/unlucky-numbers-fighting-murder-convictions-rest-shoddy-stats
https://www.maths.lu.se/kalendarium/?evenemang=statistics-seminar-statistical-issues-investigation-suspected-serial-killer-nurse-richard-gill
Video: https://www.youtube.com/watch?v=RxmFLKTlim8
The RSS has published a report tackling statistical bias in criminal trials where healthcare professionals are accused of murdering patients. Following several high-profile cases where statistical evidence has been misused, the Society calls for all parties in such cases to consult with professional statisticians and use only expert witnesses who are appropriately qualified.
The report, ‘Healthcare serial killer or coincidence?’, is produced by the RSS’s Statistics and the Law Section. The group evolved from a working group of the same name, set up in the early 2000s after the Society wrote to the Lord Chancellor and made a statement setting out concerns around the use of statistical evidence in the case of Sally Clark.
According to the report, suspicions about medical murder often arise due to a surprising or unexpected series of events, such as an unusual number of deaths among patients under the care of a particular professional.
The RSS has major concerns about use of this kind of evidence in a criminal investigation: first, over the analysis and interpretation of such data, and secondly over whether it can be guaranteed that the data have been compiled in an objective and unbiased manner.
Statistics, causality, and the 2022 Nobel prizes in physics.
Richard Gill
Leiden University
The 2022 Nobel prize in physics was awarded to John Clauser, Alain Aspect and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. I will explain each of these three gentlemen’s contributions and point out connections to classical statistical causality and probabilistic coupling. It seems that the first commercial application of this work will be a technology called DIQKD: "device independent quantum key distribution". Alice and Bob are far apart and need to establish a shared cryptographic key so as to send one another some securely encrypted messages over public communication channels. How can they create a suitable key while far apart from one another, and only able to communicate using classical means and over public channels?
Healthcare serial killer or coincidence?
Richard Gill
Mathematical Institute, Leiden University
Abstract: The UK’s *Royal Statistical Society* recently published a report tackling statistical bias in criminal trials where healthcare professionals are accused of murdering patients. Following several high-profile cases where statistical evidence has been misused, the Society calls for all parties in such cases to consult with professional statisticians and use only expert witnesses who are appropriately qualified.
The RSS report came out just two weeks before the start of the trial in Manchester of a nurse called Lucy Letby. The trial is still ongoing. So far, neither side has called for evidence from experts in statistics. The core of the prosecution case is that so many odd events connected to nurse LL cannot be a coincidence.
I will discuss the challenges both procedural and conceptual which arise when presenting statistical thinking as evidence in criminal trials. https://rss.org.uk/news-publication/news-publications/2022/section-group-reports/rss-publishes-report-on-dealing-with-uncertainty-i/
The Utrecht University veterinary school was commissioned by the Dutch government to provide objective criteria for breeding short-muzzled dogs. Utrecht proposed 6 external characteristics rated on a traffic light system related to risks of Brachycephalic Obstructive Airway Syndrome and Brachycephalic Ocular Syndrome. These included abnormal breathing, nostril shape, relative muzzle length, nasal folds, eye exposure and eyelid closure. Utrecht determined standards for each characteristic and concluded that dogs meeting certain standards could be used for breeding while those exceeding the standards should not due to increased health risks. Utrecht based their recommendations on scientific studies and expertise in companion animal genetics. However, their criteria are still debated by other scientists and
1) The document discusses Marian Kupczynski's paper on whether John Bell would choose contextuality or nonlocality today based on graphical models representing random variables.
2) It presents a graphical model showing source hidden variables, context dependent instrument hidden variables, and Alice and Bob's outcome variables that may be correlated.
3) It notes that assuming the instrument hidden variables are uncorrelated leads to the CHSH inequality holding, while allowing them to be correlated allows any four joint probability distributions, and discusses Kupczynski's consideration of the detection loophole. (A toy simulation of the CHSH bound for a local model is sketched below.)
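As a toy illustration of the CHSH bound mentioned in item 3), the Monte Carlo sketch below simulates a local hidden-variable model in which each outcome depends only on the local setting and a shared hidden variable; the measurement angles and response rule are arbitrary illustrative choices, and any such model yields |S| ≤ 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary measurement angles for Alice's settings (1, 2) and Bob's settings (1, 2);
# the specific values are illustrative only.
a_settings = {1: 0.0, 2: np.pi / 4}
b_settings = {1: np.pi / 8, 2: 3 * np.pi / 8}

def correlation(ax, by, n=200_000):
    """Monte Carlo estimate of E[A*B] for one setting pair under a local model:
    each outcome is a deterministic function of the shared hidden variable lam
    and the local setting only (no dependence on the other side's setting)."""
    lam = rng.uniform(0, 2 * np.pi, size=n)
    A = np.where(np.cos(a_settings[ax] - lam) >= 0, 1, -1)
    B = np.where(np.cos(b_settings[by] - lam) >= 0, 1, -1)
    return np.mean(A * B)

# CHSH combination: local hidden-variable models of this kind satisfy |S| <= 2,
# whereas quantum mechanics allows values up to 2*sqrt(2).
S = correlation(1, 1) + correlation(1, 2) + correlation(2, 1) - correlation(2, 2)
print(f"Simulated CHSH value for this local model: S = {S:.3f} (local bound: 2)")
```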
We analyse data from the final two years of a long-running and influential annual Dutch survey of the quality of Dutch New Herring served in large samples of consumer outlets. The data was compiled and analysed by a university econometrician whose findings were publicized in national and international media. This led to the cessation of the survey amid allegations of bias due to a conflict of interest on the part of the leader of the herring tasting team. The survey organizers responded with accusations of failure of scientific integrity. The econometrician was acquitted of wrong-doing by the Dutch authority, whose inquiry nonetheless concluded that further research was needed. We reconstitute the data and uncover its important features which throw new light on the econometrician's findings, focussing on the issue of correlation versus causality: the sample is definitely not a random sample. Taking account both of newly discovered data features and of the sampling mechanism, we conclude that there is no evidence of biased evaluation, despite the econometrician's renewed insistence on his claim.
This year’s Nobel prize in physics: homage to John Bell.
Richard Gill
Mathematical Institute, Leiden University.
Focussing on statistical issues, I will first sketch the history initiated by John Bell’s landmark 1964 paper “On the Einstein Podolsky Rosen paradox”, which led to the 2022 Nobel prize awarded to John Clauser, Alain Aspect and Anton Zeilinger,
https://www.nobelprize.org/prizes/physics/2022/press-release/
A breakthrough in the history was the four successful “loophole-free” Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna. These experiments pushed quantum technology to the limit and paved the way for DIQKD (“Device Independent Quantum Key Distribution”) and a quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality", and depended on brilliant later innovations: Eberhard’s discovery that less entanglement could allow stronger manifestation of quantum non-locality, and Zeilinger’s discovery of quantum teleportation, allowing entanglement between photons to be transferred to entanglement between ions or atoms and ultimately to components of manufactured semi-conductors.
I will also discuss reanalyses of the 2015+ experiments, which could have allowed the experimenters to claim even smaller p-values than the ones they published,
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
Better slides: https://www.slideshare.net/gill1109/nobelpdf-253673329
Solution to the measurement problem based on Belavkin's theory of Eventum Mechanics. There is only Schrödinger’s equation and a unitary evolution of the wave function of the universe, but we must add a Heisenberg cut to separate the past from the future (to separate particles from waves): Belavkin’s eventum mechanics. The past is a commuting sub-algebra A of the algebra of all observables B, and in the Heisenberg picture, the past history of any observable in A is also in A. Particles have definite trajectories back into the past; Eventum Mechanics defines the probability distributions of future given past. https://arxiv.org/abs/0905.2723 Schrödinger's cat meets Occam's razor (version 3: 10 Aug 2022); to appear in Entropy
The successful loophole-free Bell experiments of 2015 and 2016 in Delft, Munich, NIST and Vienna were milestone achievements. They pushed quantum technology to the limit and paved the way for DIQKD and a quantum internet. They were the first successful implementations of the ideal experimental protocol described by John Bell in his 1981 masterpiece "Bertlmann's socks and the nature of reality" (https://cds.cern.ch/record/142461/files/198009299.pdf). Still, there is a vociferous but small community of proponents of local realism, who continue to grasp at the straws offered by some shortcomings of the 2015+ experiments. I'll discuss the loopholes in the loophole-free experiments, explain how the experimenters could have claimed much smaller p-values "for free" by using standard likelihood-based inference, but also relativise the meaning of a 25 standard deviation violation of local realism. I suggest that particle physicists have wisely settled on 5 standard deviations because of Chebyshev's inequality: P(|X − μ| ≥ 5σ) ≤ 1/5² = 0.04, just significant at the 5% level, whatever the underlying distribution.
https://arxiv.org/abs/2209.00702 "Optimal statistical analyses of Bell experiments"
https://arxiv.org/abs/2208.09930 "Kupczynski's contextual setting-dependent parameters offer no escape from Bell-CHSH"
It has long been realized that the mathematical core of Bell's theorem is essentially a classical probabilistic proof that a certain distributed computing task is impossible: namely, the Monte Carlo simulation of certain iconic quantum correlations. I will present a new and simple proof of the theorem using Fourier methods (time series analysis) which should appeal to probabilists and statisticians. I call it Gull's theorem since it was sketched in a conference talk many years ago by astrophysicist Steve Gull, but never published. Indeed, there was a gap in the proof.
The connection with the topic of this session is the following: though a useful quantum computer is perhaps still a dream, many believe that a useful quantum internet is very close indeed. The first application will be: creating shared secret random cryptographic keys which, due to the laws of physics, cannot possibly be known to any other agent. So-called loophole-free Bell experiments have already been used for this purpose.
Like other proofs of Bell's theorem, the proof concerns a thought experiment, and the thought experiment could also in principle be carried out in the lab. This connects to the concept of functional Bell inequalities, whose application in the quantum research lab has not yet been explored. This is again a task for classical statisticians to explore.
R.D. Gill (2022) Gull's theorem revisited, Entropy 2022, 24(5), 679 (11pp.)
https://www.mdpi.com/1099-4300/24/5/679
https://arxiv.org/abs/2012.00719
Statistical issues in Serial Killer Nurse cases (Richard Gill)
- In serial killer nurse cases, clusters of suspicious deaths or incidents are often associated with a particular nurse on duty. However, alternative explanations for such clusters are difficult to rule out given the low base rate of nurses committing murder.
- Statistical evidence plays a key role in these cases but can be misleading if not interpreted carefully. Characteristics of the hospital system and processes of gathering evidence can inadvertently influence statistical analyses.
- Close examination of data in one case found that statistics were selectively reported in ways that exaggerated the nurse's involvement, such as restricting time periods analyzed. Complete data sets have sometimes contradicted initial statistical impressions.
The debris of the ‘last major merger’ is dynamically young (Sérgio Sacani)
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosome separation at metaphase.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Phenomics-assisted breeding in crop improvement (IshaGoswami9)
The world population is increasing and will reach about 9 billion by 2050; together with climate change, this makes it difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics controlled by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
The binding of cosmological structures by massless topological defects (Sérgio Sacani)
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
ESPP presentation to EU Waste Water Network, 4 June 2024: “EU policies driving nutrient removal and recycling and the revised UWWTD (Urban Waste Water Treatment Directive)”
The Thematic Appreciation Test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
Deep Software Variability and Frictionless Reproducibility
vaxjo2023rdg.pdf
1. Richard D. Gill (Leiden University), 13 June 2023, QIP Växjö
Statistical analysis of [the] recent Bell experiments
“If your experiment needs statistics, you ought to have done a better experiment”
2. • RD Gill, Optimal Statistical Analyses of Bell Experiments, AppliedMath 2023, 3(2), 446-460; https://doi.org/10.3390/appliedmath3020023
• Storz, S., Schär, J., Kulikov, A. et al. Loophole-free Bell inequality violation with superconducting circuits. Nature 617, 265–270 (2023). https://doi.org/10.1038/s41586-023-05885-0
• Giustina, M. Superconducting qubits cover new distances. Nature 617 (7960), 254-256. https://doi.org/10.1038/d41586-023-01488-x
• I have promised Marian Kupczynski not to talk about: RD Gill & JP Lambare, Kupczynski’s Contextual Locally Causal Probabilistic Models Are Constrained by Bell’s Theorem, Quantum Rep. 2023, 5(2), 481-495; https://doi.org/10.3390/quantum5020032
Storz et al. (ETH Zürich), Nature 617, 265–270 (2023): Loophole-free Bell inequality violation with superconducting circuits
3. Richard D. Gill, 13 June 2023
On: “Loophole-free Bell inequality violation with superconducting circuits”
4. Loophole-free Bell inequality violation with superconducting circuits
Simon Storz (1), Josua Schär (1), Anatoly Kulikov (1), Paul Magnard (1,10), Philipp Kurpiers (1,11), Janis Lütolf (1), Theo Walter (1), Adrian Copetudo (1,12), Kevin Reuer (1), Abdulkadir Akin (1), Jean-Claude Besse (1), Mihai Gabureac (1), Graham J. Norris (1), Andrés Rosario (1), Ferran Martin (2), José Martinez (2), Waldimar Amaya (2), Morgan W. Mitchell (3,4), Carlos Abellan (2), Jean-Daniel Bancal (5), Nicolas Sangouard (5), Baptiste Royer (6,7), Alexandre Blais (7,8) & Andreas Wallraff (1,9)
Nature 617, 265–270 (2023). https://doi.org/10.1038/s41586-023-05885-0. Received: 22 August 2022; accepted: 24 February 2023; published online: 10 May 2023. Open access.

Superposition, entanglement and non-locality constitute fundamental features of quantum physics. The fact that quantum physics does not follow the principle of local causality can be experimentally demonstrated in Bell tests performed on pairs of spatially separated, entangled quantum systems. Although Bell tests, which are widely regarded as a litmus test of quantum physics, have been explored using a broad range of quantum systems over the past 50 years, only relatively recently have experiments free of so-called loopholes succeeded. Such experiments have been performed with spins in nitrogen–vacancy centres, optical photons and neutral atoms. Here we demonstrate a loophole-free violation of Bell’s inequality with superconducting circuits, which are a prime contender for realizing quantum computing technology. To evaluate a Clauser–Horne–Shimony–Holt-type Bell inequality, we deterministically entangle a pair of qubits and perform fast and high-fidelity measurements along randomly chosen bases on the qubits connected through a cryogenic link spanning a distance of 30 metres. Evaluating more than 1 million experimental trials, we find an average S value of 2.0747 ± 0.0033, violating Bell’s inequality with a P value smaller than 10^−108. Our work demonstrates that non-locality is a viable new resource in quantum information technology realized with superconducting circuits with potential applications in quantum communication, quantum computing and fundamental physics.

One of the astounding features of quantum physics is that it contradicts our common intuitive understanding of nature following the principle of local causality. This concept derives from the expectation that the causes of an event are to be found in its neighbourhood (see Supplementary Information section I for a discussion). In 1964, John Stewart Bell proposed an experiment, now known as a Bell test, to empirically demonstrate that theories satisfying the principle of local causality do not describe the properties of a pair of entangled quantum systems.

In a Bell test, two distinct parties A and B each hold one part of an entangled quantum system, for example, one of two qubits. Each party then chooses one of two possible measurements to perform on their qubit, and records the binary measurement outcome. The parties repeat the process many times to accumulate statistics, and evaluate a Bell inequality using the measurement choices and recorded results. Systems governed by local hidden variable models are expected to obey the inequality whereas quantum systems can violate it. The two underlying assumptions in the derivation of Bell’s inequality are locality, the concept that the measurement outcome at the location of party A cannot depend on information available at the location of party B and vice versa, and measurement independence, the idea that the choice between the two possible measurements is statistically independent from any hidden variables.

A decade after Bell’s proposal, the first pioneering experimental Bell tests were successful. However, these early experiments relied on additional assumptions, creating loopholes in the conclusions drawn from the experiments. In the following decades, experiments relying on fewer and fewer assumptions were performed, until loophole-free Bell inequality violations, which close all major loopholes simultaneously, were demonstrated in 2015 and the following years; see ref. 22 for a discussion.

In the development of quantum information science, it became clear that Bell tests relying on a minimum number of assumptions are not only of interest for testing fundamental physics but also serve as a key resource in quantum information processing protocols. Observing a violation of Bell’s inequality indicates that the system possesses non-classical correlations, and asserts that the potentially unknown …

Affiliations: (1) Department of Physics, ETH Zurich, Zurich, Switzerland. (2) Quside Technologies S.L., Castelldefels, Spain. (3) ICFO - Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, Castelldefels (Barcelona), Spain. (4) ICREA - Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain. (5) Institute of Theoretical Physics, University of Paris-Saclay, CEA, CNRS, Gif-sur-Yvette, France. (6) Department of Physics, Yale University, New Haven, CT, USA. (7) Institut quantique and Département de Physique, Université de Sherbrooke, Sherbrooke, Québec, Canada. (8) Canadian Institute for Advanced Research, Toronto, Ontario, Canada. (9) Quantum Center, ETH Zurich, Zurich, Switzerland. (10) Present address: Alice and Bob, Paris, France. (11) Present address: Rohde and Schwarz, Munich, Germany. (12) Present address: Centre for Quantum Technologies, National University of Singapore, Singapore, Singapore. e-mail: simon.storz@phys.ethz.ch; andreas.wallraff@phys.ethz.ch
5. • Evaluating more than 1 million experimental trials, we find an average S value of 2.0747 ± 0.0033, violating Bell’s inequality with a p-value smaller than 10^−108
• For the final Bell test with an optimal angle θ (see main text), we performed n = 2^20 Bell trials and obtained c = 796228 wins in the Bell game. With these values we find p ≤ 10^−108
Notice how close they are …
Two log10 p-values (747/33 is the z-value (S − 2)/s.e.(S) = 0.0747/0.0033):
> pnorm(747/33, lower.tail = FALSE, log.p = TRUE) / log(10)
[1] -113.0221
> pbinom(796228 - 1, 2^20, lower.tail = FALSE, prob = 3/4,
+   log.p = TRUE) / log(10)
[1] -108.6195
2^20 = 1,048,576
6. Counts N(a, b, x, y)

              x = +1, y = +1   x = +1, y = −1   x = −1, y = +1   x = −1, y = −1
a = 0, b = 0        100,529           31,780           29,926           99,965
a = 0, b = 1         30,638          101,342           96,592           33,131
a = 1, b = 0         94,661           30,018           35,565          102,060
a = 1, b = 1         96,291           29,186           32,104          104,788

TABLE SV. Raw counts of the individual occurrences for the final Bell test for fixed offset angle θ = π/4 with the most statistics (2^20 trials), presented in the main text.

The correlators:
10  3  3 10
 3 10 10  3
10  3  3 10
10  3  3 10
(In tens of thousands, rounded)
8.
              a = 0, b = 0   a = 0, b = 1   a = 1, b = 0   a = 1, b = 1
x = +1, y = +1      100529          30638          94661          96291
x = +1, y = −1       31780         101342          30018          29186
x = −1, y = +1       29926          96592          35565          32104
x = −1, y = −1       99965          33131         102060         104788
Zürich, transposed

              a’ = 0, b = 0  a’ = 0, b = 1  a’ = 1, b = 0  a’ = 1, b = 1
x = +1, y = +1       94661          96291         100529          30638
x = +1, y = −1       30018          29186          31780         101342
x = −1, y = +1       35565          32104          29926          96592
x = −1, y = −1      102060         104788          99965          33131
Zürich, transposed & Alice’s setting flipped
10. [Schematic of one trial (Bell’s figure 7): binary inputs and binary outputs on each side, with time on the vertical axis.]
Distance (left to right) is so large that a signal travelling from one side to the other at the speed of light takes longer than the time interval between input and output on each side.
One “go = yes” trial has binary inputs and outputs; model as random variables A, B, X, Y.
Image: figure 7 from J.S. Bell (1981), “Bertlmann’s socks and the nature of reality”
17. Statistical analysis (Classical method). Reduce data to 16 counts N(a, b, x, y)
• Suppose multinomial distribution, condition on 4 counts N(a, b)
• Compute the CHSH statistic Ŝ and estimate its standard deviation in the usual way (four independent binomial counts with variance estimated by plug-in)
• Compare the z-value = (Ŝ − 2) / s.e.(Ŝ) with N(0, 1); see the sketch after this list
• In some circumstances one would prefer Eberhard’s J
• Are there more possibilities? Yes: a 4-dimensional continuum of alternatives. Why not just pick the best???
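A minimal R sketch of this classical analysis (my own illustration, not the authors’ code), applied to the Table SV counts shown above; the sign convention, in which ρ(a = 0, b = 1) enters S with a minus sign, is read off from that table:

# Table SV counts: columns are the setting pairs (a,b) = (0,0), (0,1), (1,0), (1,1),
# rows are the outcome pairs (x,y) = (+1,+1), (+1,-1), (-1,+1), (-1,-1).
N <- matrix(c(100529,  31780,  29926,  99965,
               30638, 101342,  96592,  33131,
               94661,  30018,  35565, 102060,
               96291,  29186,  32104, 104788),
            nrow = 4,
            dimnames = list(c("++", "+-", "-+", "--"),
                            c("00", "01", "10", "11")))
n_ab  <- colSums(N)                                    # trials per setting pair; sums to 2^20
rho   <- (N["++", ] - N["+-", ] - N["-+", ] + N["--", ]) / n_ab  # four correlation estimates
signs <- c(1, -1, 1, 1)                                # the (0,1) pair is the anticorrelated one
S  <- sum(signs * rho)                                 # CHSH estimate, about 2.0747
se <- sqrt(sum((1 - rho^2) / n_ab))                    # plug-in standard error, about 0.0033
z  <- (S - 2) / se                                     # excess over the local-realist bound 2
c(S = S, se = se, z = z)

The resulting z-value, roughly 22, is (after rounding S and its standard error) the 747/33 used in the pnorm calculation reproduced earlier.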
18. Statistical analysis (New method 1). Reduce data to 16 counts N(a, b, x, y)
• Suppose multinomial distribution, condition on 4 counts N(a, b)
• We expect p(x | a, b) does not depend on b, and p(y | a, b) does not depend on a
• If so, 4 empirical “deviations from no-signalling” are noise (mean = 0). That noise is generally correlated with the noise in the CHSH statistic Ŝ
• Reduce noise in Ŝ by subtracting the prediction of its statistical error, given the statistical errors in the no-signalling equalities: 2SLS with plug-in estimates of variances and covariances; see the sketch after this list
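One way to carry out this variance reduction (a sketch under my own parametrisation, not the paper’s exact 2SLS code) is a regression / control-variate adjustment: estimate the joint plug-in covariance of Ŝ and the four no-signalling deviations, then subtract the predicted component of the noise. It reuses the matrix N from the previous sketch.

fx   <- c(+1, +1, -1, -1)                 # sign of x in rows (++, +-, -+, --)
fy   <- c(+1, -1, +1, -1)                 # sign of y
Fmat <- cbind(x = fx, y = fy, xy = fx * fy)

# Per setting pair: empirical means of (x, y, xy) and their plug-in covariance matrix
stat_block <- function(counts) {
  n <- sum(counts); p <- counts / n
  m <- colSums(p * Fmat)
  V <- (t(Fmat) %*% (p * Fmat) - outer(m, m)) / n
  list(m = m, V = V)
}
blocks <- lapply(1:4, function(j) stat_block(N[, j]))   # order (a,b) = 00, 01, 10, 11
mu     <- unlist(lapply(blocks, `[[`, "m"))             # 12 means, 3 per setting pair

# Linear maps picking out the 4 no-signalling deviations d and the CHSH statistic S
idx <- function(block, k) (block - 1) * 3 + k           # k: 1 = x-mean, 2 = y-mean, 3 = xy-mean
A <- matrix(0, 5, 12)
A[1, c(idx(1, 1), idx(2, 1))] <- c(1, -1)               # E(x | 0,0) - E(x | 0,1)
A[2, c(idx(3, 1), idx(4, 1))] <- c(1, -1)               # E(x | 1,0) - E(x | 1,1)
A[3, c(idx(1, 2), idx(3, 2))] <- c(1, -1)               # E(y | 0,0) - E(y | 1,0)
A[4, c(idx(2, 2), idx(4, 2))] <- c(1, -1)               # E(y | 0,1) - E(y | 1,1)
A[5, c(idx(1, 3), idx(2, 3), idx(3, 3), idx(4, 3))] <- c(1, -1, 1, 1)   # CHSH signs

Vfull <- matrix(0, 12, 12)                              # setting pairs are independent blocks
for (j in 1:4) Vfull[idx(j, 1):idx(j, 3), idx(j, 1):idx(j, 3)] <- blocks[[j]]$V
C   <- A %*% Vfull %*% t(A)                             # plug-in Cov of (d1, ..., d4, S)
est <- as.vector(A %*% mu)                              # the statistics (d1, ..., d4, S)

beta   <- solve(C[1:4, 1:4], C[1:4, 5])                 # regression of the S-noise on the d-noise
S_adj  <- est[5] - sum(beta * est[1:4])                 # noise-reduced CHSH estimate
se_adj <- sqrt(C[5, 5] - sum(C[1:4, 5] * beta))         # its (smaller) plug-in standard error
c(S = est[5], se = sqrt(C[5, 5]), S_adj = S_adj, se_adj = se_adj)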
19. Statistical analysis (New method 2). Reduce data to 16 counts N(a, b, x, y)
• Suppose multinomial distribution, condition on 4 counts N(a, b)
• Estimate the 4 sets of 4 tetranomial probabilities p(x, y | a, b) by maximum likelihood assuming no-signalling (4 linear equalities):
  (a) also assuming local realism (8 linear inequalities), and
  (b) without also assuming local realism
• Test the null hypothesis H0: local realism against H1 = ¬H0 using Wilks’ log likelihood ratio test. Under H0, asymptotically,
  −2 ( max{ log lik(p) : p ∈ H0 } − max{ log lik(p) : p ∈ H0 ∪ H1 } ) ~ ½ χ²(1) + ½ χ²(0);
  see the snippet after this list
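For concreteness: once the deviance (the −2 log likelihood ratio above) has been computed, the boundary mixture turns into a p-value as follows. This is only an illustration; the value of dev below is a placeholder, not a number from the experiment.

dev <- 500   # placeholder deviance, NOT a value computed from the data
# Half the chi-square(1) tail; the chi-square(0) half contributes 0 for dev > 0,
# and the p-value is 1 when the unconstrained fit already satisfies local realism.
pchisq(dev, df = 1, lower.tail = FALSE) / 2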
20. Statistical analysis (Bell game). Reduce data to 16 counts N(a, b, x, y)
• Suppose a 16-nomial distribution, i.e., only the grand total = 2^20 is fixed
• Suppose all p(a, b) = 0.25
• Use the martingale test (“Bell game”)
• Compute N(= | 1, 1) + N(= | 1, 2) + N(= | 2, 1) + N(≠ | 2, 2); compare to Bin(N, 3/4); see the sketch after this list
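A short R sketch, again using the matrix N defined above, reconstructing the Bell-game win count and the binomial tail probability. Identifying the slide’s setting pair “(2, 2)” with (a, b) = (0, 1), the anticorrelated pair in this data set, is my reading of the table, not a statement from the paper.

wins <- sum(N[c("++", "--"), c("00", "10", "11")]) +    # equal outcomes win on three pairs
        sum(N[c("+-", "-+"), "01"])                     # unequal outcomes win on the fourth
wins                                                    # 796228, as quoted on slide 5
# One-sided binomial p-value against Bin(2^20, 3/4), reported on the log10 scale;
# this reproduces the pbinom computation shown earlier (about -108.6).
pbinom(wins - 1, 2^20, prob = 3/4, lower.tail = FALSE, log.p = TRUE) / log(10)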
21. Comparison of the 4 p-values
• The 4 tests are asymptotically equivalent if their model assumptions are satisfied and the true probabilities p(x, y | a, b), p(a, b) have the nice symmetries
Theory: the 2×2 tables of p(x, y | a, b) for the four setting pairs follow the pattern
  e f     e f     e f     f e
  f e     f e     f e     e f
g g g g
23. The QM stuff. Actually, a tripartite system: qutritA ⨂ channel ⨂ qutritB
• Qutrits’ bases: | gA ⟩, | eA ⟩, | fA ⟩ ; | gB ⟩, | eB ⟩, | fB ⟩
• Ground state g, and first two excited states e, f
• We get qutrits AB into the state ( | eA, gB ⟩ + | gA, fB ⟩ ) / √2 + noise
• The channel has states | 0 ⟩, | 1 ⟩, | 2 ⟩, …
• First create a Schrödinger cat in A, a superposition of the two excited states
• One of the excited cats (f) emits a photon (microwave pulse) into the channel and returns to the ground state
• The photon interacts with qutrit B and puts it into excited state f
• Attenuation of the microwave plus interaction with the environment -> noise
24. The QM stuff
A tripartite system: qutritA ⨂ channel ⨂ qutritB
Create a Schrödinger cat in qutrit A (superposition of the two excited states e, f); actually done in two steps: g ➝ e ➝ (e + f) / √2
Cat “f” moves from qutrit A into the channel
Cat “f” moves from the channel into qutrit B

| gA, 0, gB ⟩ ➝ ( | eA, 0, gB ⟩ + | fA, 0, gB ⟩ ) / √2 ➝ ( | eA, 0, gB ⟩ + | gA, 1, gB ⟩ ) / √2 ➝ ( | eA, 0, gB ⟩ + | gA, 0, fB ⟩ ) / √2
25. Lost in math: two over-complex solutions to the Bell theorem quandary
• Tim Maudlin, Sabine Hossenfelder, Jonte Hance; Gerard ’t Hooft
• Restrict QM to a countable dense subset of states such that a Bell experiment is impossible: at most three of the four experiments (αi, βj) “exist”
• Paul Raymond-Robichaud
• Careful definitions of QM, distinguishing two levels (maths, empirical observations), make it both local and realistic
• The catch: probability distributions of measurement outcomes are in R-R’s model, individual outcomes are not
26. Physics fatwas (argumentum ab auctoritate)
• David Oaknin: physicists are forbidden to look at α − β
• Karl Hess, Hans de Raedt: physicists must condition on both photons being observed (the “photon identification loophole”)
• Marian Kupczynski: physicists must use 4 disjoint probability spaces for the 4 sub-experiments
• I am not a physicist! As a mathematician, I do what I like.
• In physics there are no moral constraints on thought experiments
• Counterfactual reasoning is essential to (statistical) science, to law, to morality
• As a statistician, I know that “all models are wrong, some are useful”
27. A simple solution: Belavkin’s “eventum mechanics”
• Convert a bug into a feature
• The “Heisenberg cut” is for real; where it should be placed is up to the user of the QM framework
• It must be compatible with the underlying unitary evolution of the world: the past consists of particles with definite trajectories, the future consists of waves of possibilities. Born’s rule gives us the probability law of the resulting stochastic process by compounding the conditional probabilities of the “collapse” in each next infinitesimal time step, given the history so far
• A simple solution to what? It’s a mathematical solution to the measurement problem; a mathematical resolution of the Schrödinger cat paradox